Dechert’s Brenda Sharton is no stranger to litigating issues at the edge of technological innovation.
In the 1990s, while on maternity leave, she read about the internet attracting millions of users and soon became an expert on its intersection with privacy law.
In the past couple of years, she has had a sense of déjà vu: she has won the dismissal of two of the first US lawsuits brought against a generative AI company, all while getting up to speed on the nascent technology and explaining it to the courts.
Sharton, managing partner of Dechert’s Boston office and chair of the firm’s cyber, privacy and AI practice, points out that artificial intelligence “is not something new” and has been developed over more than a decade, mostly behind the scenes.
But, since the arrival of the latest wave of generative AI, led by OpenAI’s ChatGPT, Sharton and a handful of specialists are having to defend companies that now face sprawling copyright and privacy claims, which could hamper the emerging industry.
Sharton’s most high-profile AI case was a proposed class action against her client Prisma Labs, the maker of popular photo editing tool Lensa. As she puts it, the plaintiff had in effect alleged that “anyone in Illinois who ever uploaded a photo to the internet” had been harmed by the software allegedly being trained on images scraped from the web without their explicit consent.
But a federal judge ruled in August that the plaintiff had not shown “concrete and particularised” injury and could not prove their images were in the vast data set used. “Judges have said you’re going to have to explain what was inaccurate,” Sharton says, as well as “what was done that violated whatever existing law”.
In other instances, the limits of what AI businesses term “fair use” of copyrighted material are still to be established.
Andy Gass, partner at Latham & Watkins, is defending OpenAI against copyright claims brought by publishers including the New York Times, and art platform DeviantArt in similar litigation. He is also defending rival AI venture Anthropic in lawsuits brought by music publishers alleging copyright infringement.
Gass says the slew of cases currently being heard is “both fascinating and quite important”, although he cautions against interpreting these initial decisions as predictive of future AI legal battles.
“The issues that we are seeing and dealing with now are, in some sense, foundational issues,” he says. “But they are going to be very different than the ones that are presented three years from now, five years from now, or ten years from now.”
Gass and his team, who had been working on generative AI questions well before ChatGPT was released to great fanfare in late 2022, have embedded lawyers with the technologists at some of the companies they represent. They delve into the details of how the models are being trained so they can analyse the copyright issues that may arise.
“[AI litigation] involves a very novel technology, but very well established principles of law,” Gass says. “The challenge, as an advocate, is explaining that to the judges.”
Sharton says exploring the details with the courts is one of the most demanding aspects of being an AI lawyer. “You have to do a tremendous amount of educating of the judges,” she says. “It’s a big learning curve for them as well. And they . . . don’t have the luxury of specialising [in particular subject matters] like lawyers do.”
Warrington Parker, managing partner of Crowell & Moring’s San Francisco office, is representing defendant ROSS Intelligence, an AI-powered legal tech company, in one of the first generative AI cases to allege copyright infringement — filed by Thomson Reuters in May 2020.
Parker argued before Judge Stephanos Bibas in Delaware this month, in a lawsuit that has yet to be decided. He is not sure the judge is “convinced yet” by his arguments, including his contention that ROSS’s use of the training data serves a public benefit and should be considered fair use. “But I think he is interested.”
Aside from the judges, there is the matter of the general public. While none of the existing lawsuits has yet gone to a jury trial — and some doubt that any will, given the complexity — certain lawyers defending AI clients fear a negative public perception of AI could taint any panel’s view.
For a jury, “the idea that you took someone else’s work . . . is going to be an issue”, Parker says — although he does not accept that characterisation.
The question of how the incoming Trump administration will regulate the technology will be particularly pertinent to firms with AI clients.
If the new administration decides to give companies freer rein, plaintiffs’ lawyers are not “going to be able to piggyback on, say, [Federal Trade Commission] actions, which they typically do,” Sharton says.
Moreover, the outcome of existing cases, even if some are lost, may not be enough to restrict the sector’s growth. “If it’s a matter of damages only, I think some actors will pay those damages and continue,” Parker says. “In other words, it is the cost of doing the business.”
For now, there are more anecdotal signs of the judiciary paying attention to generative AI’s capabilities. During a case management conference earlier this year, 90-year-old Judge Alvin Hellerstein showed his personal interest in the topic. The legendary judge “took out his iPad and played a song that had been generated by [an AI] tool that was sort of about his career on the bench”, Gass says.
Even the less adventurous judges will end up with a stronger grasp of the technology, Gass predicts. Extending the analogy to the early internet age, he says, “we are still in the dial-up modem phase of the trajectory of these tools”.