Landscape of Generative Artificial Intelligence Litigation


WHAT YOU NEED TO KNOW IN A MINUTE OR LESS

Beginning in 2023, courts across the United States have grappled with a wave of lawsuits challenging the legality and use of generative artificial intelligence (AI) systems and tools. While courts have yet to definitively rule on the myriad questions raised by the lawsuits, these cases may well signal what’s on the horizon for generative AI litigation. 

This three-part series will discuss: 

  • Current trends in litigation regarding generative AI and strategies to mitigate litigation risk for companies considering deploying or currently using such systems or tools;
  • Privacy, consumer protection, and other generative AI-specific litigation concerns that may impact business; and
  • Generative AI’s impact on record-keeping and document storage, use, and exchange in litigation.

In a minute or less, here is what you need to know about the current trends in generative AI litigation.

What Is Generative Artificial Intelligence?

Generative AI refers to a type of AI that generates new content based on the patterns it has learned from its training[1] and in response to a user’s prompt. This new content can be text, code, images, audio, video, or a combination of these outputs.

Many generative AI systems are built on large language models (LLMs). LLMs are designed to understand context, infer meaning, and generate coherent, contextually appropriate responses. LLMs give generative AI systems the ability to interact with users through natural language inputs and outputs.
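To make the prompt-and-response pattern concrete, the sketch below shows a minimal natural-language exchange with an LLM using the OpenAI Python client. It is illustrative only: the model name is a placeholder, and a comparable API from any other provider would work the same way.

    # Minimal sketch: a natural-language prompt to an LLM and its generated reply.
    # Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment
    # variable; the model name below is an illustrative placeholder.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {"role": "user",
             "content": "Draft a two-sentence summary of our refund policy."},
        ],
    )

    # The reply is newly generated text, shaped by patterns learned in training.
    print(response.choices[0].message.content)

The same interface accepts prompts for text, code, and other outputs; what comes back is generated content, not retrieved documents.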

Generative AI promises transformative benefits and opportunities for businesses across all sectors. Whether to enhance creativity and innovation, improve efficiency, personalize or customize experiences, reduce costs, or analyze data to make better decisions, many companies have either deployed generative AI systems in their businesses or are considering how best to leverage this technology.

What Has Been the Focus of the Initial Generative AI Litigation?

Early litigation regarding generative AI has centered primarily on the data used to train those systems. These cases have been brought against generative AI developers by authors, artists, publishers, and consumers, and tend to focus on the plaintiffs’ intellectual property or privacy rights.[2] These suits are in the early stages of litigation and include claims for copyright infringement, invasion of privacy, violation of consumer protection acts, theft, and misappropriation of data.[3]

Although courts have expressed skepticism about plaintiffs’ claims in some early rulings, these cases continue to work their way through the legal process and still have the potential to significantly impact the rapidly growing market of generative AI systems and tools. 

For example, in one of the earliest and best-known cases in this space, Andersen v. Stability AI Ltd.,[4] a group of artists alleges that certain image-generating AI tools used their online artwork for training purposes without their consent, and that the images those tools create in response to user prompts therefore infringe the artists’ copyrighted works, constitute unfair competition, and breach contractual obligations.

Initially, the Court dismissed nearly all of the claims, albeit with leave to file an amended complaint addressing the Court’s concerns. In that decision, the Court noted that the final images produced by the AI did not appear to copy any particular artist’s work and found that, because the AI was trained on billions of online images, it was unlikely that the tool copied any individual artist or harmed any artist, individually, in a meaningful way. Plaintiffs subsequently filed an amended complaint, which added new parties and a new claim for unjust enrichment.

Earlier this month, in a tentative ruling on the defendants’ motions to dismiss the amended complaint, the Court indicated that, this time, it would allow plaintiffs’ copyright infringement claims to proceed on two theories: 

  1. AI-developer defendants used plaintiffs’ images for training purposes without permission; and
  2. Plaintiffs’ artwork is “stored as mathematical information” in the generative AI models themselves.[5]

The Court noted that, given the dispute between the parties regarding how the generative AI systems operate, plaintiffs’ claims should be “tested at summary judgment.”[6]

Although it was inclined to allow some claims to proceed, the Court tentatively dismissed plaintiffs’ claim under the Digital Millennium Copyright Act, their breach of contract allegations, and their unjust enrichment claim (with leave to re-allege the unjust enrichment claim).

What Litigation Risks Are on the Horizon?

A shift in litigation focus from developers to users has already begun and is likely to accelerate as adoption of generative AI systems and tools becomes more commonplace. As more companies deploy and use generative AI in their businesses, litigation risk for these users will only continue to grow. In addition to claims similar to those alleged against generative AI developers, claims against generative AI users have revolved, and will continue to revolve, around allegations that plaintiffs suffered injury, whether from a company’s (or its personnel’s) misuse of a generative AI system or tool, or from an autonomous error by the system or tool that the company (or its personnel) failed to catch or correct. 

Companies using generative AI may also face claims for failing to adequately protect, or failing to obtain appropriate consent to use, consumers’ personal data that makes its way into the generative AI systems or tools the company uses.

What Steps Can Companies Take to Protect Themselves from Liability?

Companies can, however, take steps when deploying and using generative AI in their businesses to help reduce the risk of liability in this next wave of litigation. These steps include:

  1. Understanding the system or tool that is being deployed. Companies should invest time and effort, on a cross-disciplinary basis, to understand the capabilities, limitations, and intended use cases of the generative AI system or tool they intend to deploy. Importantly, companies should pay close attention to the ownership and sources of training data, fine-tuning data, prompts, and outputs. 
  2. Securing adequate and appropriate contractual protections. Companies should carefully review and negotiate the contractual terms in licenses and other agreements with generative AI system providers to address risks specific to the generative AI system or tool under consideration. These risks include, among others, data protection and privacy, and liability for outputs. Representations and warranties, indemnities, and limitations of liability should be closely examined and tailored to the system or tool and its intended use cases. 
  3. Creating clear acceptable use policies (AUPs) and guidelines for personnel. Clear AUPs and guidance can help ensure that generative AI is used appropriately, responsibly, ethically, and in compliance with applicable laws and regulations. Setting policies specific to generative AI systems and tools gives personnel clear boundaries and expectations, which can help limit the risk of liability.
  4. Establishing a risk management framework, which includes continuously assessing, monitoring, and managing potential risks. Companies should adopt a proactive framework to identify and address potential legal, operational, and reputational risks associated with the use of generative AI. Regular audits and updates to this risk management framework keep mitigation strategies aligned with new legal and technological developments.
  5. Educating and training personnel on safe and responsible use of generative AI. Regular, ongoing training programs can familiarize employees with best practices for using generative AI, as well as the risks of its use. These programs can mitigate risk by preventing inadvertent or unintended misuse and by encouraging safe and responsible use.

The ultimate resolution of many of the legal questions raised in this current litigation remains to be seen. Companies exploring or deploying generative AI should arm themselves with an understanding of the current state of litigation, as well as of the technology itself, to ensure that any use includes appropriate safeguards against potential liability. 

Footnotes

[1] “Training” is the process of analyzing data to create algorithms and patterns that are then used to make predictions about data and inputs with similar characteristics in the future. Generative AI systems are trained on vast amounts of data, much of which has been gathered from the internet.

[2] An earlier alert provided an initial review of the cases filed as of late 2023. Christopher J. Valente, Michael J. Stortz, Amy Wong, Peter E. Soskin, Michael W. Meredith, Recent Trends In Generative Artificial Intelligence Litigation In The United States, K&L Gates (5 September 2023).

[3] See, e.g., Doe v. GitHub, Inc., Case No. 4:22-cv-06823 (N.D. Cal. 2022); Andersen v. Stability AI Ltd., Case No. 3:23-cv-00201 (N.D. Cal. 2023); P.M. v. OpenAI LP, Case No. 3:23-cv-3199 (N.D. Cal. 2023); Tremblay v. OpenAI, Inc., Case No. 4:23-cv-03223 (N.D. Cal. 2023); Silverman v. OpenAI, Inc., Case No. 3:23-cv-03416 (N.D. Cal. 2023); Kadrey v. Meta Platforms, Inc., Case No. 3:23-cv-03417 (N.D. Cal. 2023); J.L. v. Alphabet Inc., Case No. 3:23-cv-03440 (N.D. Cal. 2023); A.T. v. OpenAI LP, Case No. 3:23-cv-04557 (N.D. Cal. 2023); Getty Images (US), Inc. v. Stability AI, Inc., Case No. 1:23-cv-00135 (D. Del. 2023); Concord Music Group, Inc. v. Anthropic PBC, Case No. 3:23-cv-01092 (M.D. Tenn. 2023).

[4] Case No. 3:23-cv-00201 (N.D. Cal. 2023).

[5] Andersen et al. v. Stability AI Ltd. et al., Case No. 3:23-cv-00201, Dkt. No. 193, at *1 (N.D. Cal. Jan. 13, 2023). 

[6] Id. at *2.


