Code of Law vs. Law of Code

OpenAI, Microsoft and Meta are involved in several billion-dollar lawsuits over privacy violations. We discuss the implications for Generative AI, data ethics and corporate accountability: how it started and how it’s going.

The lawsuits, filed by separate groups of authors and internet users, accuse OpenAI, Meta and Microsoft of incorporating books and personal data into the datasets used to train their AI models without proper authorization.

How it started

At the end of June 2023, OpenAI and Microsoft found themselves embroiled in a significant legal battle. Class action lawsuits seeking several billion dollars in damages allege that the tech giants unlawfully harvested vast amounts of private data from internet users, without their consent, to train their AI models. As the cases unfold in a California federal court, they shed light on critical aspects of privacy, AI ethics, corporate accountability, and the implications for the broader landscape of Generative AI.

The Allegations and Implications

The lawsuits filed at the end of June allege, among other things, that OpenAI secretly collected approximately 300 billion words from the internet without registering as a data broker or obtaining proper consent. This data reportedly included names, contact details, email addresses, payment information, social media content, chat logs, usage data, analytics, and cookies.

A core concern raised by the plaintiffs is that OpenAI’s AI products, including ChatGPT, were developed using stolen data, leading to what they term unjust enrichment. Beyond the financial claims, the lawsuit emphasizes the need for transparency, data ethics, and the right of users to control their personal information.

How it’s going

In its legal response filed on August 28, OpenAI denied the core allegations of copyright infringement. OpenAI’s argument revolves around the assertion that the text generated by ChatGPT is not similar enough to the authors’ original works to constitute copyright infringement. The company contends that ChatGPT’s output transforms and repurposes the material, thus qualifying as fair use under US copyright law.

While OpenAI is trying to have specific claims dismissed, it has indicated a willingness to address the first claim of direct copyright infringement in court, aiming to clarify its legal standing in the matter.

The legal dispute highlights the complex interplay between AI technology and copyright law, as well as the broader implications of AI-generated content: its relationship with existing intellectual property frameworks on the one hand and the question of enrichment on the other. As the lawsuit progresses, it adds to the ongoing dialogue about the legal boundaries and responsibilities of AI developers who use copyrighted material to train AI models.

Additionally, the United States Copyright Office has recently solicited public input on the intersection of AI and copyright law, reflecting the growing need to address legal ambiguities in this evolving field.

Universality of Privacy Concerns

The significance of this lawsuit extends beyond financial compensation. Carissa Véliz, Associate Professor at the University of Oxford and author of “Privacy is Power,” highlights that the lawsuit underscores the universal nature of privacy concerns. Véliz also points to the privacy concerns and restrictions on ChatGPT use in Italy; the current case underscores the widespread global unease about unchecked data harvesting and possible misuse by tech giants.

AI Ethics and Corporate Accountability

The lawsuit brings into sharp focus the ethical dimensions of AI development and deployment in general. OpenAI and Microsoft are pioneers in Generative AI, and their practices set the tone for the industry. The allegations underscore the necessity for tech companies, and all other institutions developing AI, to ensure ethical data collection practices, prioritize user consent, and be accountable for the data they use, says Véliz (the link to her analysis can be found in the sources at the bottom of the page).

The case draws parallels to Clearview AI, which faced legal action for scraping data for facial recognition purposes. It demonstrates that companies like OpenAI should not regard personal data as an unrestricted source for innovation and profit. Nathan Freed Wessler, Deputy Director of the ACLU’s Speech, Privacy, and Technology Project, stated, “Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profits.” This sentiment resonates with the current lawsuit, reflecting the growing demand for ethical boundaries in AI development.

The Road Ahead

The implications of this lawsuit are manifold. If the plaintiffs prevail, OpenAI and Microsoft may be required to compensate users, disclose their data collection practices, and offer the ability to opt out of data collection. Furthermore, the case might lead to a broader discourse on the ethical use of AI and the need for tech companies to align with societal norms and laws, rather than expecting society to adapt to their practices. Another consequence may be that OpenAI, Meta, Microsoft and others have to discard their current models and retrain them from scratch.

The Evergreen Challenge: Balancing Innovation and Ethics

Critics might argue that legal actions like this could stifle innovation. The lawsuits counter this perspective by asserting that technical advancements should not come at the expense of privacy and democracy. AI can flourish without compromising individual rights or democratic principles.

As the lawsuit unfolds, it marks a crucial moment for the tech industry to rethink its approach. The outcome will likely set a precedent for how AI development and data usage are governed. Ultimately, it’s a call for tech companies to adapt their practices to the values and rights of society.

Possible Claim for Profit Forfeiture in Personality Rights

In his new book “Claim for Profit Forfeiture in Personality Rights”, Joachim Pierer addresses the legal aspects of personality rights in Austria, as lawyer Wilhelm Milchrahm points out in his MS Legal Blog (see link below). Pierer discusses both the obligation to reimburse saved expenses and the obligation to surrender the profits obtained. He also addresses the overarching question of whether a wrongful infringer may retain a portion of the profits if they contributed to generating those profits through their own efforts. After providing a clear overview of the existing scholarly opinions and current case law, the author proposes a new legal concept to resolve this distribution issue: “redliches Alternativverhalten” [German original], which translates to “honest alternative behavior”.

The question of whether a wrongful infringer should be allowed to retain a portion of profits they contributed to through their own efforts raises an interesting ethical and legal dilemma. This aspect of Pierer’s analysis could spark broader discussions in legal circles in other jurisdictions as well. The idea of “honest alternative behavior” as a proposed solution could offer a new perspective on addressing distribution issues in cases of personality rights infringement. Innovative approaches like this one often have the potential to inspire legal thought and practice beyond their country of origin as other legal systems encounter similar challenges.


Sources:
Reuters, August 29, 2023: OpenAI asks court to trim authors’ copyright lawsuits
Reuters, June 29, 2023: Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot
Carissa Véliz, Associate Professor, University of Oxford and author of “Privacy is Power” (The Economist Book of the Year), July 1, 2023, analysis thread on Twitter
Vice, June 29, 2023: OpenAI and Microsoft Sued for $3 Billion Over Alleged ChatGPT ‘Privacy Violations’
Class Action Lawsuit against OpenAI and Microsoft, June 28, 2023, Case 3:23-cv-03199-JCS (157 pages)
OpenAI motion to dismiss claims, August 28, 2023, Case 3:23-cv-03223-AMO (36 pages)
LLM Litigation case updates: https://llmlitigation.com/case-updates.html
The New York Times, May 9, 2022: Clearview AI settles suit and agrees to limit sales of facial recognition database
Ars Technica, August 17, 2023: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over
MS Legal Blog by Wilhelm Milchrahm on the book Claim for Profit Forfeiture in Personality Rights by Joachim Pierer [in German]
