Authors sue AI startup Anthropic, accusing it of using pirated books for chatbot training.

In a first-of-its-kind lawsuit, a group of authors has accused AI startup Anthropic of using pirated copies of copyrighted books to train its chatbot, Claude. The complaint alleges "large-scale theft" and argues that the company, which markets itself as a responsible, safety-focused AI developer, has undermined its own claims. The case joins a growing number of lawsuits against developers of AI large language models. Tech companies maintain that training AI models on copyrighted works is protected by the "fair use" doctrine; the authors dispute this, alleging that Anthropic trained Claude on The Pile, a dataset that includes a large collection of pirated books.

August 20, 2024
