Business

Authors file lawsuit against AI startup Anthropic over copyright infringement

Three authors have filed a class-action lawsuit against artificial intelligence startup Anthropic, alleging the company used pirated copies of their work to train its chatbot Claude.

While it is the first suit of its kind against Anthropic — a big player in the AI industry that has raised more than $7 billion over the past year — it joins a growing pile of copyright challenges targeting AI chatbots.

Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson seek to represent authors of fiction and nonfiction works that were allegedly stolen and used without pay by Anthropic.

Three authors have filed a class action lawsuit against AI startup Anthropic, alleging the company stole copyrighted work to train its chatbot. AP

The writers claim Anthropic — backed by venture capital firms like Menlo Ventures, which pledged a $750 million investment — committed “large-scale theft” by using copyrighted materials to train Claude without permission.

Similar lawsuits have piled up against OpenAI and its chatbot ChatGPT, but this is the first suit against the smaller San Francisco-based startup.

Anthropic was co-founded by siblings Dario and Daniela Amodei, who both worked at OpenAI prior to starting their own company.

Anthropic has largely been viewed as more safety-conscious than its larger, well-known predecessor.

It gained this image when co-founder Dario Amodei said he refrained from releasing Claude – even though he could have released it before OpenAI’s wildly popular ChatGPT – over safety concerns.

Claude has been used to generate emails, summarize documents and interact with human users. But now it is facing the same claims of copyright infringement as its forebear.

The lawsuit filed Monday in a San Francisco federal court claims Anthropic has “made a mockery of its lofty goals” by using pirated copies of authors’ work to train its chatbot.

The suit claimed Anthropic built a multibillion-dollar business off the backs of authors, stealing copyrighted books and feeding them into AI models to help the chatbot create human-like messages.

Anthropic gained a reputation as a safer AI startup after its founder said he refrained from releasing the chatbot earlier over safety concerns. REUTERS

“It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works,” the lawsuit said.

California and New York AI startups have been hit with a growing number of copyright infringement lawsuits from authors, musicians, producers and other artists.

The Authors Guild filed a lawsuit against Microsoft-backed OpenAI last September on behalf of famous writers including John Grisham, Jodi Picoult and “Game of Thrones” author George R.R. Martin, accusing the startup of illegally using published works to train its chatbot.

Anthropic’s chatbot Claude has been used to generate emails, summarize documents and interact with human users. Anthropic

OpenAI is facing lawsuits from media giants as well, including The New York Times, Chicago Tribune and Mother Jones.

The tech startups have argued they are protected under the “fair use” doctrine, which allows for limited use of copyrighted materials for teaching or when transforming the work into something new.

But the lawsuit argued that the copyright infringement could not be considered “fair use” because the chatbot is not being “taught” like a real human.