Disney claimed that Midjourney's artificial image generator was trained on films such as The Lion King.
Maximum Film/Alamy
In the three years since the release of ChatGPT, OpenAI's generative AI chatbot, huge changes have occurred in every area of our lives. But one area that hasn't changed—or at least is still trying to maintain pre-AI norms—is copyright law enforcement.
It's no secret that leading artificial intelligence companies build their models by scraping data, including copyrighted material, from the internet without prior permission. Major copyright holders have hit back this year with a series of copyright infringement lawsuits against AI companies.
The most high-profile case was filed by Disney and Universal in June. Both studios alleged in the lawsuit that Midjourney's AI image generator was trained on their intellectual property, allowing users to create images that “explicitly incorporate and copy famous Disney and Universal characters.”
The case is still ongoing. Midjourney responded in August, arguing that “the limited monopoly granted by copyright must give way to fair use,” which would allow AI companies to train their models on copyrighted works because the results are transformative.
Midjourney's fighting words highlight that the copyright argument is not as simple as it may seem at first glance. “Many people thought that copyright would be the silver bullet that would kill AI, but it turned out that this was not the case,” says Andres Guadamuz at the University of Sussex in the UK. Guadamuz says he is surprised by how little impact copyright has had on the progress of AI companies.
This is despite some governments intervening in the debate. In October, the Japanese government officially asked OpenAI, the company behind the AI video generator Sora 2, to respect the intellectual property of Japanese culture, including manga and popular video games such as those released by Nintendo.
Sora 2 faced further controversy over its ability to create realistic footage of real people. OpenAI tightened restrictions on depictions of Martin Luther King Jr. after his estate complained that the civil rights leader was being shown in disrespectful renditions of his famous “I Have a Dream” speech, including one in which he made monkey noises.
“While there is a strong interest in free speech in the depiction of historical figures, OpenAI believes that public figures and their families should ultimately have control over how their image is used,” OpenAI said in a statement. The clampdown has only been partial, however: celebrities and public figures must ask to stop the use of their likenesses in Sora 2, a stance some still consider overly permissive. “No one should have to notify OpenAI that they don't want themselves or their families deepfaked,” says Ed Newton-Rex, a former AI executive and founder of the Fairly Trained campaign.
In some cases, artificial intelligence companies have paid dearly for their activities, as one of the largest proposed copyright settlements to date shows. In September, three authors alleged that Anthropic, the company behind the chatbot Claude, had knowingly downloaded more than seven million pirated books to train its AI models.
The judge evaluating the case found that if the firm had used this material to train its AI, it would not necessarily have infringed, since training these models would be a sufficiently “transformative” use. However, the piracy allegation was considered serious enough to go to trial. Instead, Anthropic chose to settle the case for at least $1.5 billion.
“The takeaway is that AI companies appear to have made their calculations and will likely end up paying through a combination of settlements and strategic licensing agreements,” Guadamuz says. “Only a few companies will go out of business because of copyright infringement lawsuits,” he says. “AI is here to stay, even if many of the existing companies can't survive because of lawsuits or because of the bubble.”