Scrolling through the Sora app can feel a little like stepping into a real-life multiverse.
Michael Jackson performs stand-up; the alien from the movie Predator flips hamburgers at McDonald's; a home security camera captures a moose crashing into a glass door; Queen Elizabeth dives off a table in a pub.
Such alternate realities, fantastical futures and absurdist clips are the stock-in-trade of Sora, a new short-video app released by ChatGPT maker OpenAI.
The constant stream of hyper-realistic short videos created by artificial intelligence is stunning and mesmerizing at first. But it quickly forces viewers to second-guess every piece of content they see: real or fake?
“The biggest risk with Sora is that it makes it impossible to overcome plausible deniability and undermines confidence in our ability to distinguish the real from the synthetic,” said Sam Gregory, a deepfake expert and executive director of the human rights organization WITNESS. “Individual fakes matter, but the real damage is the fog of doubt that shrouds everything we see.”
All videos in the Sora app are entirely AI-generated and there is no option to share real footage. But since the first week of launch, users have been sharing their Sora videos on all types of social networks.
Less than a week after launching on September 30, the Sora app crossed one million downloads, outpacing ChatGPT's initial growth, and reached the top of the App Store in the US. For now, the app is available only to iOS users in the US, and only with an invite code.
To use the app, people must scan their face and read three numbers displayed on the screen so the system can record a voice signature. Once this is done, users can enter their own text prompt and create hyper-realistic 10-second videos with background audio and dialogue.
With the Cameos feature, users can insert their own likeness, or a friend's, into any generated video. Although all content carries a visible watermark, many websites now offer watermark removal for Sora videos.
At launch, OpenAI was lax about enforcing copyright restrictions and by default allowed the re-creation of copyrighted material unless the owners opted out.
Users have begun creating AI-generated videos featuring characters from shows like SpongeBob SquarePants, South Park and Breaking Bad, as well as videos styled after the game show The Price Is Right and the '90s sitcom Friends.
This was followed by recreations of dead celebrities, including Tupac Shakur roaming the streets of Cuba, Hitler fighting Michael Jackson and remixes of the Rev. Martin Luther King Jr. giving his iconic “I Have a Dream” speech but calling for the release of disgraced rapper Diddy.
“Please just stop sending me AI-generated videos of your dad,” Zelda Williams, daughter of the late comedian Robin Williams, wrote on Instagram. “You're not making art, you're making disgusting, recycled hot dogs out of people's lives, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it. Disgusting.”
Other recreations of deceased celebrities, including Kobe Bryant, Stephen Hawking and President Kennedy, created on Sora have been cross-posted on social media sites, garnering millions of views.
A spokesperson for Fred Rogers Productions said Rogers' family is “disappointed by the videos of artificial intelligence misrepresenting Mister Rogers that are circulating online.”
Videos of Mr. Rogers holding a gun, saluting the rapper Tupac and other satirical fake-outs were widely shared on Sora.
“The videos directly contradict the careful intent and respect for core child development principles that Fred Rogers brought to every episode of Mister Rogers' Neighborhood. We have contacted OpenAI to request that Mister Rogers' voice and image be blocked from use on the Sora platform, and we expect that they and other artificial intelligence platforms will respect human identity going forward,” the spokesperson said in a statement to The Times.
Hollywood agencies and talent unions, including SAG-AFTRA, began accusing OpenAI of misusing likenesses. The core tension comes down to control over how actors' and licensed characters' likenesses are used in AI videos, and fair compensation for that use.
In the wake of Hollywood's copyright concerns, Sam Altman shared a blog post promising copyright holders more control over how their characters can be used in AI videos and saying the company is exploring ways to share revenue with them.
He also said that studios can now “opt in” to have their characters used in AI recreations, a departure from OpenAI's original stance on opt-out mode.
The future, according to Altman, is moving toward personalized content made for an audience of a few people, or an audience of one.
“Creativity may experience a Cambrian explosion, and with it the quality of art and entertainment may skyrocket,” Altman wrote, calling this genre of interaction “interactive fan fiction.”
However, the estates of deceased actors are rushing to protect their likenesses in the age of artificial intelligence.
CMG Worldwide, which represents the estates of deceased celebrities, has partnered with deepfake detection company Loti AI to protect CMG's clients and estates from unauthorized digital use.
Loti AI will continuously track AI avatars of 20 personalities represented by CMG, including Burt Reynolds, Christopher Reeve, Mark Twain and Rosa Parks.
“Since the launch of Sora 2, our sign-ups have increased approximately 30-fold as people look for ways to regain control over their digital likeness,” said Luc Arrigoni, co-founder and CEO of Loti AI.
Since January, Loti AI said it has removed thousands of pieces of unauthorized content as new artificial intelligence tools make it easier to create and spread deepfakes.
Following numerous “disrespectful depictions” of Martin Luther King Jr., OpenAI said it was suspending the creation of videos featuring the civil rights icon on Sora at the request of the King estate. While there are strong free-speech interests in depicting historical figures, public figures and their families should ultimately have control over how their likeness is used, OpenAI said.
Authorized representatives and estates can now request that their likenesses not be used in Sora cameos.
As legal pressure mounts, Sora has become stricter about when copyrighted characters can be recreated, increasingly surfacing content policy violation notices. Attempting to generate Disney characters and other protected imagery now triggers a content policy warning, and users chafing at the restrictions have started making video memes about the warnings themselves.
Meanwhile, what has been dubbed “AI slop” keeps going viral.
Last week featured ring-cam footage of a grandmother chasing a crocodile outside a door, as well as a series of “fat Olympics” videos of obese people competing in sporting events such as pole vaulting, swimming and track and field.
Specialized slop factories have turned the trend into a source of income, generating a constant stream of videos that are hard to look away from. One tech reviewer called it “Cocomelon for adults.”
Despite increased protections for celebrity likenesses, critics warn that casually appropriating the likeness of any ordinary person or situation could sow public confusion, fuel misinformation and erode public trust.
Meanwhile, as the technology is used by bad actors and even some governments to spread propaganda and push political views, people in power can hide behind the flood of fakes by claiming that real evidence was itself AI-generated, said Gregory of WITNESS.
“I am concerned about the possibility of fabricating footage of protests, staging false atrocities, or putting words in real people's mouths in incriminating scenarios,” he said.