Nonprofit consumer advocacy group Public Citizen demanded in a letter Tuesday that OpenAI withdraw its video-making software Sora 2 after the app raised concerns about the spread of misinformation and privacy violations.
The letter, addressed to the company and CEO Sam Altman, accused OpenAI of rushing the app out so it could launch before competitors.
It showed “a continuing and dangerous trend of OpenAI rushing to market with a product that is either inherently unsafe or lacks the necessary guardrails,” the watchdog group said.
Sora 2, the letter says, demonstrates a "reckless disregard" for product safety and for the rights of the people whose likenesses it can reproduce. The app also contributes to a broader erosion of public trust in the authenticity of online content, the letter argues.
The group also sent a letter to the US Congress.
OpenAI did not immediately respond to a request for comment Tuesday.
Responding to complaints about celebrity content
A typical Sora video is designed to be funny enough for you to click it and share it on platforms like TikTok, Instagram, X and Facebook.
It might be a rap by the late Queen Elizabeth II or something more mundane and believable. One popular genre of Sora video features fake doorbell-camera footage of something slightly uncanny (say, a boa constrictor on a porch, or an alligator approaching an unperturbed child) and ends with a mildly shocking image, such as a screaming grandmother beating the animal with a broom.
The Current | 24:17 | New AI video app Sora is here: can you tell what's real?
Whether it's your best friend riding a unicorn, Michael Jackson teaching math, or Martin Luther King Jr. dreaming of selling vacation packages, turning those ideas into realistic videos is now quicker and easier with the new AI-powered app Sora. OpenAI, the company behind it, promises guardrails to prevent violence and fraud, but many critics fear the app could fuel a surge of misinformation and flood society with even more "AI slop."
Public Citizen joins a growing chorus of advocacy groups, academics and experts raising the alarm about the dangers of letting people create AI videos of virtually anything they can type into a prompt, leading to the proliferation of nonconsensual images and realistic deepfakes amid a sea of less harmful "AI slop."
OpenAI has cracked down on AI-generated depictions of public figures doing outlandish things (including Michael Jackson, Martin Luther King Jr. and Mister Rogers) but only after protests from family estates and the actors' union.
“Our biggest concern is the potential threat to democracy,” Public Citizen technology policy advocate J.B. Branch said in an interview.
“I think we're entering a world where people can't really trust what they see. And we're starting to see strategies in politics where the first image, the first video released, is what people remember.”
Guardrails haven't stopped harassment
Branch, who wrote the letter Tuesday, also sees broader threats to people's privacy and says they could disproportionately affect certain groups.
AI-generated videos are all over the internet, but what happens if your image or voice is reproduced without your permission? CBC's Ashley Fraser looks at how Denmark is trying to change digital identity protection and how Canada's laws compare.
OpenAI blocks nudity, but Branch said “women see themselves being harassed online” in other ways.
Fetishized niche content has nonetheless slipped past the app's restrictions: 404 Media reported Friday on a stream of Sora-made videos of women being strangled.
OpenAI introduced its new Sora app for iPhone over a month ago. It launched last week on Android phones in the US, Canada and several Asian countries, including Japan and South Korea.
The strongest reaction to it came from Hollywood and other entertainment circles, including the Japanese manga industry.
OpenAI announced its first big changes just days after the release, saying that “over-moderation is very frustrating” for users, but it was important to be conservative “while the world is still adjusting to this new technology.”
That was followed by a publicly announced agreement with the family of Martin Luther King Jr. on Oct. 16, blocking "disrespectful depictions" of the civil rights leader while the company worked on better safeguards, and another on Oct. 20 with Breaking Bad actor Bryan Cranston, the SAG-AFTRA union and talent agencies.
“It’s all good if you’re famous,” Branch said. “This is sort of the OpenAI template where they are willing to respond to the outrage of a very small group of the population. They are willing to release something and then apologize. But a lot of these issues are design decisions they can make before release.”
European AI company Particle6 says its AI creation Tilly Norwood has attracted a lot of interest, but Hollywood actors including Emily Blunt, Melissa Barrera and Whoopi Goldberg, as well as the union SAG-AFTRA, have spoken out against the AI character.
Legal proceedings against ChatGPT continue
OpenAI has faced similar complaints over its flagship product, ChatGPT. Seven new lawsuits filed in California courts last week claim the chatbot drove people to suicide and harmful delusions even when they had no prior history of mental health problems.
The lawsuits, filed on behalf of six adults and one teenager by the Social Media Victims Law Center and the Tech Justice Law Project, allege that OpenAI knowingly released GPT-4o prematurely last year despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.
Public Citizen was not involved in the lawsuits, but Branch said he sees parallels in how Sora was released.
"A lot of this seems predictable," he said. "But they would rather release a product, get people to download it and get people addicted to it than do the right thing, stress-test these things up front and worry about the harm to everyday users."
OpenAI responds to anime and video game creators
OpenAI has spent the past week responding to complaints about Sora from a Japanese trade association representing renowned animators such as Hayao Miyazaki's Studio Ghibli, as well as video game makers Bandai Namco, Square Enix and others.
OpenAI has defended the app's broad ability to generate fake videos of popular characters, saying many anime fans want to interact with their favorite characters.
But the company also said it had put in place barriers to prevent famous characters from being created without the consent of the people who own the copyright.
“We are working directly with studios and rights holders, listening to feedback and studying how people use Sora 2, including in Japan, where the cultural and creative industries are deeply valued,” OpenAI said in a statement about the trade group's letter last week.