How OpenAI’s Sora could change the internet with deepfakes

Videos made with OpenAI's Sora app are flooding TikTok, Instagram Reels and other platforms, making people increasingly familiar with, and bored by, the nearly inescapable synthetic footage churned out by what amounts to an AI slop machine.

Digital security experts say what's happening may be less obvious but more important for the future of the internet: OpenAI has essentially rebranded deepfakes as a lighthearted toy, and recommender systems love it. As the videos spread across the feeds of millions of people, perceptions of the truth, and soon perhaps the basic norms of being online, are rapidly changing.

“It’s as if deepfakes have a publicist and a distribution deal,” said Daisy Soderberg-Rivkin, former manager of trust and safety at TikTok. “It’s an amplification of something that was scary for a while, but now it has a whole new platform.”

Aaron Rodericks, Bluesky's head of trust and safety, said the public is not ready for such a radical blurring of the line between what's real and what's fake.

“In a polarized world, it becomes easy to create false evidence against groups or individuals, or deceive people on a large scale. What was once an inflammatory rumor—such as a fabricated story about an immigrant or a politician—can now be presented as credible video evidence,” Rodericks said. “Most people don’t have the media literacy or tools to tell the difference.”

NPR spoke with three former OpenAI employees, who all said they weren't surprised the company would launch a social media app showcasing its latest video technology as investors put pressure on the company to wow the world, as it did three years ago with the release of ChatGPT.

As with the chatbot, change is already happening quickly.

OpenAI has built many safeguards into Sora, including content moderation; restrictions on fraudulent, violent and pornographic content; watermarking; and controls over the use of a person's likeness. Users are already probing those guardrails for workarounds, and OpenAI, in response, is trying to plug the holes.

One former OpenAI employee, who was not authorized to speak publicly, said concerns that those safeguards will erode over time are legitimate.

“The release of Sora tells the world where the party is going. AI video could be the next frontier of social media, and OpenAI wants to own it. But with Silicon Valley competing for it, companies will break the rules to stay competitive, and that could be bad for society,” the person said.

Former TikTok manager Soderberg-Rivkin said it was only a matter of time before a developer released a Sora-style app without safety rules, much as Elon Musk pitched Grok as an “anti-woke,” more unfiltered answer to the leading chatbots.

“When an unregulated version without safety rails comes out, it will be used to create synthetic child sexual abuse material that bypasses current detection,” said Rodericks, whose employer, Bluesky, has leaned into customizable content moderation to differentiate itself from platforms like X, which have fewer rules. “You will see state-sponsored actors fabricate realistic news stories and propaganda to legitimize false narratives.”

An OpenAI spokesman declined to comment.

Experts say it may be too late to implement a 'no-AI policy'

Sora is currently the No. 1 most downloaded iPhone app, but people can only use it with an invite code from current users.

Those who use the app regularly have already noticed how much more restrictive it has become. Making videos featuring celebrities is harder, as is replicating some of the more outlandish clips that have circulated, such as a fake Jeffrey Epstein on a boat heading toward an island or a bogus Sean “Diddy” Combs addressing his prison sentence. Other controversial prompts, however, such as showing someone being arrested or dressing someone in a Nazi uniform, still generate videos.


Within days of the Sora app's launch, OpenAI CEO Sam Altman wrote that copyright holders will have more control over how their characters are used, with the default approach changing from “opt-out” to “opt-in.” Altman also wrote that Sora will eventually share revenue from the app with copyright holders.

“Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT,” he said.

The constant stream of AI slop filling feeds has raised the question of whether users will tire of AI video as a content genre.

Most major video platforms now have relatively lenient policies regarding AI video sharing, but will Sora cause a backlash? Could this force social media companies to crack down on or ban AI-generated content? It's unlikely, Soderberg-Rivkin said, noting that even if that happened, enforcement would be challenged by how sophisticated leading AI generators have become.

“Even if a social media platform says AI can't be used, the fact is that it's getting harder and harder to detect when text, video and images are AI, and that's scary,” she said. “A no-AI policy will not stop AI from getting through.”

The “liar's dividend” is amplified like never before

Another former OpenAI employee, who was also not authorized to speak publicly, argued that releasing a deepfake-driven social media platform was the right business decision, even if it contributes to destroying everyone's shared sense of reality.

“We're already at the point where we can't tell what's real and what's not online, and OpenAI and other tech companies will have to solve that problem,” the former employee said. “But that's not an argument for not trying to dominate this market. You can't stop progress. If OpenAI hadn't released Sora, someone else would have.”

Indeed, Meta has also launched a platform called Vibes, where people can create and share short AI-generated videos. In July, Google introduced Veo 3, an AI-powered video tool. But it wasn't until OpenAI released the Sora app that personalized AI video really took off.

Trust and safety experts like Soderberg-Rivkin say Sora will likely mark a turning point in the history of the internet: the moment deepfakes went from a mostly isolated phenomenon to the status quo, one that could drive people further away from social media, or at least shatter their faith in the integrity of what they watch online.

Disinformation experts have long warned about a concept known as the “liar's dividend,” in which the proliferation of deepfakes allows people, especially those in power, to dismiss real content as fake. But now, experts say, the reality is starker than ever before.

“I'm less concerned about the very specific nightmare scenario of deepfakes affecting an election and more concerned about the underlying erosion of trust,” Soderberg-Rivkin said. “In a world where everything can be fake, but the fakes look and feel real, people will stop believing anything.”
