OpenAI’s Sora Underscores the Growing Threat of Deepfakes

When OpenAI released its AI-powered video creation app Sora in September, the company promised that “you control your likeness from start to finish.” The app lets users put themselves and their friends into videos through a feature called “cameos”: the app scans the user’s face and performs a liveness check, using that data both to generate videos featuring the user and to confirm that friends have consented to the use of their likeness.

But Reality Defender, a company that specializes in identifying deepfakes, says it managed to bypass Sora’s anti-impersonation safeguards within 24 hours. Platforms like Sora provide a “credible sense of security,” says Reality Defender CEO Ben Colman, even though “anyone can use off-the-shelf tools” to pass verification under someone else’s identity.

Reality Defender researchers used publicly available footage of well-known people, including CEOs and entertainers, drawn from earnings calls and media interviews. The company defeated the safeguards with every likeness it attempted to impersonate. Colman says “any smart 10th grader” could figure out the tools his company used.

An OpenAI spokesperson said in a statement emailed to TIME that “researchers have created a sophisticated system of deepfakes of CEOs and artists to try to bypass these protections, and we are continually strengthening Sora to make it more resilient to this type of abuse.”

Sora’s launch, and how quickly its authentication mechanisms were bypassed, are a reminder that society is not ready for the next wave of increasingly realistic, personalized deepfakes. The gap between fast-moving technology and lagging regulation leaves people to navigate an uncertain information landscape on their own and to protect themselves from potential fraud and harassment.

“The platforms absolutely know this is happening, and they absolutely know they could fix this problem if they wanted to. But until the rules catch up – we're seeing the same thing across all social media platforms – they won't do anything,” Colman says.

Sora reached 1 million downloads in less than five days, faster than ChatGPT, which had been the fastest-growing consumer app at the time of its release, despite requiring an invitation to join, according to Bill Peebles, the head of Sora at OpenAI. OpenAI’s release follows a similar offering from Meta called Vibes, which is integrated into the Meta AI app.

The growing availability of convincing deepfakes has alarmed some observers. “The truth is that spotting [deepfakes] ‘by eye’ is becoming virtually impossible given the rapid advances in cloning capabilities for text to image, text to video, and audio,” Jennifer Ewbank, former deputy director for digital innovation at the CIA, said in an email to TIME.

Regulators have been grappling with how to combat deepfakes since at least 2019, when President Trump signed legislation requiring the Director of National Intelligence to investigate the use of deepfakes by foreign governments. As deepfakes have become more accessible, though, the focus of legislation has shifted closer to home. In May 2025, the Take It Down Act was signed into federal law; it bans the online publication of nonconsensual “intimate visual images” of minors and adults and requires platforms to remove offending content within 48 hours of a request, though enforcement does not begin until May 2026.

Legislation banning deepfakes outright could be fraught. “It’s actually very difficult, technically and legally, because there are First Amendment concerns about removing certain speech,” says Jameson Spivack, deputy director of U.S. policy at the Future of Privacy Forum. In August, a federal judge struck down a California law that sought to curb AI-generated deepfake content during elections, after Elon Musk’s X sued the state on the grounds that the law violated First Amendment protections. As a result, labeling requirements for AI-generated content are more common than outright bans, Spivack says.

Another promising approach is for platforms to adopt more effective know-your-customer schemes, says Fred Heiding, a fellow at Harvard University’s Defense, Emerging Technologies and Strategy Program. Know-your-customer schemes require users of platforms like Sora to sign up with a verified identity, increasing accountability and allowing authorities to trace illegal behavior. But there are trade-offs here too. “The problem is that we in the West really value anonymity,” Heiding says. “That’s good, but anonymity comes at a cost, and the cost is that these things are really hard to enforce.”

As lawmakers grapple with the growing prevalence and realism of deepfakes, individuals and organizations can take steps to protect themselves. Spivack recommends authentication tools such as Content Credentials, developed by the Coalition for Content Provenance and Authenticity (C2PA), which attach provenance metadata to images and videos. Cameras from Canon and Sony support the standard, as does the Google Pixel 10. Such authentication bolsters trust in genuine images and makes counterfeits easier to discount.
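For readers who want to inspect provenance themselves, here is a minimal sketch of what such a check might look like, using the open-source c2patool command-line utility published by the Content Authenticity Initiative. The tool choice, its output format, and its exit-code behavior are assumptions for illustration, not part of the reporting above.

```python
# Sketch: check a media file for Content Credentials (C2PA provenance
# metadata) by shelling out to the open-source `c2patool` CLI.
# Assumes c2patool is installed and on PATH; output and exit codes
# may vary by version.
import json
import subprocess
import sys

def read_content_credentials(path: str):
    """Return the file's C2PA manifest store as parsed JSON, or None."""
    result = subprocess.run(
        ["c2patool", path],  # default invocation prints the manifest as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # c2patool reports an error when the file carries no manifest or
        # cannot be read; treat both cases as "no credentials found".
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found; provenance unknown.")
    else:
        print(json.dumps(manifest, indent=2))
```

Note that the absence of a manifest does not prove a file is fake, and the presence of one does not prove it is authentic in every respect; provenance metadata simply records where a file came from and how it was edited.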

As the online information landscape changes, making it harder to trust what we see and hear online, lawmakers and individuals must build public resilience to fake media. “The more we develop this resilience, the harder it becomes for anyone to monopolize our attention and manipulate our trust,” Ewbank says.
