It's scary how realistic AI-generated videos have become. What's even scarier, however, is how accessible the tools to create these videos are. Using something like OpenAI's Sora app, people can create hyper-realistic short videos of almost anything they want, including real people such as celebrities, friends, or even themselves.
OpenAI is aware of the risks that come with an app that makes it easy to create realistic videos. As such, the company places a watermark on any video you generate through the Sora app. So, if you're scrolling through your social media feeds and see the little Sora logo, a cute cloud with eyes bouncing around the frame, you know the clip was made by artificial intelligence.
You Can't Trust the Sora Watermark
When OpenAI announced this app, my immediate concern was that people would find a way to remove the watermark, sowing confusion across the Internet. I was not mistaken: there are already plenty of options for anyone who wants to make their AI videos look that much more real. What I didn't expect, though, was the opposite: people adding a Sora watermark to real videos to make them look like they were created with artificial intelligence.
I was scrolling on X recently when I started seeing some of these videos, like one featuring Apple executive Craig Federighi. The post says "Sora is getting so good," and the clip carries a Sora watermark, so I assumed someone had made a cameo of Federighi in the app and posted it to X. To my surprise, however, the video was simply lifted from one of Apple's pre-recorded WWDC events, the one where Federighi parkours around Apple's headquarters.
Later, I saw this clip, which also uses the Sora watermark. At first glance, you might be fooled into thinking it's an OpenAI product. Look closely, though, and you can tell the clip features real people: the footage is too clean, without the fuzziness or glitches you usually see in AI-generated video. The clip is simply parodying the kind of multi-shot videos of people talking that Sora tends to produce. (Astute viewers may also notice that the watermark is slightly larger and more static than Sora's actual watermark.)
As it turned out, the account that posted the second clip also made a tool that adds a Sora watermark to any video. They don't explain why the tool exists, but it is definitely real. And even if it didn't exist, I'm sure editing a Sora watermark into a video wouldn't be that difficult, especially if you didn't bother replicating the movement of Sora's official watermark.
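To see just how low the bar is, here's a minimal sketch of that kind of static overlay, using Python to drive ffmpeg's overlay filter. The function and file names are hypothetical placeholders, and this isn't the tool from the post; it just shows that burning a fixed-position logo into a video is a one-command job.

```python
# Minimal sketch: burn a static logo into a video with ffmpeg's overlay filter.
# Assumes ffmpeg is installed and on your PATH; all file names are placeholders.
import subprocess

def add_fake_watermark(video_in: str, logo_png: str, video_out: str) -> None:
    """Overlay logo_png near the bottom-left corner of video_in."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_in,      # source video
            "-i", logo_png,      # watermark image (ideally a transparent PNG)
            "-filter_complex",
            # Pin the logo 20px from the left edge and 20px above the bottom.
            "overlay=x=20:y=main_h-overlay_h-20",
            "-codec:a", "copy",  # pass the audio stream through untouched
            video_out,
        ],
        check=True,
    )

add_fake_watermark("input.mp4", "sora_logo.png", "output.mp4")
```

A static overlay like this is also exactly what gives these fakes away, since Sora's real watermark drifts around the frame. Even that isn't much of a barrier, though: ffmpeg's overlay filter accepts time-based expressions (using the variable t) for its x and y coordinates, so a bouncing logo is only a slightly longer filter string.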
To be clear, people were posting gags like this before the watermark tool existed. The joke is to claim you made something with Sora, but post a popular or infamous clip instead, like Drake's Sprite ad from 15 years ago, Taylor Swift dancing on The Eras Tour, or the entire Sonic the Hedgehog movie. It's a funny meme, especially when it's obvious the video had nothing to do with Sora.
Real or Not Real?
All of this is an important reminder to stay vigilant when scrolling through the videos in your feeds. You need to watch out both for clips that aren't real and for clips that are real but are passed off as AI-generated. The implications go beyond jokes: sure, it's fun to slap Sora's watermark on a viral video, but what happens when someone adds the watermark to a real video of illegal activity? "Oh, that video isn't real. You can see the Sora watermark; it's AI."
At the moment, no one seems to have figured out how to perfectly replicate Sora's watermark, so there will be telltale signs if someone actually tries to pass off a real video as AI. But all of this is still concerning, and I don't know what the solution might be. We may be heading toward a future in which video on the Internet is simply considered unreliable across the board. If you can't tell what's real and what's fake, why trust any of it?