When Everything Is Fake, What’s the Point of Social Media?

Earlier this week, a touching post featuring a girl, a puppy, and a police officer went viral on social media. The post consisted of two dashcam images of a distraught 12-year-old girl who, desperate to get help for her sick puppy, got behind the wheel for the first time and tried to drive to the vet. She was stopped, but the officer praised her for being “amazing, strong, compassionate and smart,” and the puppy was saved. Comments poured in, celebrating the bond between the girl and her furry best friend.

But when social media users took a closer look, they noticed a few strange things: the steering wheel was on the right side of the car, which was also missing a dashboard. And the story did not appear on any news outlet or official police page; it simply surfaced on Facebook on its own.

The image was, predictably, yet another example of AI slop: AI-generated images designed to maximize engagement on social media that slip into users' feeds without any signal as to whether they are real or fake. As slop goes, this specimen was relatively harmless. But there has been even more AI-generated junk on social media this week, thanks to the arrival of Sora 2, OpenAI's new and improved text-to-video model.

Some videos were clearly fabricated, such as Pope John Paul II fighting Tupac in a boxing ring. Others were harder to tell apart from reality, such as a boy being swept away by a tornado or homeless men appearing inside people's homes. Sora became the most downloaded free app in the Apple App Store in its first week.

Sam Altman, CEO of OpenAI, says he hopes these Sora 2 videos will “seem interesting and new” and will also help teach OpenAI's models how the 3D world works. Critics, on the other hand, see them as a potential death knell for social media. What was supposed to be a revolutionary means of maintaining friendships and relationships has turned into a fake-content machine where it is impossible to tell what is real and what is not.

“For years, the Internet has been a place where people go to feel connected. But if everything on the Internet starts to seem fake, and all our For You pages are videos created by Sora, people will start to return to what is physically provable,” says Kashyap Rajesh, vice president of the youth organization Encode. “The irony is that AI may ultimately end up saving human connections and human relationships, because it makes us so desperate for the real thing.”

Short-term growth

Realistic AI-generated images and videos have been a major goal for every big AI lab over the past few years. AI leaders hope users will be able to quickly and cheaply create music videos, films, and advertisements, ushering in a new era of creativity. Some also believe that video models are the key to artificial general intelligence, or AGI: a superintelligent AI that understands physics well enough to navigate the world seamlessly.

To hone their models, these companies need users to create large volumes of content that can serve as training data. Just this year, Meta released a dedicated AI video feed called Vibes; Google released Veo 3; and ByteDance launched Seedance, to name just a few of the industry's AI-powered video offerings. These applications can be thought of as part of a larger flywheel: they are designed to drive mass adoption while improving the underlying models.

In the short term, these products will likely see significant usage and attract creators eager to take advantage of the moment. For example, Pigeon Gnome, a series of Spanish-language videos about a GoPro-wearing gnome going on magical adventures, has racked up hundreds of thousands of likes and subscribers in the last four months alone. And spin-off video games from the Italian Brainrot cinematic universe broke all kinds of records on Roblox and Fortnite.

But Ben Colman, CEO and co-founder of the deepfake-detection platform Reality Defender, says that while these videos can give platforms a boost in revenue, their success may be short-lived. “I think history has proven that this kind of race to the bottom in terms of content quality tends to be negative for the platforms themselves,” he says.

Colman points to MySpace as an example of a platform that suffered when it stopped prioritizing its users, cluttering its pages with ads and creating a frustrating experience. “If you just see a bunch of noise, it stops being a personal connection,” he says.

Altman, in a blog post, wrote that Sora 2 will be optimized for “long-term user satisfaction” and that OpenAI will shut down the product if it feels it is harming user well-being.

Social dangers

The decline of social media, if it happens, will inevitably be slow. In the meantime, critics worry about how the rise of AI video will affect society. Encode's Rajesh argues that realistic AI-generated videos will threaten our shared understanding of reality. In the past, videos on social media were treated as proof that events actually happened, and could change the course of history, as in the case of George Floyd. Now real events may be dismissed as fakes, and fake events taken as real; misinformation and disinformation campaigns may become rampant.

“This makes many of our channels a high-noise, low-trust place where every emotional moment is suspect,” says Rajesh. “It kind of creates a low-level paranoia in people that kills the spontaneity and the magic of social media to begin with.”

Sora videos are watermarked to indicate their artificial origin. But tools that can add or remove those watermarks have already been created, which means it is now easy to produce, for example, fake dashcam footage for insurance fraud.

Colman conducted a security experiment with his team at Reality Defender and found that he was able to use Sora to create AI avatars of prominent people and then “verify” their authenticity as if they came from the celebrities themselves. “When you have a multibillion-dollar company claiming to have already done identity verification, it makes this whole danger a million times more dangerous,” he says.

It's no secret that social media algorithms reward divisive content. Colman worries that AI will only exacerbate these dynamics. “Platforms are essentially markets for attention. The return on investment is better if you try to draw attention to extremist views,” he says. “This creates an infinitely more polarizing echo chamber, giving mass-market consumers what they need to become more extreme in everything.”

Meanwhile, Rajesh says the widespread rise of deepfakes could lead to an escalation in the use of “identity verification” systems in which people must prove they are human to participate online. (Altman co-founded one such solution, Worldcoin, which verifies users by scanning their irises.)


Going offline

In response to these changes, a growing number of frustrated people are taking the plunge and ditching their phones altogether. Earlier this year, Grant Besner co-hosted an educational program in Washington called Month Offline, which encourages participants to turn off their smartphones for a month and examine their relationship with the devices.

To advertise the program this summer, Besner posted fliers around D.C. that read: “Fake images of real people, real images of fake people, dissatisfaction with the content… Refuse the doomscroll. Call 1-844-OFFLINE.”

Promotional poster for the Month Offline program. Grant Besner

Besner says the hotline has received hundreds of calls in response. “I've talked to a lot of people who have a very difficult relationship with the touch screen, and a lot of it has to do with their relationship with the content itself: mindlessly scrolling through things that don't add anything to their life,” Besner says.

Other organizations are testing similar approaches. The Aspen Institute, for example, organized an initiative called Airplane Mode this year. Andrew Yang, the former presidential candidate, throws phone-free parties in New York and promotes Noble Mobile's new data plan, which reimburses users for leftover data.

Besner adds that the advent of Sora 2 and hyper-realistic video “could be a tipping point where people sort of reclaim some of their agency and say, ‘You know what, this whole unfettered way of interacting with information and with each other and with ourselves may not be leading to better results.’”
