Will Smith concert video highlights concern about AI faking crowds

A still from an OpenAI promotional video for Sora 2, its new video generation platform. Crowd scenes have traditionally been a challenge for AI companies such as OpenAI and Google. But their models are improving all the time.

OpenAI
A recent Will Smith concert video broke the internet – not for his performance, but for the crowd. Eagle-eyed viewers noticed strange fingers and faces in the audience, among other visual glitches, and suspected AI manipulation.

Crowd scenes are a particular technical challenge for AI image-generation tools – especially video. (Smith's team did not comment publicly, and did not respond to NPR's request, about how the video was made.) "You're controlling so many difficult details," said Giana, a San Francisco-based artist and researcher who is an expert in AI image creation. "You have every individual person in the crowd. They're all moving independently and have unique features – their hair, their face, their hat, their phone, their shirt."

But the latest AI video generation models, such as Google's Veo 3 and OpenAI's Sora 2, are getting pretty good. "We're moving into a world where, on a generous estimate, within a year the line of reality is going to get really blurry," Giana said. "And verifying what's real and what's not real will almost have to become a practice."

Why crowd images matter

That shift could have serious consequences in a society where images of large, engaged crowds at public events such as rock concerts, protests and political rallies carry real currency. "We need a visual metric, a way to determine whether somebody matters or not," said Thomas Smith, CEO of Gado Images, a company that uses AI to help manage visual archives. "And crowd size is often a good indicator of that."

A report from the global consulting firm Capgemini shows that nearly three-quarters of images shared on social media in 2023 were generated using AI. As the technology gets better at creating convincing crowds, manipulating visuals has never been easier. That presents both a creative opportunity and a social danger. "AI is a good way to deceive people and inflate the size of your crowd," Smith said.

He added that there is also a flip side to this phenomenon. "If a real image surfaces that shows something politically inconvenient or damaging, there will also be a tendency to say, 'No, that's an AI fake.'"

One example came in August 2024, when Republican candidate Donald Trump spread false claims that his Democratic opponent, Kamala Harris, had used AI to create an image of a large crowd of supporters.

Chapman University lecturer Charlie Fink, who writes about AI and other emerging technologies for Forbes, said the way images are delivered makes it especially easy to fool people into believing a fake crowd is real – or a real crowd is fake. "The problem is that most people are watching content on a small screen, and most people are not terribly critical of what they see and hear," Fink said. "If it looks real, it is real."

Balancing creativity with public safety

For the tech companies behind AI image generators, and the social media platforms where AI-generated video lands, there is a delicate balance to strike between letting users create increasingly realistic and believable content – including detailed crowd scenes – and mitigating potential harm.

"The more realistic and believable the results we can create, the more options it gives people for creative expression," said Oliver Wang, a principal scientist at Google DeepMind who co-leads the company's image generation efforts. "But misinformation is something we take very seriously. So we tag all of the images we generate with a visible watermark and an invisible watermark."

Still, the visible, public-facing watermark currently displayed on videos created with Google's Veo 3 is tiny and easy to miss, tucked into a corner of the screen. (Invisible watermarks, such as Google's SynthID, can't be seen by the ordinary viewer; they help tech companies track AI content behind the scenes.)

And AI labeling systems are still applied quite unevenly across platforms. There are no universally agreed-upon standards yet, though the companies NPR spoke with for this story said they are motivated to develop them.

Meta, Instagram's parent company, currently labels AI-generated content when users disclose it or when its systems detect it. On YouTube, videos created with Google's own generative tools automatically carry a label in the description, and the platform asks creators who make media with other tools to disclose their use of AI. TikTok requires creators to label AI-generated or significantly edited content that shows realistic-looking scenes or people; unlabeled content can be removed, restricted or labeled by its team, depending on the harm it could cause.

Meanwhile, Will Smith has had more fun with AI since the controversial concert video came out. He posted a playful follow-up in which the camera pulls back from footage of the singer performing energetically on stage to reveal an audience full of fist-pumping cats. Smith included the caption: "The crowd was poppin tonite!!"
