OpenAI’s Sora ushers in the age of AI-generated social media

Sora/OpenAI/Annotation by NPR

Fascist SpongeBobs, a dog piloting a machine and Jesus playing Minecraft – these are just some of the things you can see as you scroll through OpenAI’s new app, which is populated exclusively with short videos generated using artificial intelligence.

And if you can’t find what you’re looking for, don’t worry: you can simply type what you want into the app’s small text prompt box. The result is a steady stream of sometimes funny, sometimes strange 10-second videos.

OpenAI released the Sora app on Tuesday, just days after Meta released a similar product as part of its Meta AI platform. NPR took an early look and found that the OpenAI app can easily generate highly realistic videos, including of real people (with their permission). The early results are surprising and alarming researchers.

“You can create insanely realistic videos of your friends saying things they would never say,” said Solomon Messing, an associate professor at New York University’s Center for Social Media and Politics. “I think we could be past the era when seeing is believing.”

Deepfake TikTok

The Sora 2 app looks and feels strikingly similar to other vertical-video social media apps such as TikTok. It comes with several different settings – for example, you can filter videos by mood. According to OpenAI, users can control how their face is featured in AI-generated videos. That means users can allow their likeness to be used by everyone, by a small circle of friends or only by themselves. They can also delete videos featuring their likeness at any time.

Sora also ships with ways to identify its content as AI-generated. Videos downloaded from the app contain moving watermarks with the Sora logo, and the files carry embedded metadata that identifies them as AI-made, according to the company.
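For readers who want to check a downloaded clip themselves, here is a minimal sketch (not something NPR or OpenAI describes) that shells out to the widely used exiftool utility to dump a file’s metadata and flag any tags whose names suggest embedded content credentials. The specific tag names Sora writes, and the sample filename, are assumptions.

```python
import json
import subprocess

def provenance_tags(path: str) -> dict:
    """Dump a video's metadata with exiftool and keep tags that look
    provenance-related (content credentials, C2PA-style claims)."""
    # `exiftool -json FILE` prints every readable tag as a JSON array
    # with one object per file; requires exiftool to be installed.
    raw = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(raw)[0]
    # Which tag names Sora actually writes is an assumption; these
    # substrings are just a heuristic filter over the full dump.
    keywords = ("c2pa", "credential", "provenance", "claim")
    return {k: v for k, v in tags.items()
            if any(word in k.lower() for word in keywords)}

if __name__ == "__main__":
    # Hypothetical filename for a clip downloaded from the app.
    print(provenance_tags("sora_clip.mp4"))
```

If the filter comes back empty, the full exiftool dump is still worth reading, since provenance data can sit under vendor-specific tag names.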

OpenAI says it has put guardrails on what the app can do. A company representative also pointed NPR to Sora’s system card, which prohibits creating content that can be used for things such as “deception, fraud, scams, spam or impersonation.”

“To support enforcement, we provide in-app reporting, combine automation with human review to detect patterns of misuse, and apply penalties or content removal when violations occur,” the document says.

But in its short time using the app, NPR found the guardrails around Sora to be somewhat loose. Although many prompts were refused, it was possible to create videos that support conspiracy theories. For example, it was easy to create a video of what appeared to be President Richard Nixon giving a televised address claiming America faked the moon landing.

And another of astronaut Neil Armstrong taking off his helmet on the moon.

NPR was also able to generate videos depicting a drone attack on a power station, which seemed to violate the app’s guidelines on violence and (possibly) terrorism.

The app also seemed to contain other loopholes. NPR was able to get it to create short videos on topics related to chemical, biological, radiological and nuclear weapons, in direct contradiction of OpenAI’s global usage policies. (The videos it created were never shared and contained inaccuracies that would make them useless to anyone seeking such information.)

Clown on the run

While it’s unclear whether other users are finding similar exploits, a quick review of content shows that Sora is being used to generate a huge volume of videos depicting brands and copyrighted material. One video shows Ronald McDonald fleeing from police in a hamburger car. Many others feature characters from popular cartoons and video games.

OpenAI told NPR that it is aware of the use of copyrighted material in Sora but believes that allowing it gives its users more freedom.

“People are eager to engage with their family and friends through their own imaginations, as well as the stories, characters and worlds they love, and we see new opportunities for creators to deepen their connection with fans,” said Varun Shetty, OpenAI’s head of media partnerships, in a written statement sent to NPR. “We will work with rights holders to block characters from Sora at their request and respond to takedown requests.”

OpenAI is currently being sued by The New York Times for copyright infringement involving its large language model, ChatGPT.

Brave new virtual world

What effect a social media world built entirely on AI will have remains unclear, according to Messing. Many researchers were deeply worried about deepfakes when the video tools first appeared, and yet only a few of those videos have gained traction. “We all collectively panicked about deepfakes a couple of years ago, but society didn’t actually disintegrate because of deepfakes,” he said.

But others worry that the collective sense of reality may begin to unravel. Sora is just the latest of many tools that can generate images, video and audio at will.

“We really are seeing the ability to more trivially generate incredibly realistic, hyperrealistic content of whatever kind you want,” said Henry Ajder, head of Latent Space Advisory, a consultancy that tracks the evolution of AI-generated content.

As concerned as he is about people being deceived, Ajder said he is also deeply worried about the consequences of no one trusting anything they see online.

“We have to resist a somewhat nihilistic pull of ‘We can no longer tell what’s real, and therefore it doesn’t matter anymore,’” he said.

Messing said that while it’s unclear what the consequences will be, it is clear that Sora is very good at creating whatever can be imagined: “It just kind of leaves me speechless,” he said. “I can’t quite wrap my head around how good the content is.”
