Is that a newscast or a sales pitch? New AI videos make it tough to tell

In a short video post, an influencer gushes about TV news out of California. The broadcast images behind her look authentic: an anchor calling viewers to action, victims, even the CNN logo.

“California accident victims are getting crazy payouts,” the anchor says above a banner that reads “BREAKING NEWS.”

But what looks like a social media star reacting to local news is actually an ad urging people to sign up for legal services. And much of it was created by artificial intelligence.

With a wave of new AI-powered video tools and new ways to distribute their output emerging in recent months, the line between news broadcasts and promotional pitches is beginning to blur.

Personal injury lawyers have long been known for over-the-top marketing. They have used every available channel, from radio, television and 1-800 numbers to billboards, bus stop benches and commercials, to plant their brands in consumers' minds. The ads are deliberately repetitive, outrageous and memorable, so that if viewers have an accident, they remember whom to call.

Now they are using AI to create a new wave of advertising that is more persuasive, engaging and local.

“Online advertising for both products and services uses AI-generated people and AI copies of influencers to promote their brand without revealing the synthetic nature of the people featured,” said Alexios Mantzarlis, director of trust, safety and security at Cornell Tech. “This trend is not conducive to the pursuit of truth in advertising.”

Bots aren't just cloning TV news. Increasingly, flashy headlines in people's news feeds are generated by artificial intelligence on behalf of advertisers.

In one online debt relief ad, a man holds up a newspaper with a headline saying Californians with $20,000 in debt are eligible for help. The ad shows borrowers lining up to claim benefits. According to experts, the man, the newspaper he is holding and the people in line were all created by artificial intelligence.

Despite growing criticism of what some call “AI slop,” companies continue to release ever more powerful tools for generating realistic video, making it easier to produce elaborate fake news reports and broadcasts.

Meta recently introduced Vibes, a dedicated app for creating and sharing short AI-generated videos. A few days later, OpenAI released Sora, its own AI video-sharing app, along with an updated video and audio generation model.

Sora's Cameo feature allows users to insert their own or a friend's image into short, photorealistic, AI-powered videos. Creating a video takes seconds.

Since its launch last Friday, Sora has risen to the top of the App Store download rankings. OpenAI encourages companies and developers to use its tools to develop and market their products and services.

“We hope that now with Sora 2's video API [application programming interface], you'll create the same high-quality videos directly inside your products, complete with realistic and synchronized audio, and find lots of great new things to create,” OpenAI CEO Sam Altman told developers this week.

A new class of synthetic social networks is emerging that allow users to create, share and discover AI-generated content in a customized feed tailored to individual tastes.

Imagine a constant stream of videos as captivating and viral as those on TikTok, where it's often impossible to tell which ones are real.

The danger, experts say, lies in how these powerful new tools, now available to almost everyone, will be used. In other countries, state-backed actors have already used AI-generated news broadcasts and stories to spread disinformation.

Online security experts say AI churning out questionable stories, propaganda and advertisements is in some cases drowning out human-generated content and degrading the information ecosystem.

YouTube has had to remove hundreds of AI-generated videos featuring celebrities, including Taylor Swift, that promoted Medicare fraud. Spotify has removed millions of AI-generated music tracks. According to the FBI, Americans have lost $50 billion to deepfake scams since 2020.

Last year, a Los Angeles Times journalist was wrongly declared dead by AI news anchors.

In a world of legal services advertising that has long pushed the envelope, some are concerned that rapidly evolving AI is making it easier to circumvent restrictions. This is a fine line, as legal advertising may be dramatic, but it is not allowed to promise results or payouts.

AI newscasts featuring AI-generated victims are testing new territory, said Samuel Hyams-Millard, an associate at the law firm Sheppard Mullin.

“Someone might see this and think it's real, oh, this person actually got paid this amount of money. It actually looks like news, even though it may not be,” he said. “This is a problem.”

One of the pioneers in this space is Case Connect AI. The company runs sponsored ads on YouTube Shorts and Facebook targeting people who have been hurt in car accidents and other incidents. It also uses artificial intelligence to tell users how much they might recover from a lawsuit.

In one ad, an enthusiastic social media influencer appears to say that insurance companies are trying to shut down Case Connect because its “compensation calculator” is costing insurance companies so much.

The ad then cuts to a five-second news clip about the payouts users receive. The actor appears again, pointing to another short video that appears to show couples holding large checks and celebrating.

“Everyone behind me used the app and got huge payouts,” the influencer says. “Now it’s your turn.”

In September, at least a half-dozen short YouTube ads from Case Connect featured AI-generated news anchors or testimonials from fictitious people, according to a review of Case Connect ads found via the Google Ads Transparency website.

Case Connect doesn't always use AI-generated humans; sometimes it uses AI-generated robots or even monkeys to spread its message. The company said it uses Google's Veo 3 model to create its videos, but it did not say which parts of its ads were made with AI.

Angelo Perone, founder of Pennsylvania-based Case Connect, says the firm runs social media ads that use artificial intelligence to target users in California and other states who may have been hurt in car crashes or other incidents, in hopes of signing them up as clients.

“This gives us a superpower to connect with people who have been injured in car accidents so that we can serve them and match them with the right attorney based on their situation,” he said.

His company generates leads for law firms and receives a flat fee or monthly retainer from the firms. He does not practice law.

“We are navigating this space the same as everyone else—trying to do it responsibly while maintaining efficiency,” Perone said in an email. “There’s always a balance between meeting people where they are and communicating with them in a way that resonates, without over-promising, under-delivering, or misleading anyone.”

Perone said Case Connect complies with the rules and regulations associated with legal advertising.

“Everything follows the proper disclaimers and language,” he said.

Some lawyers and marketers think his company is going too far.

In January, Robert Simon, a trial lawyer and co-founder of Simon Law Group, published an Instagram video calling some Case Connect ads that appeared to target victims of the Los Angeles County fires “egregious,” and warning people about the damage calculator.

Simon said that as a member of Consumer Attorneys of California, a legislative lobbying group, he helped draft Senate Bill 37, which targets misleading advertising. The problem, he said, long predates AI.

“We've been talking about this for a long time, trying to tighten ethics rules for lawyers,” Simon said.

The personal injury law market in the United States is worth $61 billion, and Los Angeles is one of the largest hubs for the business.

Hyams-Millard said that even if Case Connect is not a law firm, lawyers working with it may be liable for the potentially misleading nature of its advertising.

Even some lead generation companies acknowledge that some agencies may be abusing AI, leading the advertising industry into dangerous, uncharted waters.

“The need for guardrails is not new,” said Vince Wingerter, founder of 4LegalLeads, a lead generation company. “What’s new is that the technology is now more powerful and layered.”
