Character.ai to ban teens from talking to its AI chatbots

Chatbot platform Character.ai is to ban teenagers from having conversations with its virtual characters, after facing sharp criticism over the way young people interact with its online personas.

Founded in 2021, the platform is used by millions of people to communicate with artificial intelligence (AI)-powered chatbots.

But in the US it has been hit with several lawsuits from parents, including one over the death of a teenager, with some calling it a “clear and present danger” to young people.

Now, Character.ai says, from November 25 users under 18 will only be able to create content, such as videos, with its characters, rather than talk to them as they can currently.

Online safety campaigners welcomed the move but said the feature should not have been made available to children in the first place.

Character.ai said it was making the changes after “reports and feedback from regulators, safety experts and parents” that highlighted concerns about its chatbots' interactions with teenagers.

Experts have previously warned that AI chatbots can make things up, be overly encouraging and feign empathy, which can pose risks to young and vulnerable people.

“Today's announcement is a continuation of our shared belief that we need to continue to build the safest artificial intelligence platform on the planet for entertainment purposes,” Character.ai chief executive Karandeep Anand told BBC News.

He said AI safety was a “moving target” but that the company had taken an “aggressive” approach, with parental controls and guardrails.

Online safety group Internet Matters welcomed the announcement, but said safety measures should have been in place from the start.

“Our own research shows that children are exposed to harmful content and put at risk when interacting with AI, including AI chatbots,” it said.

Character.ai has been criticized in the past for hosting potentially harmful or offensive chatbots that children could interact with.

Avatars depicting British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life aged 14 after viewing suicide material online, were found on the site in 2024 before being taken down.

Then in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on the pedophile Jeffrey Epstein, which had logged more than 3,000 chats with users.

The outlet reported that the “Bestie Epstein” avatar continued to flirt with its reporter after they said they were a child. It was among several bots flagged by TBIJ that were subsequently taken down by Character.ai.

The Molly Rose Foundation, created in memory of Molly Russell, questioned the platform's motivation.

“Once again, it has taken sustained pressure from the media and politicians to get a tech firm to do the right thing, and it looks like Character.ai is choosing to act now, before regulators force it to,” said Andy Burrows, the foundation's chief executive.

Mr Anand said the company's new focus was to provide “an even deeper gaming experience [and] role-play storytelling features” for teens – adding these would be “much safer than what they could do with an open-ended bot”.

There will also be new age verification methods, and the company will fund a new AI safety research lab.

Social media expert Matt Navarra said it was a “wake-up call” for the AI industry, which is moving “from permissionless innovation to post-crisis regulation”.

“When a platform that built the teen experience then pulls it offline, it is saying that filtered chats are not enough when the emotional pull of the technology is this strong,” he told BBC News.

“This is not about errors in content. It's about how AI bots imitate real relationships and blur the boundaries for young users,” he added.

Mr Navarra also said a big challenge for Character.ai would be to create an attractive artificial intelligence platform that teenagers would still want to use rather than move on to “less safe alternatives”.

Meanwhile, Dr Nomisha Kurian, who has researched the safety of artificial intelligence, said limiting teenagers' use of chatbots was a “sensible move”.

“It helps separate creative play from more personal, emotionally sensitive communication,” she said.

“This is very important for young users who are still learning to navigate emotional and digital boundaries.

“Character.ai’s new measures may reflect a stage of maturity in the AI industry, with child safety increasingly recognized as an urgent priority for responsible innovation.”
