Leading AI company to ban kids from long chats with its bots

Character.AI, a platform for creating and communicating with artificially intelligent chatbots, plans to begin blocking minors from having open-ended conversations with its virtual characters.

The major change comes as the Menlo Park, Calif., company and other leaders in artificial intelligence face greater scrutiny from parents, child safety groups and politicians over whether chatbots are harming the mental health of teenagers.

Character.AI said in a blog post Wednesday that it is working on a new experience that will allow teens under 18 to create videos, stories and streams with characters. As the company makes this transition, it will limit chats for minors to two hours per day, a cap that will be lowered further in the lead-up to Nov. 25.

Suicide Prevention and Crisis Counseling Resources

If you or someone you know is struggling with suicidal thoughts, seek professional help and call 9-8-8. The first national three-digit hotline in the United States, 988, connects callers with trained mental health counselors. In the U.S. and Canada, text “HOME” to 741741 to reach the Crisis Text Line.

“We do not take this step of removing open-ended Character chat lightly, but we believe it is the right thing to do given the questions that have been raised about how teens are, and should be, interacting with this new technology,” the company said in a statement.

The decision shows how tech companies are responding to mental health issues as more parents sue the platforms after the deaths of their children.

Politicians are also putting more pressure on tech companies by introducing new laws aimed at making chatbots safer.

OpenAI, the creator of ChatGPT, announced new safety features after a California couple filed a lawsuit alleging that its chatbot provided information about suicide methods, including the one their teenage son, Adam Raine, used to kill himself.

Last year, several parents sued Character.AI, alleging that its chatbots caused their children to harm themselves and others. The lawsuits accused the company of releasing the platform without ensuring it was safe.

Character.AI said it takes teen safety seriously and outlined steps it has taken to moderate inappropriate content. Company policies prohibit the promotion, glorification or encouragement of suicide, self-harm and eating disorders.

Following the deaths of their teenagers, parents have called on lawmakers to do more to protect young people as chatbots grow in popularity. While teenagers use chatbots for study, entertainment and more, some also turn to virtual characters for companionship or advice.

Character.AI has more than 20 million monthly active users and more than 10 million characters on its platform. Some characters are fictional; others are based on real people.

Megan Garcia, a Florida mom who sued Character.AI last year, claims the company failed to notify her or offer help to her son, who expressed suicidal thoughts to the app's chatbots.

Her son, Sewell Setzer III, killed himself after talking to a chatbot named after Daenerys Targaryen, a character in the fantasy television series “Game of Thrones.”

Garcia then testified in support of legislation this year that would require chatbot operators to have procedures in place to prevent the creation of content about suicide or self-harm and to install safety guardrails such as directing users to a suicide hotline or emergency text line.

California Gov. Gavin Newsom signed the legislation, Senate Bill 243, into law despite resistance from the tech industry. Newsom vetoed a more controversial bill that he said could inadvertently lead to a ban on minors using artificial intelligence tools.

“We cannot prepare our youth for a future in which AI is ubiquitous by completely preventing them from using these tools,” he wrote in the veto message.

In a blog post, Character.AI said it decided to ban minors from open-ended communication with its artificial intelligence chatbots after receiving feedback from regulators, parents and safety experts. The company is also developing a way to ensure users have an age-appropriate experience and is funding a new nonprofit dedicated to AI safety.

In June, Character.AI also named Karandeep Anand, who previously worked as an executive at Meta and Microsoft, as its new chief executive.

“We want to set a precedent that prioritizes teen safety while still offering opportunities for young users to discover, play and create,” the company said.
