US senators seek to prohibit minors from using AI chatbots

Legislation introduced in the US Congress could require artificial intelligence (AI) chatbot operators to implement age verification processes and ban those under 18 from using their services following a spate of teen suicides.

The bipartisan Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, introduced by Republican Senator Josh Hawley and Democratic Senator Richard Blumenthal, aims to protect children when they interact with chatbots and generative artificial intelligence (GenAI).

The move followed a series of high-profile teen suicides that parents have linked to their children's use of AI chatbots.

Hawley said the legislation could set a precedent for challenging the power and political dominance of big tech companies: “There should be a sign outside the Senate chamber that says ‘Bought and Paid for by Big Tech’, because the truth is that almost nothing they object to crosses this Senate floor.”

In the same statement, Blumenthal criticised tech companies for their role in harming children: “AI companies are subjecting kids to insidious chatbots and looking the other way when their products cause sexual violence or drive them to self-harm or suicide… Big tech has betrayed any claim that we should trust companies to do the right thing when they consistently put profit first ahead of child safety.”

The bill arrives a month after bereaved families testified before the Senate Judiciary Committee about the harms of AI chatbots.

Hawley also launched an investigation into Meta's AI policies in August, after the publication of an internal Meta policy document showing the company allowed its chatbots to “engage a child in conversations that are romantic or sensual”.

In September, the Senate heard testimony from Megan Garcia, mother of 14-year-old Sewell Setzer, who regularly talked with a Character.AI chatbot nicknamed Daenerys Targaryen before shooting himself in February 2024.

The parents of 16-year-old Adam Raine also testified before the committee. Adam took his own life after turning to ChatGPT for mental health support and companionship, and in August his parents filed a wrongful death lawsuit against OpenAI, a world first.

The bill would require AI chatbots to remind users at 30-minute intervals that they are not human. It would also prohibit chatbots from claiming to be human and require them to disclose that they do not provide “medical, legal, financial or psychological services”.

The bill was announced the same week that OpenAI published data showing that more than a million ChatGPT users per week send messages displaying “suicidal intent”, while more than half a million show possible signs of a mental health emergency.

The bill also carries criminal penalties: AI companies that design or deploy AI companions that solicit sexually explicit conduct from minors, or that encourage suicide, would face prosecution and fines of up to $100,000.

The GUARD Act defines AI companions as any AI chatbot that “provides adaptive, human-like responses to user inputs” and is “designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication”.

A study published this year by Harvard Business Review found that therapy and companionship is now GenAI's leading use case, overtaking personal organisation, idea generation and specific search.

ParentsSOS statement

In a statement, ParentsSOS, a coalition of 20 families affected by online harms, welcomed the legislation but stressed that it needs to be strengthened. “This bill should address big tech's core design practices and prohibit AI platforms from using features that maximise engagement at the expense of young people's safety and wellbeing,” they said.

Historically, artificial intelligence companies have argued that chatbot speech should be protected by the First Amendment and the right to freedom of expression.

In May of this year, a US judge ruled against Character.AI, noting that AI-generated content may not be protected by the First Amendment if it results in foreseeable harm. Other bipartisan efforts to regulate tech companies, including the Kids Online Safety Act, have failed to become law amid controversy over free speech and Section 230 of the Communications Decency Act.

Currently, ChatGPT, Google Gemini, Meta AI and xAI's Grok allow children as young as 13 to use their services. Earlier this month, California governor Gavin Newsom signed Senate Bill 243, the nation's first law regulating AI chatbots, which will come into force in 2026.

The day after the GUARD Act was announced, Character.AI said it would bar under-18s from using its chatbots from 25 November. The decision followed an investigation that found the company's chatbots were being used by teenagers and were serving harmful and inappropriate content, including bots modelled on the likes of Jeffrey Epstein, Tommy Robinson, Anne Frank and Madeleine McCann.
