If you or someone you know may be experiencing a mental health crisis or contemplating suicide, call or text 988. In an emergency, call 911 or seek help from a local hospital or mental health provider. A list of international resources is available here.
A new bill introduced in Congress today would require anyone who owns, operates, or otherwise provides access to AI chatbots in the United States to verify the age of their users and, if a user is found to be a minor, to bar them from using AI companions.
The GUARD Act, proposed by Sens. Josh Hawley, a Missouri Republican, and Richard Blumenthal, a Connecticut Democrat, is designed to protect children when they interact with AI. “These chatbots can manipulate emotions and influence behavior by exploiting developmental vulnerabilities in minors,” the bill states.
The bill follows a Senate Judiciary subcommittee hearing on the harms of AI chatbots, chaired by Hawley last month, during which the committee heard testimony from the parents of three young people who began self-harming or took their own lives after using OpenAI and Character.AI chatbots. Hawley also launched an investigation into Meta's AI policies in August, after the release of internal documents that permitted chatbots to “engage a child in romantic or sensual conversations.”
The bill broadly defines the term “AI companions” to include any artificially intelligent chatbot that “provides adaptive, human-like responses to user interactions” and is “designed to encourage or facilitate simulations of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” This definition could apply both to frontier model providers such as OpenAI and Anthropic (makers of ChatGPT and Claude) and to companies like Character.AI and Replika, whose AI chatbots role-play as specific characters.
It would also mandate age verification measures that go beyond simply entering a date of birth, such as “government-issued identification” or “any other commercially reasonable method” that can reliably determine whether a user is a minor or an adult.
Developing or making available chatbots that solicit, facilitate, or encourage minors to engage in sexual conduct, or that promote or encourage “suicide, non-suicidal self-harm, or imminent physical or sexual violence,” would also become a criminal offense punishable by fines of up to $100,000.
“We are encouraged by the introduction of the GUARD Act and appreciate the leadership of Senators Hawley and Blumenthal in this effort,” said a statement signed by a coalition of organizations including the Youth Alliance, the Tech Justice Law Project, and the Institute for Families and Technology. Noting that “this bill is part of a national movement to protect children and teens from the dangers of companion chatbots,” the statement recommends that the bill strengthen its definition of AI companions and “focus on platform design, prohibiting AI platforms from using features that maximize interaction at the expense of the safety and well-being of young people.”
The bill would also require AI chatbots to periodically remind all users that they are not human and to state that they “do not provide medical, legal, financial, or psychological services.”
Earlier this month, California Gov. Gavin Newsom signed SB 243 into law. It similarly requires AI companies operating in the state to implement child protection measures, including protocols for identifying and addressing suicidal ideation and self-harm, and steps to prevent users from causing harm. The law takes effect on January 1, 2026.
In September, OpenAI announced that it was building an “age prediction system” that automatically routes minors to a teen-appropriate version of ChatGPT. “ChatGPT will be trained not to flirt if asked, or engage in discussions about suicide or self-harm, even in a creative writing setting,” the company wrote. “And if a user under 18 is experiencing suicidal ideation, we will attempt to contact the user's parents and, if we cannot, contact authorities in the event of imminent harm.” That same month, the company also rolled out parental controls that allow parents to manage their children’s use of the product. Meta introduced parental controls for its AI models earlier this month as well.
In August, the family of a teenager who died by suicide filed a lawsuit against OpenAI, alleging that the company weakened safeguards that would have prevented ChatGPT from engaging in conversations about self-harm, a “deliberate decision” to “prioritize interactions,” according to one of the family’s lawyers.