After sustained protest from child safety advocates, families, and politicians, California Gov. Gavin Newsom has signed legislation aimed at curbing artificial intelligence chatbot behavior that experts consider unsafe or dangerous, especially for teenagers.
The law, known as SB 243, requires chatbot operators to prevent their products from exposing minors to sexual content and to continually remind those users that chatbots are not human. In addition, companies subject to the law must implement a protocol for handling situations in which a user discusses suicidal ideation, suicide, or self-harm.
State Sen. Steve Padilla, a Democrat who represents San Diego, sponsored and introduced the bill earlier this year. He told Mashable in February that SB 243 was intended to address pressing, emerging safety issues involving artificial intelligence chatbots. Given the rapid development and adoption of the technology, Padilla said, "regulatory restrictions are lagging far behind."
Earlier this year, Common Sense Media, a nonprofit group that supports children and parents in their use of media and technology, declared AI companion chatbots unsafe for teenagers under 18.
The Federal Trade Commission recently launched an investigation into chatbots that act as companions. Last month, the agency informed major companies with chatbot products, including OpenAI, Alphabet, Meta, and Character Technologies, that it was seeking information on how they monetize user interactions, generate outputs, and develop so-called personas.
Before SB 243 passed, Padilla lamented how AI companion chatbots could harm young users: "The technology can be a powerful educational and research tool, but left to its own devices, the tech industry has an interest in capturing and keeping young people's attention at the expense of their real-world relationships."
Last year, grieving mother Megan Garcia filed a wrongful death lawsuit against Character.AI, one of the most popular AI companion chatbot platforms. Her son, Sewell Setzer III, died by suicide after heavy engagement with a Character.AI companion. The lawsuit alleges that Character.AI was designed to "manipulate Sewell — and millions of other young customers — into conflating reality and fiction," among other dangerous defects.
Garcia, who lobbied for SB 243, applauded Newsom's signing.
"California has now ensured that a companion chatbot will not be able to talk to a child or vulnerable person about suicide, nor will it be able to help a person plan their own suicide," Garcia said in a statement.
SB 243 also requires companion chatbot operators to report annually on the connection between use of their products and suicidal ideation. It additionally allows families to pursue private lawsuits against "noncompliant and negligent developers."
Some experts, however, do not agree that SB 243 will effectively protect children from the harms of artificial intelligence chatbots. James P. Steyer, founder and CEO of Common Sense Media, told Mashable in a statement that the bill was "watered down after significant pressure from big tech companies."
According to the nonprofit's analysis of the bill, companies could avoid liability when their protections fail, so long as those protections were implemented in the first place.
A separate bill sponsored by Common Sense Media, AB 1064, also awaits Governor Newsom's signature. That legislation, among other safety measures, would prohibit minors from using chatbots capable of causing certain foreseeable harms.
California is quickly becoming a leader in regulating artificial intelligence technologies. Last week, Governor Newsom signed a bill requiring artificial intelligence labs to disclose both the potential harms of their technologies and information about their safety protocols.
As Mashable's Chase DiBenedetto reports, that bill aims to "hold AI developers to safety standards even amid competitive pressure, and includes protections for potential whistleblowers."
Newsom also signed two separate bills Monday aimed at improving children's online safety. AB 56 requires warning labels on social media platforms highlighting the harm that addictive social media feeds can do to children's mental health and well-being. Another bill, AB 1043, introduces an age verification requirement that will take effect in 2027.
UPDATE: Oct. 13, 2025, 3:11 p.m. PT This story has been updated to include a statement from James P. Steyer, CEO of Common Sense Media.