California Gov. Gavin Newsom signed several artificial intelligence safety bills into law on Monday and vetoed one of the most controversial measures, as lawmakers' efforts to protect children from AI met strong resistance from the tech industry.
One of the key bills signed into law, Senate Bill 243, will require chatbot operators to put procedures in place to prevent the production of suicide or self-harm content and to build in safeguards such as directing users to a suicide hotline or crisis text line.
The measure is among a number of bills Newsom signed Monday that will affect tech companies. Others he signed address issues such as age verification, warning labels on social media and the non-consensual distribution of AI-generated sexually explicit content.
Under SB 243, operators will be required to notify minor users at least every three hours that they should take a break and that the chatbot is not human. They will also be required to take “reasonable measures” to prevent companion chatbots from producing sexually explicit content.
“New technologies like chatbots and social media can inspire, educate and connect, but without real barriers, technology can also exploit, mislead and endanger our children,” Newsom said in a statement.
The signing shows how Newsom is trying to balance child safety concerns with preserving California's leadership in artificial intelligence.
“We can continue to lead in artificial intelligence and technology, but we must do so responsibly—protecting our children every step of the way,” Newsom said.
Some technology industry groups, such as TechNet, remained opposed to SB 243, while child safety groups such as Common Sense Media and Tech Oversight California withdrew their support for the bill over what they called “industry-favorable exceptions.” Amendments to the bill limited who receives certain notifications and carved out exceptions for some chatbots in video games and for virtual assistants used in smart speakers.
TechNet, a technology lobbying group whose members include OpenAI, Meta and Google, along with other trade groups, said the bill's definition of a companion chatbot was too broad, according to a legislative analysis. The group also told lawmakers that allowing people to sue for violations of the new law would be an “overly punitive method of enforcement.”
Newsom later announced that he had vetoed a more controversial AI safety bill, Assembly Bill 1064.
The bill would have prohibited businesses and other entities from making companion chatbots available to minors in California unless the chatbots were not “foreseeably capable” of harmful conduct, such as encouraging a child to engage in self-harm or violence, or promoting disordered eating.
In his veto statement, Newsom said that while he agreed with the bill's intent, it could inadvertently lead to a ban on artificial intelligence tools used by minors.
“We cannot prepare our youth for a future in which AI is ubiquitous by completely preventing them from using these tools,” he wrote in his veto message.
Child safety groups and California Attorney General Rob Bonta had urged the governor to sign AB 1064.
Common Sense Media, a nonprofit that sponsored AB 1064 and recommends that minors not use AI companion chatbots, called the veto “disappointing.”
“It is truly sad that big tech companies fought this law, which is actually in the best interests of their industry in the long run,” Jim Steyer, founder of Common Sense Media, said in a statement.
Facebook's parent company Meta and the Computer and Communications Industry Association opposed the legislation, lobbying against the bill and saying it would jeopardize innovation and put California companies at a disadvantage.
California is a world leader in artificial intelligence and is home to 32 of the world's 50 largest AI companies.
The technology, which can answer questions and quickly generate text, code, images and even music, has exploded in popularity over the past three years. As it evolves, it changes the way people consume information, work and learn.
Suicide Prevention and Crisis Counseling Resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States' first nationwide three-digit mental health crisis hotline, 988, connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
Lawmakers fear chatbots could harm young people's mental health as they increasingly turn to the technology for companionship and advice.
Parents have sued OpenAI, Character.AI and Google, alleging the companies' chatbots harmed the mental health of their teenagers, who died by suicide.
Technology companies, including chatbot maker Character.AI and ChatGPT maker OpenAI, say they take child safety seriously and are rolling out new features to help parents monitor how much time their children spend with chatbots.
But parents also want lawmakers to act. One parent, Megan Garcia, testified in support of SB 243, calling on lawmakers to do more to regulate AI after the death of her son, Sewell Setzer III, by suicide. Last year, the Florida mother sued chatbot platform Character.AI, alleging the company failed to notify her or offer help when her son expressed suicidal thoughts to virtual characters on the app.
She praised the bill after the governor signed it.
“American families, like mine, are fighting for the online safety of our children,” Garcia said in a statement.