OpenAI, the company behind ChatGPT, launched new parental controls for teen accounts on Monday, a month after a California family filed a lawsuit alleging its chatbot encouraged their son to commit suicide.
With the new controls rolling out starting Monday, parents will be able to set specific hours when their children can't use ChatGPT, disable the voice mode that lets users talk with the chatbot, and prevent teens' chat history from being saved to ChatGPT's memory. Another option prevents teens' data from being used to train OpenAI's models.
“Parental controls allow parents to link their account with their teen's account and customize settings for a safe and age-appropriate experience,” OpenAI said in a blog post announcing the new feature.
Parents will have to link their accounts to their children's accounts in order to use the parental controls.
Under the new controls, parents of linked accounts will be notified if ChatGPT detects potential signs that their teen may be harming themselves. A team of specialists will review the warning signs and contact parents via email, text message or mobile push notification.
“No system is perfect, and we know we can sometimes raise the alarm when there is no real danger, but we believe it is better to act and alert parents so they can intervene than to remain silent,” OpenAI said in a blog post.
If OpenAI is unable to reach a parent in an imminent, life-threatening situation, the company said it is exploring ways to contact law enforcement directly. Parents will need a ChatGPT account to access the new controls.
To set it up, a parent can send an invitation to their teen from their own account settings; once the teen accepts, the parent can manage the teen's ChatGPT experience from their account. Teens can also invite a parent to link accounts.
Once parents and children link their accounts, the teen's account will automatically receive additional content protections, including reduced exposure to graphic content, viral challenges, sexual, romantic or violent role-play, and extreme beauty ideals, to help keep the experience age-appropriate.
These changes come amid increased regulatory and public scrutiny of teens' use of chatbots.
Last year, a Florida mother alleged in a federal lawsuit that another chatbot from a platform called Character.AI was responsible for the suicide of her 14-year-old son.
She accused the company of failing to notify her or offer help when her son expressed suicidal thoughts to virtual characters. Character.AI is a role-playing chatbot platform where people can create and interact with digital characters that mimic real and fictional people. More families have sued Character.AI this year.
In August, the parents of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT provided him with information about suicide methods, including the one the teenager used to kill himself. Adam used the paid version of ChatGPT-4o, which encouraged him to seek professional help when he expressed thoughts of self-harm. He managed to bypass those safeguards by saying he was asking for details for a story he was writing.
In September, OpenAI CEO Sam Altman wrote that the company was prioritizing “safety ahead of privacy and freedom for teens.”
While OpenAI's rules prohibit users under 13 from using its services, the company announced Monday that it is also building an “age prediction system” that will predict whether a user is under 18 and automatically apply teen-appropriate settings.
California lawmakers have passed two AI chatbot safety bills that the tech industry lobbied against.
Gov. Gavin Newsom has until mid-October to approve or reject the bills, Assembly Bill 1064 and Senate Bill 243.
Rights groups noted that while OpenAI's latest changes were a step in the right direction, laws are needed for real accountability.
“They reflect a broader pattern of companies making hasty public commitments only after harm has been done,” said Adam Billen, vice president of public policy at Encode AI, a youth-led coalition of activists advocating for AI safety. “We don't need empty promises; we need accountability enshrined in laws like AB 1064.”
Times staff writer Queenie Wong contributed to this report.