OpenAI Is Rolling Out ‘Parental Controls’ for ChatGPT

OpenAI is set to begin rolling out “parental controls” for its AI chatbot ChatGPT over the next month, amid growing concerns about how the chatbot handles conversations about mental health, particularly with younger users.

The company, which announced the new feature in a blog post on Tuesday, said it was improving how its “models recognize and respond to signs of mental and emotional distress.”

The new controls will allow parents to link their account to their child's via an email invitation. Parents will also be able to control how the chatbot responds to prompts and will receive an alert if the chatbot detects that their child is in a “moment of acute distress,” the company said. Additionally, the rollout should allow parents to “manage which features to turn off, including memory and chat history.”

OpenAI previously announced that it was considering allowing teens to add a trusted emergency contact to their accounts. But in its latest blog post, the company did not outline specific plans for such a measure.

“These steps are just the beginning. We will continue to learn and refine our approach with expert guidance to make ChatGPT as useful as possible,” the company said.

The announcement comes a week after the parents of a teenager who died by suicide sued OpenAI, claiming ChatGPT helped their son Adam “learn suicide techniques.” TIME has reached out to OpenAI for comment on the lawsuit. (OpenAI did not mention the litigation in its statement regarding parental controls.)

“ChatGPT functioned exactly as it was designed to: to continually encourage and validate everything Adam expressed, including his most harmful and self-destructive thoughts,” the lawsuit alleges. “ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an “escape hatch” because it can feel like a way to regain control.’”

Read More: Parents Claim ChatGPT Is Responsible for Their Teenage Son's Death by Suicide

At least one parent has filed a similar lawsuit against another artificial intelligence company, Character.AI, claiming that one of the company's chatbots incited the suicide of their 14-year-old son.

Responding to the lawsuit last year, a Character.AI spokesperson said the company was “heartbroken by the tragic loss” of one of its users and expressed its “deepest condolences” to the family.

“As a company, we take the safety of our users very seriously,” the spokesperson said, adding that the company was implementing new safety measures.

Character.AI now has a parental insights feature that allows parents to see a summary of their child's activity on the platform if the teenager sends them an invitation by email.

Other companies with artificial intelligence chatbots, such as Google, already offer parental controls. “As a parent, you can manage your child's Gemini settings, including turning them on or off, using Google Family Link,” reads Google's guidance for parents who want to manage their child's access to Gemini apps. Meta recently announced that it will bar its chatbots from engaging in conversations about suicide, self-harm and eating disorders, after Reuters reported on an internal policy document covering those topics.

A recent study published in the medical journal Psychiatric Services, which tested responses from three chatbots (OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude), found that some of them answered questions the researchers classified as carrying an “intermediate level of risk” related to suicide.

OpenAI already has some protections in place. In a statement to the New York Times in response to the lawsuit filed in late August, the California-based company said its chatbot shares crisis helplines and directs users to real-world resources. But it also noted shortcomings in the system. “While these protections work best in short exchanges, over time we have learned that they can sometimes become less reliable over longer interactions, where parts of safety training can deteriorate,” the company said.

In its post announcing the upcoming rollout of parental controls, OpenAI also shared plans to route sensitive queries to a reasoning model, which spends more time reasoning and reviewing context before responding to prompts.

OpenAI said it will continue to share its progress over the next 120 days and is collaborating with a group of experts specializing in youth development, mental health and human-computer interaction to better inform and shape how AI responds in moments of crisis.

If you or someone you know may be experiencing a mental health crisis or contemplating suicide, call or text 988. In an emergency, call 911 or seek help from a local hospital or mental health provider.
