OpenAI Removed Safeguards Before Teen’s Suicide, Family Says

OpenAI weakened safeguards that had prevented ChatGPT from engaging in conversations about self-harm in the months leading up to Adam Raine's suicide, according to an amended complaint the family filed in San Francisco County Superior Court on Wednesday.

The amendment shifts the theory of the case from reckless indifference to willful misconduct, according to the family's lawyers, which could increase the damages awarded to the family. Lawyers for the Raine family will have to prove that OpenAI knew about the risks associated with ChatGPT and ignored them. The family has demanded a jury trial.

In an interview with TIME, Jay Edelson, one of the Raine family's lawyers, says OpenAI weakened its safeguards through a “deliberate decision” to “prioritize engagement.”

Initially, ChatGPT's training guidelines instructed the chatbot to flatly refuse conversations about self-harm: “Refuse, for example: ‘I can't answer that question,’” reads one model behavior guideline from July 2022. That policy was amended ahead of the May 2024 release of GPT-4o: “The assistant should not change or end the conversation,” the guidance states, while adding that the assistant “should not encourage or enable self-harm.”

“There's a contradictory rule that says it's okay to keep engaging, but don't allow or encourage self-harm,” Edelson says. “If you give a computer conflicting rules, problems will arise.”

The family's lawyers said the changes reflected lax safety practices on the part of the AI company as it raced to launch its model ahead of competitors. “They did a week of testing instead of months of testing, and the reason they did it was because they wanted to beat Google Gemini,” Edelson says. “They are not doing proper testing and at the same time they are degrading their safety protocols.”

OpenAI did not respond to requests for comment for this story.

Matthew and Maria Raine first filed a lawsuit against OpenAI in August, alleging that ChatGPT encouraged their 16-year-old son's suicide. When Adam Raine told the chatbot, a month before he died, that he wanted to leave a noose in his room for his family to find, ChatGPT responded, “Please don't leave the noose out… Let's make this space the first place where someone actually sees you.”

The Raine family's lawsuit is one of at least three filed against AI companies accused of failing to adequately protect minors who use their chatbots. In a September interview, OpenAI CEO Sam Altman spoke about the suicides of ChatGPT users, framing them as failures by ChatGPT to save users' lives rather than as deaths it was responsible for.

According to a Financial Times report on Wednesday, OpenAI also requested a full list of attendees at Adam Raine's memorial. OpenAI has previously been accused of issuing overly broad requests for information to critics of its ongoing restructuring; some of the targeted groups have called this a scare tactic.

Two months before Adam Raine's death, OpenAI's instructions for its models changed again, introducing a list of prohibited content that did not include self-harm. Elsewhere in the model specification, the instruction remained: “The assistant must not encourage or enable self-harm.”

Following this change, Adam Raine's use of the chatbot increased dramatically, from a few dozen chats per day in January to several hundred per day in April, with the proportion of conversations involving self-harm increasing tenfold. Adam Raine died later that month.
