Earlier this month, the company announced a wellness council to address these issues, although critics noted that the council did not include a suicide prevention expert. OpenAI also recently rolled out controls for parents of children who use ChatGPT. The company says it is building an age prediction system that will automatically detect children using ChatGPT and enforce stricter age-appropriate protections.
Rare but consequential conversations
The data released Monday appears to be part of the company's effort to show progress on these issues, though it also sheds light on how deeply AI chatbots can affect public health at scale.
In a blog post about the newly released data, OpenAI said the types of ChatGPT conversations that raise concerns about “psychosis, mania, or suicidal ideation” are “extremely rare” and therefore difficult to measure. The company estimates that about 0.07 percent of users active in a given week and 0.01 percent of messages indicate possible signs of mental health emergencies related to psychosis or mania. On emotional attachment, the company estimates that about 0.15 percent of users active in a given week and 0.03 percent of messages indicate potentially heightened levels of emotional attachment to ChatGPT.
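Those percentages sound small until multiplied by ChatGPT's user base. Assuming the roughly 800 million weekly active users Altman has publicly cited (a figure not included in OpenAI's blog post), 0.07 percent works out to about 560,000 people per week showing possible signs of psychosis- or mania-related emergencies, and 0.15 percent to about 1.2 million showing heightened emotional attachment.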
OpenAI also claims that after evaluating more than 1,000 challenging conversations related to mental health, its new GPT-5 model was 92 percent compliant with desired behavior, compared with 27 percent for the previous GPT-5 version released on August 15. The company also says the latest version of GPT-5 adheres to OpenAI's safeguards more reliably in long conversations; OpenAI has previously acknowledged that its safety measures become less effective as conversations grow longer.
In addition, OpenAI says it is adding new evaluations to try to measure some of the most serious mental health issues facing ChatGPT users. The company says its baseline safety testing for AI language models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.
Despite these ongoing mental health concerns, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had loosened ChatGPT's content restrictions in February, then abruptly reversed course after the August lawsuit. Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues,” but acknowledged that this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.”
If you or someone you know is feeling suicidal or in distress, call the Suicide Prevention Lifeline at 1-800-273-TALK (8255), which will put you in touch with a local crisis center.