OpenAI has published new estimates of the number of ChatGPT users who show possible signs of mental health problems, including mania, psychosis or suicidal ideation.
The company said that about 0.07% of ChatGPT users active in a given week showed such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations.
While OpenAI claims such cases are “extremely rare,” critics say even a small percentage could amount to hundreds of thousands of people, given that ChatGPT recently reached 800 million weekly active users, according to boss Sam Altman.
Amid growing scrutiny, the company said it has established a network of experts around the world to advise it.
These experts include more than 170 psychiatrists, psychologists and primary care physicians who have practiced in 60 countries, the company said.
According to OpenAI, these experts helped develop a series of responses in ChatGPT designed to encourage users to seek help in the real world.
But a look at the company's data has raised some eyebrows among mental health experts.
“While 0.07% sounds like a small percentage, at the level of a population with hundreds of millions of users, it could actually be quite a lot of people,” said Dr. Jason Nagata, a professor who studies technology use among youth at the University of California, San Francisco.
“AI can increase access to mental health support and support mental health in some ways, but we need to be aware of the limitations,” Dr. Nagata added.
The company estimates that 0.15% of ChatGPT users engage in conversations that contain “clear indicators of potential suicidal planning or intent.”
OpenAI said recent updates to its chatbot are designed to “safely and compassionately respond to potential signs of delusion or mania” and flag “indirect signals of potential self-harm or risk of suicide.”
ChatGPT has also been trained to reroute sensitive conversations “originating from other models to safer models” by opening them in a new window.
In response to questions from the BBC about criticism over the number of people potentially affected, OpenAI acknowledged that this small percentage of users represents a significant number of people and said it is taking the changes seriously.
The changes come as OpenAI faces growing legal scrutiny over how ChatGPT interacts with users.
In one of the most high-profile lawsuits recently filed against OpenAI, a California couple sued the company over the death of their teenage son, alleging that ChatGPT encouraged him to take his own life in April.
The lawsuit was filed by the parents of 16-year-old Adam Raine and is the first legal action accusing OpenAI of wrongful death.
In another case, the suspect in an August murder-suicide in Greenwich, Connecticut, had posted hours of his conversations with ChatGPT, which appeared to have fueled his delusions.
More users are struggling with AI psychosis as “chatbots create an illusion of reality,” said Professor Robin Feldman, director of the University of California's Institute for AI Law and Innovation. “It's a powerful illusion.”
She said OpenAI deserved praise for “sharing statistics and efforts to address the problem” but added: “A company can display all sorts of warnings, but a person who is mentally at risk may not be able to heed those warnings.”