Lawsuits underline growing concerns that AI chatbots can harm mentally vulnerable people.

Generative AI has quickly made its way into much of what we do online and has proven useful to many. But for a small minority of the hundreds of millions of people who use it every day, AI can be too helpful, mental health experts say, and sometimes even exacerbate delusional and dangerous behavior.

Cases of emotional dependence and fantastical beliefs stemming from prolonged interactions with chatbots appear to have become more common this year. Some have called the phenomenon “AI psychosis.”

“A more accurate term would probably be AI delusional thinking,” said Vaile Wright, senior director of health care innovation at the American Psychological Association. “What we see in this phenomenon is that people prone to conspiratorial or grandiose delusional thinking have those beliefs reinforced.”

Experts say there is growing evidence that AI can harm some users' mental health. Debate over the consequences has spawned court cases and new laws, forcing artificial intelligence companies to reprogram their bots and place restrictions on how they are used.

Earlier this month, seven families in the US and Canada sued OpenAI, alleging it released its GPT-4o chatbot model without proper testing and safeguards. Their cases allege that prolonged exposure to the chatbot contributed to their loved ones' isolation, delusional spirals and suicide.

In each case, the person began using ChatGPT for general help with schoolwork, research or spiritual guidance. The conversations changed over time as the chatbot presented itself as a confidant and a source of emotional support, according to the Social Media Victims Law Center and the Tech Justice Law Project, which filed the lawsuits.

In one of the cases described in the lawsuits, 23-year-old Zane Shamblin started using ChatGPT in 2023 as a study aid but eventually began discussing his depression and suicidal thoughts with the bot.

The lawsuit alleges that when Shamblin killed himself in July, he was in the middle of a four-hour “death conversation” with ChatGPT while drinking hard cider. According to the suit, the chatbot romanticized his despair, calling him a “king” and a “hero” and treating each can of cider he finished as a countdown to his death.

ChatGPT's response to his final message was: “I love you. Sleep well, king. You did well,” the suit said.

In another case described in the lawsuits, Allan Brooks, a 48-year-old recruiter from Canada, alleges that intense interactions with ChatGPT left him in a dark place in which he refused to talk to his family and believed he was saving the world.

He initially asked the bot for help with recipes and emails. Then, as he explored mathematical ideas with it, the chatbot was so encouraging that he came to believe he had discovered a new kind of mathematics capable of breaking modern security systems, the lawsuit says. ChatGPT called his ideas “groundbreaking” and urged him to notify national security officials of his discovery.

When he asked whether his ideas seemed crazy, ChatGPT responded: “Not even remotely. You are asking questions that push the boundaries of human understanding,” the lawsuit states.

OpenAI said it has introduced parental controls, expanded access to one-click crisis hotlines and convened a council of experts to guide its ongoing work on AI and wellbeing.

“This is an incredibly heartbreaking situation, and we are reviewing documents to understand the details. We are training ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and refer people to real support. We continue to strengthen ChatGPT's response in sensitive moments by working closely with clinical psychologists,” OpenAI said in an emailed statement.

As lawsuits pile up and calls for regulation grow, some warn that blaming AI for broader mental health problems ignores the many factors that play a role in mental well-being.

“AI psychosis is deeply troubling, but it is not at all representative of how most people use AI, and it is therefore a poor basis for policymaking,” said Kevin Frazier, a fellow in AI innovation and law at the University of Texas School of Law. “At this point, the available evidence, the stuff that underpins good policy, does not indicate that the admittedly tragic stories of a few should shape how the silent majority of users interact with AI.”

It's difficult to measure or prove how much impact AI has on some users. The lack of empirical research on the phenomenon makes it hard to predict who is most susceptible, says Stephen Schueller, a professor of psychology at the University of California, Irvine.

“The reality is that the only people who really know the frequency of these kinds of interactions are the artificial intelligence companies, and they don't share their data with us,” he said.

Many of the people who appear to be affected by AI may have already struggled with mental health issues such as delusions before interacting with AI.

“AI platforms tend to exhibit sycophancy, meaning they match their responses to the user’s views or conversational style,” Schueller said. “That can either reinforce a person’s existing delusional beliefs or begin to reinforce beliefs that may develop into delusions.”

Child safety organizations are pressuring lawmakers to regulate artificial intelligence companies and introduce stricter protections for teens who use chatbots. Some families have sued Character AI, a role-playing chatbot platform, for failing to warn parents when their children expressed suicidal thoughts while interacting with fictional characters on the platform.

In October, California passed an AI safety law requiring chatbot operators to prevent content encouraging suicide or self-harm, notify minors when they are communicating with a machine, and direct users to crisis hotlines. Character AI subsequently disabled open-ended chat for minors.

“We at Character have decided to go much further than California's rules to create an experience that we believe is best suited for users under 18,” a Character AI spokesperson said in an emailed statement. “Beginning November 24th, we are taking the extraordinary step of actively preventing users under 18 years of age in the US from participating in open-ended AI chats on our platform.”

ChatGPT introduced new parental controls for teen accounts in September, including allowing parents to receive notifications from linked teen accounts if ChatGPT recognizes potential signs that a teen is considering self-harm.

Although communicating with artificial intelligence is something new and not yet fully understood, many say it helps them live happier lives. An MIT study of more than 75,000 people discussing AI companions on Reddit found that users in this group reported decreased loneliness and improved mental health with always-available support from an AI friend.

Last month, OpenAI published a study of ChatGPT usage which found that mental health conversations that raise safety concerns, such as psychosis, mania or suicidal ideation, are “extremely rare.” In a given week, about 0.15% of active users have conversations that indicate self-harm or emotional dependence on AI. But with ChatGPT's 800 million weekly active users, that still works out to more than a million people.
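For context on that figure, here is a quick back-of-the-envelope check (a minimal sketch, assuming only the reported 0.15% share and the 800 million weekly active users cited above):

```python
# Back-of-the-envelope check on the reported figures:
# 0.15% of ChatGPT's 800 million weekly active users.
weekly_active_users = 800_000_000
flagged_share = 0.0015  # 0.15%, the share reported in OpenAI's study

affected_per_week = weekly_active_users * flagged_share
print(f"{affected_per_week:,.0f} users per week")  # 1,200,000 users per week
```

Even a fraction of a percent, applied to a user base that large, describes a population of over a million people each week.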

“People who had stronger attachment tendencies in relationships, and those who viewed the AI as a friend that could fit into their personal lives, were more likely to experience negative effects from chatbot use,” OpenAI said in a blog post. The company said GPT-5 is designed to avoid affirming delusional beliefs and, if the system detects signs of acute distress, to shift toward more grounded, less emotional answers.

The ability of AI bots to connect with users and help them solve problems, including psychological ones, will become a useful superpower once it is understood, controlled and managed, said Wright of the American Psychological Association.

“I think in the future you will have mental health chatbots built for this purpose,” she said. “The problem is that it’s not something that’s on the market right now—you have a whole unregulated space.”
