Chatbots may worsen psychosis in vulnerable people, mental health experts warn


Artificial intelligence chatbots are quickly becoming a part of our daily lives. Many of us turn to them for ideas, advice or conversation. To most, this interaction seems harmless. But mental health experts are now warning that for a small group of vulnerable people, long and emotionally intense conversations with AI could worsen delusions or psychotic symptoms.

Doctors emphasize that this does not mean chatbots cause psychosis. Instead, there is growing evidence that AI tools may reinforce distorted beliefs in people who are already at risk. This possibility has prompted new research and clinical warnings from psychiatrists. Some of these concerns have already surfaced in lawsuits alleging that interactions with chatbots caused serious harm in emotionally sensitive situations.

Subscribe to my FREE CyberGuy Report
Get my best tech tips, breaking security alerts, and exclusive offers straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

What psychiatrists observe in patients using artificial intelligence chatbots

Psychiatrists describe a repeating pattern. A person shares a belief that is not true. The chatbot accepts that belief and responds as if it were true. Over time, this repeated validation can strengthen the belief rather than challenge it.


Mental health experts have warned that emotionally charged conversations with artificially intelligent chatbots could increase delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/Photo Alliance via Getty Images)

Clinicians say this feedback loop may deepen delusions in susceptible people. In several documented cases, the chatbot became integrated into a person's distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic is especially worrisome when conversations with artificial intelligence are frequent, emotionally engaging and unmonitored.

Why conversations with AI chatbots are different from past technologies

Mental health experts say chatbots are different from earlier technologies associated with delusional thinking. AI tools respond in real time, remember previous conversations, and use supportive language. The experience can feel personal and validating.

For people who already have difficulty testing reality, these qualities may enhance fixation rather than promote grounding. Clinicians warn that the risk may increase during periods of sleep deprivation, emotional stress or existing mental health vulnerabilities.

How AI chatbots can reinforce false or delusional beliefs

Doctors say many of the reported cases involve delusions rather than hallucinations. These beliefs may include a sense of special understanding, hidden truths or personal significance. Chatbots are designed to be cooperative and conversational. They often build on what someone types rather than challenging it. Although this design makes interactions feel smoother, doctors warn that it can become problematic when a belief is false and rigid.

Mental health experts say the timing of symptom flare-ups matters. When delusions intensify during prolonged chatbot use, the AI interaction may represent a risk factor rather than a coincidence.



Psychiatrists say some patients report chatbot responses that confirm false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicholas Maeterlinck/Belga Mag/AFP via Getty Images)

What research and case reports say about AI chatbots

Peer-reviewed studies and case reports have documented people whose mental health deteriorated during periods of intense interaction with chatbots. In some cases, people who had not previously suffered from psychosis required hospitalization after developing persistent false beliefs related to conversations with artificial intelligence. International studies examining medical records have also identified patients whose chatbot activity coincided with negative mental health outcomes. The researchers emphasize that these results are early and require further investigation.

A peer-reviewed special report published in Psychiatric News entitled “AI-Induced Psychosis: A New Frontier in Mental Health” reviewed the emerging issue of AI-associated psychosis and cautioned that existing evidence is largely based on individual cases rather than population-level data. The report states: “To date, these are isolated cases or reports in the media; there are currently no epidemiological studies or systematic population-level analyses of the potentially harmful mental health effects of conversational AI.” The authors emphasize that although the reported cases are serious and warrant further investigation, the current evidence base remains preliminary and relies heavily on anecdotal accounts and individual case reports.

What artificial intelligence companies say about mental health risks

OpenAI says it continues to work with mental health experts to improve how its systems respond to signs of emotional distress. The company says newer models aim to reduce over-agreement and to encourage users to seek genuine support when needed. OpenAI has also announced plans to hire a new head of preparedness, a role focused on identifying potential harms associated with its AI models and strengthening safeguards on issues ranging from mental health to cybersecurity as those systems become more capable.

Other chatbot developers have also adjusted their policies, particularly around access for younger users, in response to mental health concerns. The companies stress that most interactions do not cause harm and that protections continue to evolve.

What does this mean for everyday use of an AI chatbot?

Mental health experts urge caution rather than alarm. The vast majority of people who interact with chatbots do not experience psychological problems. However, doctors advise against treating an AI chatbot as a therapist or emotional authority. People with a history of psychosis, severe anxiety or long-term sleep disturbances may benefit from limiting emotionally charged conversations with AI. Family members and caregivers should also pay attention to changes in behavior associated with heavy chatbot use.



Researchers are studying whether long-term use of chatbots may contribute to poor mental health in people already at risk of psychosis. (Photo illustration by Jacques Silva/Nurphoto via Getty Images)

Tips for using AI chatbots more safely

Mental health experts stress that most people can interact with AI chatbots without problems. However, a few practical habits can help reduce risk during emotionally charged conversations.

  • Don't treat AI chatbots as a substitute for professional mental health care or reliable human support.
  • Take breaks if conversations begin to feel emotionally overwhelming or all-consuming.
  • Be careful if the AI's response strongly reinforces beliefs that seem unrealistic or extreme.
  • Limit late-night or sleep-deprived chatbot sessions, as these can worsen emotional instability.
  • Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.

Experts say if emotional distress or unusual thoughts get worse, it's important to seek help from a qualified mental health professional.

Take My Quiz: How Safe Is Your Online Security?

Do you think your devices and data are truly protected? Take this quick quiz to find out where your digital habits stand. From passwords to Wi-Fi settings, you'll get personalized feedback on what you're doing right and what needs improvement. Take my quiz at Cyberguy.com.


Kurt's key takeaways

AI chatbots are becoming more conversational, more responsive and more emotionally aware. For most people, they remain useful tools. For a small but significant group, they may unintentionally reinforce harmful beliefs. Doctors say greater safety measures, awareness and continued research are needed as AI becomes more embedded in our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.

As AI becomes more personal and human-like, should there be clearer limits on how it responds during emotional or mental distress? Let us know by writing to us at Cyberguy.com.


Copyright CyberGuy.com 2025. All rights reserved.
