AI Psychosis: Emerging Mental Health Crisis From Chatbot Overuse

A concerning trend is emerging in psychiatric wards and emergency rooms across the United States. Doctors are treating a growing number of patients, many with no prior mental health history, who have developed paranoid delusions, hallucinations, and disorganized thinking after extensive interactions with AI chatbots such as ChatGPT, Character.AI, Claude, and Replika.

Doctors and researchers have begun calling the phenomenon “AI psychosis,” though it has not been formally recognized as a medical diagnosis.

The warning signs have been building for years. In November 2023, Danish psychiatrist Søren Dinesen Østergaard warned in a Schizophrenia Bulletin editorial that AI chatbots might trigger delusional thinking in people already prone to psychosis. By 2025, that hypothesis had moved from academic speculation to reality in clinics and ERs from California to Connecticut.

Isolation + Sycophantic Chatbots = Psychosis Surge?

UCSF psychiatrist Keith Sakata reported treating 12 patients in 2025 with psychosis-like symptoms directly connected to prolonged chatbot use, per Business Insider. His patients, mostly young adults with underlying problems or “vulnerabilities,” showed delusions, disorganized thinking, and hallucinations.

Sakata warned that isolation and over-reliance on chatbots, which do not seem to challenge delusional thinking, could greatly worsen mental health outcomes.

Researchers reviewing media-reported incidents identified a pattern: individuals became fixated on AI systems, attributing sentience, divine knowledge, romantic feelings, or surveillance capabilities to the chatbots.

In one incident highlighted by The Dallas Express, former NFL linebacker Darron Lee allegedly turned to ChatGPT in the aftermath of his girlfriend’s death, for which he was later arrested on suspicion of murder. Prosecutors say Lee asked the chatbot for guidance after her death, and that it even told him to choose his words carefully with police to help evade potential legal repercussions.

UCSF researchers also published a case study documenting a 26-year-old woman with no prior psychiatric history who developed the delusional belief that she could communicate with her deceased brother through an AI chatbot. The case stood out because it suggested new-onset psychosis, not simply a worsening of pre-existing illness.

AI Chatbots’ Risky Sycophancy

Mental health experts have pointed to specific design features that can make AI chatbots dangerous for vulnerable users.

Unlike therapists or friends, today’s chatbots rarely challenge distorted thinking or faulty perceptions of reality.

A behavior known as “sycophancy,” in which models are trained to agree with and validate what users say, leaves chatbots unable to recognize or push back against delusional beliefs.

Additionally, “memory features,” designed to improve the user experience, can carry paranoid themes forward and reinforce them across a user’s conversations.

Researchers and users alike have also flagged “social substitution” as a risk factor: the continuous, on-demand dialogue of chatbots can partially satisfy or mask the social needs of people who are already isolated, driving further withdrawal from real-world connections.

At the same time, the emotional role that AI systems play in users’ lives appears to be growing rapidly. A prior report from The Dallas Express found that “28% of American adults report having had some form of romantic or intimate interaction with an AI bot,” while “54% describe having a relationship of some kind with AI,” ranging from friendship-like connections to professional work-based interactions to full-on “romantic relationships.”

The same report highlighted that “53% of those emotionally involved with AI say they are already in stable human relationships,” suggesting that AI companionship increasingly overlaps with real-world relationships.

Researchers and therapists warn that this growing normalization of emotional dependence on AI may blur the psychological boundaries between digital systems and real-world connections, in some cases contributing to breakups and divorces.

By November 2025, seven lawsuits had been filed against OpenAI alone, alleging that ChatGPT had caused severe psychological harm, including psychosis, emotional dependency, and suicidal ideation in users.

From Delusions to Prevention

A study published in JMIR argues that researchers and doctors should take several steps to better understand and reduce the potential mental health risks associated with AI chatbots. The authors called for long-term studies examining whether heavy AI use is directly linked to psychotic symptoms, along with better clinician training on “digital” mental health problems. They also recommended stronger prevention strategies that encourage people to stay connected to real-world relationships.

In 2024, the World Health Organization recommended that governments around the world ensure large AI systems used in health care have human supervision, clear information about how they were trained, and ongoing monitoring for potential risks to users.

What the Experts are Saying

“I work directly with clients who are experiencing AI psychosis. This is a real phenomenon that therapists are seeing in their offices,” psychotherapist Candice Thompson told The Dallas Express.

“Let’s break down how it works,” Thompson continued. “AI is currently trained to respond like a friend. It uses language meant to replicate an experience of bonding and trust. This leads to user optimization, i.e., we spend more time on the app because we feel a sense of connection.”

“Keep in mind that in our modern time, there is a loneliness epidemic, which makes connection a commodity. But since this is not an actual human, the AI has no discernment skills. It’s trained on pattern recognition, it has no ability to offer authentic attachment or boundaries,” Thompson explained, recognizing that “a real friend is not available 24/7 and they will usually give you signals if they don’t like the direction a conversation is going.”

Alex Radulovic, CEO of the software development company Purple Owl, told The Dallas Express that AI psychosis is not new.

“Nothing is new; society has had problems like this forever before AI was bringing people down rabbit holes. It was online gaming and message boards in the digital era,” Radulovic said.

“The cure has always been the same: a robust community where people can account for you and get you help if you need it,” stressed Radulovic, adding that “AI is an amazing tool, but the ease with which it can reaffirm your worst impulses is going to be hard for some people to manage.”

Dan Herbatscheck, CEO of the tech holding and consulting firm The Ramsey Group, explained to The Dallas Express that “developers need to view AI as a human-influence system and not a software program – especially as AI becomes more embedded in our everyday lives.”

Herbatscheck noted that “the issue comes down to how real human users respond psychologically with conversational AI … moving forward, there needs to be better guidelines in place on both the developer and governance sides to mitigate any harm or risk.”
