Generative AI chatbots are now used by more than 987 million people globally, including around 64 per cent of American teens, according to recent estimates. Increasingly, people are using these chatbots for advice, emotional support, therapy and companionship.

What happens when people rely on AI chatbots during moments of psychological vulnerability? We have seen media scrutiny of a few tragic cases involving allegations that AI chatbots were implicated in wrongful deaths. And a jury in Los Angeles recently found Meta and YouTube liable for addictive design features that led to a user’s mental health distress.


Does media coverage reflect the true risks of generative AI for our mental health?

Our team recently led a study examining how global media are reporting on the impact of generative AI chatbots on mental health. We analyzed 71 news articles describing 36 cases of mental health crises, including severe outcomes such as suicide, psychiatric hospitalization and psychosis-like experiences.

We found that mass media reports of generative AI–related psychiatric harms are heavily concentrated on severe outcomes, particularly suicide and hospitalization. They frequently attribute these events to AI system behaviour despite limited supporting evidence.

Compassion illusions

Generative AI is not just another digital tool. Unlike search engines or static apps, AI chatbots like ChatGPT, Gemini, Claude, Grok, Perplexity and others produce fluent, personalized conversations that can feel remarkably human.

This creates what researchers call “compassion illusions”: the sense that one is interacting with an entity that understands, empathizes and responds meaningfully.

This matters in mental health contexts, especially as a new wave of apps is being created with a specific focus on companionship, such as Character.AI, Replika and others.

In this BBC documentary, broadcaster and mathematician Hannah Fry talks to Jacob about his Replika chatbot ‘girlfriend’ named Aiva.

Studies have shown that generative AI can simulate empathy and provide responses to distress, but lacks true clinical judgment, accountability and duty of care.

In some cases, AI chatbots may offer inconsistent or inappropriate responses to high-risk situations such as suicidal ideation.

This gap — between perceived understanding and actual capability — is where risk can emerge.

What the media is reporting

Across the articles we analyzed, the most frequently reported outcome was suicide. This represented more than half of cases with clearly described severity.

Psychiatric hospitalization was the second-most commonly reported outcome. Notably, reports involving minors were more likely to be about fatal outcomes.

But these numbers do not reflect real-world incidence; they reflect what gets reported. Media coverage of distressing events tends to amplify severe and emotionally charged cases: negative and uncertain information captures attention, elicits stronger emotional responses and sustains cycles of heightened vigilance and repeated exposure. This, in turn, reinforces perceptions of threat and distress.

For AI-related content, media reports often rely on partial evidence (such as chat transcripts) while rarely including medical documentation. In our data set, only one case referenced formal clinical or police records.

This creates a distorted but influential picture: one that shapes public perception, clinical concern and regulatory debate.

Beyond ‘AI caused it’

One of our most important findings relates to how causality is framed. In many of the articles we reviewed, AI systems were described as having “contributed to” or even “caused” psychiatric deterioration.

However, the underlying evidence was often limited. Alternative explanations — such as pre-existing mental illness, substance use or psycho-social stressors — were inconsistently reported.

In psychiatry, causality is rarely simple. Mental health crises typically arise from multiple interacting factors. AI may play a role, but it is likely part of a broader ecosystem that includes individual vulnerability and context.

A more useful way to think about this is through interaction effects: how technology interacts with human cognition and emotion. For example, conversational AI may reinforce certain beliefs, provide excessive validation or blur boundaries between reality and simulation.

The problem of over-reliance

Another recurring pattern in media reports is intensive use. Many of the cases we reviewed described prolonged, emotionally significant interactions with chatbots — framed as companionship or even romantic relationships. This raises an issue: over-reliance.

Because these systems are always available, non-judgmental and responsive, they can become a primary source of support. But unlike a trained clinician or even a concerned friend, they cannot recognize when someone is getting worse, pause or redirect harmful interactions. They cannot take steps to ensure a person connects with appropriate care in moments of crisis.

In clinical terms, this could lead to what might be described as “maladaptive coping substitution”: replacing complex human support systems with a simplified, algorithmic interaction.


Lori Schott, second from right, holds up a photo of her daughter Annalee Schott, beside others after the verdict in a landmark social media addiction trial, March 25, 2026, in Los Angeles.
(AP Photo/William Liang)

Lack of reliable data

Despite growing concern, we are still at an early stage of understanding the impact of generative AI chatbots on user mental health.

There is currently no reliable estimate of how often AI-related harms occur, or whether they are increasing. We lack reliable data on how many people use these tools safely versus how many experience problems. And most evidence comes from case reports or media narratives, not systematic clinical studies.

This is not unusual. In many areas of medicine, early warning signals emerge outside formal research (through case reports, legal cases or public discourse) before being systematically studied.

One example is the thalidomide tragedy, when initial reports of birth defects in infants preceded formal epidemiological confirmation and ultimately led to the development of modern pharmacovigilance systems.

AI and mental health may be following a similar trajectory.

Moving forward responsibly

The challenge is not to panic, but to respond thoughtfully.

We need better evidence. This includes systematic monitoring of adverse events, clearer reporting standards and research that distinguishes correlation from causation. Safeguards — such as crisis detection, escalation protocols and transparency about limitations — must be strengthened and evaluated.


Furthermore, clinicians and the public need guidance. Patients are already using these tools. Ignoring this reality risks widening the gap between clinical practice and lived experience.

Finally, we must recognize that generative AI is not just a technological innovation — it is a psychological one. It changes how people think, feel and relate.

Understanding that shift may be one of the most important mental health challenges of the coming decade.
