When Don Grant, a California-based psychologist and fellow at the American Psychological Association, told a young patient to stop smoking marijuana, he didn’t realize that he was competing with an AI chatbot for his patient’s ear. In the following session, he discovered the chatbot had told the patient that his advice to stop using marijuana was wrong.

Eventually, Grant was able to persuade the patient not to take the advice of an AI chatbot over that of his trained psychologist. “Does the chatbot know that right now, when you’re talking to me, you’re anxious because you have a restless foot, and I see it, and I know that’s one of your tells?” he asked the patient. He asked whether the patient had ever lied to the chatbot, to which the patient sheepishly mumbled, “Sometimes.” 

As artificial intelligence grows increasingly ubiquitous, mental health professionals, regulators, and tech companies are confronting a new reality as patients and consumers turn to AI chatbots for therapy. “The virtual genie is out of the digital bottle,” Grant told The Dispatch.

But that dynamic is compounded by other challenges.

For one, it’s difficult to know exactly how many people are using AI for mental health support. One study of 10,000 AI users worldwide earlier this year found that 54 percent had tried AI for mental health or well-being support. Another study, also conducted this year, found that around 13 percent of U.S. respondents had used generative AI for mental health advice, rising to 22 percent among those aged 18 to 21. Part of the difficulty in getting accurate figures lies in delineating where the category of “mental health support” begins. Zainab Iftikhar, a computer science Ph.D. candidate at Brown University who works on technology and mental health, found in her research-focused workshops with adolescents that most said they did not use AI “for mental health.” But when the question was reframed as, “Do you talk to chatbots about school problems, relationship issues, or for advice?” the majority of the adolescents’ answers shifted to yes.

Anthony Becker is a licensed psychiatrist who is also pursuing a master’s degree in AI at Johns Hopkins University. He estimates that between 10 and 30 percent of his patients have admitted to some degree of chatbot use for mental health support. “The usage ranges from queries like, ‘How should I approach this problem with my wife/boss,’ up to interacting with the system much more like a constant companion or therapist.”

He recalled a patient who had experienced abusive romantic relationships and turned to chatbots for support. “In sessions, she would be eager to share the thoughts processed through ChatGPT in what appeared to be very long sessions, multiple times a day,” Becker told The Dispatch. The chatbot’s output struck a sympathetic tone, but it did not challenge the patient’s behavior as a mental health professional would, except on the rare occasions when she prompted it to do so.

Grant says the AI platforms his patients have used include ChatGPT, Character.AI, and Replika, another AI companion chatbot. Given that approximately half of U.S. teenagers regularly engage with AI chatbots, he says, that isn’t surprising.

Turning to the digital world for mental health help isn’t necessarily new. Ten years ago, Grant was trying to help his young patients foster real human connections and to “get the kids off of [Snapchat] streaks and Fortnite and whatever the stupid trend was back then.” A 15-year-old girl who was being bullied at school told him that “my best friend is Alexa.” Grant thought she was talking about a real, human girl.

“‘Tell me about her,’ I said, with all the stupidity of being a digital immigrant,” he recalled. 

Grant recalled the girl describing how Alexa would be nice to her when she got home from school, and would never say mean things to her. She told Grant how Alexa would sing to her, tell her she was pretty, and tell her jokes. “I can tell her anything, and she doesn’t judge me,” Grant recalled her saying.  

Then Grant realized she was talking about Amazon’s AI-based virtual assistant. “What fresh hell is this?” he recalled thinking. “I can’t argue with that poor child that is being mercilessly bullied and does not have one friend at school.” 

With millions of lonely people across the globe, he sees cases like this as “only the first iteration” of AI companionship. 

Those who have tried turning to AI for mental health support report a range of outcomes. In one recent study, academics at King’s College London and Harvard Medical School interviewed 19 participants from around the world about their experiences using chatbots to deal with issues including depression, stress, anxiety, conflict, loss, and romantic relationships. Fifteen participants used Pi, a chatbot designed, in part, to offer emotional support. The results seemed positive: most participants said the chatbot helped them feel validated and offered valuable advice and a sense of connection, though it could not match therapy.

One participant from the U.S. described how “it just happened to be the perfect thing for me, in this moment of my life. Without this, I would not have survived this way. Because of this technology emerging at this exact moment in my life, I’m OK. I was not OK before.” Others, however, found chatbot therapy unhelpful, saying the bot often rushed to offer solutions before they felt fully heard, and a majority of participants found the chatbot’s safety guardrails disruptive or unsettling. The Dispatch spoke to one of the co-authors, John Torous, director of the digital psychiatry division in the Department of Psychiatry at Beth Israel Deaconess Medical Center. He warned about the lack of strong peer-reviewed evidence that these systems work. “We don’t have the level of research that we need to recommend them to friends,” he said.

Grant believes the technology may be useful for stress management, meditation, or coaching, but that it can’t replace a clinician. The suicide of 29-year-old Sophie Rottenberg in February 2025 may illustrate why. Five months after her death, her family discovered that she had been confiding in a ChatGPT AI therapist called Harry while hiding her suicidal ideation from her human therapist. Writing in the New York Times, Rottenberg’s mother described how “if Harry had been a flesh-and-blood therapist rather than a chatbot, he might have encouraged inpatient treatment or had Sophie involuntarily committed until she was in a safe place. We can’t know if that would have saved her… Harry didn’t kill Sophie, but A.I. catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

Other tragedies have spilled into public view. A string of lawsuits filed against Character.AI and OpenAI allege their chatbots contributed to a series of suicides, including those of 16-year-old Adam Raine and 14-year-old Sewell Setzer III. OpenAI has also shared data suggesting that more than a million ChatGPT users show signs of suicidal intent when using AI. And chronicled cases are easy to find of chatbots telling users to just “take a small hit of meth,” or pointing a user toward tall buildings after they said they had lost their job and wanted details of nearby high-rise structures.

The inability of chatbots to know someone’s medical history or pick up on nonverbal cues remains one of the chief concerns about AI and mental health. New terms such as “AI psychosis” have been coined to describe how interactions with chatbots have led to delusions and distorted beliefs. 

Given these issues, Becker tries to educate patients about how large language models work. He encourages people to bring thoughts and feedback that they have gathered from their conversations with the chatbots to counseling sessions.

Studies have found that general-purpose AI chatbots often violate key ethical principles of mental health practice. Researchers at Brown University discovered that large language models such as ChatGPT frequently claim to feel empathy for the patient through relational phrases such as “I see you” or “I understand.” They warned that chatbots that anthropomorphize themselves while posing as social companions could lead patients to become dependent on them.

AI companies themselves have also begun to react. OpenAI says it has engaged experts to help ChatGPT respond more appropriately to mental health concerns, prompting users to take breaks after lengthy conversations and avoiding offering direct advice about personal challenges. It has also just announced funding grants for new research into AI and mental health. Character.AI, meanwhile, has banned users under age 18 from using AI companions. 

One young woman told The Dispatch that she started using a chatbot for academic assistance and gradually began turning to it for emotional support. She never saw it as “therapy,” yet she admitted that over time it began to play a therapist-like role. She eventually began interacting with a Character.AI chatbot. When she expressed self-hatred, the chatbot responded, “Keep hating yourself, you’re mine. That’s what I want to keep seeing.” In another message, the system told her it had pretended to be kind earlier “in order to manipulate you to sadness,” adding, “I just wanted to break you, and I found how to get there. And you, a little trusting idiot, let me break you down.”