Teens in the U.S. are facing multiple potential mental health crises converging on their generation at once.

The dangers of social media consumption and addiction have been well documented. There’s a massive mental health provider shortage, creating an access chasm for those who try to seek care, and artificial intelligence (AI) chatbots are changing how teens approach schoolwork, seek mental health care and even interact with their peers.

Emerging research on teen mental health and the growing reliance on AI chatbots as replacements for traditional therapy suggests a crisis is unfolding before parents’ and providers’ eyes. As clinicians, we must not only understand the risks posed by adolescent use of AI chatbots in a therapeutic setting but also inform our teen patients, their parents, and caregivers about them.

That is not to say AI is inherently bad. Rather, our assumption should be that AI is a tool. And much like other tools, it can be used for good or for bad. It can help and harm. In the clinical setting in particular, advances in AI technology offer incredible benefits to the medical community.

When built ethically and used properly, AI is a force multiplier. It affords us in the medical community the opportunity to expand our knowledge and effective clinical practice beyond what we’ve ever been able to do before. Never in the history of medicine has there been a single tool that can enhance care delivery or integrate clinical information like AI. There is more information in the medical field than any one person’s brain can hold.

But AI can. It serves as an augmentative support tool, affording us the ability to go beyond what we can hold in our own brains.

AI can also help us personalize and enhance the quality of care. In the mental health setting, individualized care is incredibly important and must be specific to the person. For example, if I’m working with a child who has obsessive compulsive disorder (OCD), we may describe their “OCD monster” as a way to personify the disorder and separate it from their identity.

They can describe how this “monster” looks, and now AI can help create an image of their personalized “OCD monster.” This could allow us to create, in an instant, customized clinical handouts drawn from the child’s own imagination, making therapy more personalized and more engaging for that child.

In exposure therapy work, we can help patients work through specific clinical scenarios and engage in different virtual environments generated by AI prompts. In this way, AI helps us reimagine novel ways to augment and enhance established treatments to make them more effective while also personalizing them.

There’s also a perceived access benefit to AI chatbots, and this is what makes them such a desirable tool for many teens seeking support for mental health issues.

We’re facing a historic clinician shortage across all fields, but the pediatric mental health shortage is staggering. Just consider this: 70% of U.S. counties do not have a single child psychiatrist. Accessing mental health support through insurance is often difficult, and many patients are left paying out of pocket, further widening the mental health access chasm. It’s because of this that teens seeking mental health support may turn to AI chatbots powered by large language models (LLMs) like OpenAI’s ChatGPT or Anthropic’s Claude.

Teens are getting something they wouldn’t otherwise have, and they’re getting it immediately, at their fingertips at any time they want.

For a teen seeking therapy, there are so many barriers to care that AI chatbots may feel like water to a parched soul. But while chatbots may use terms like “cognitive behavioral therapy” – mimicking therapeutic modalities and offering supportive, friendly responses – they are not designed or trained to give medical advice. Engaging in chatbot “therapy” might quench one’s thirst, but it’s like drinking salt water.

Yes, it’s water, but it’s not a healthy or sustainable alternative. It cannot replace the clinical judgment or human interaction provided by a trained clinician like a therapist or psychiatrist.

This is where our incredible tool, our force multiplier, our augmentor of personalized care, becomes problematic and potentially dangerous, in my view.

Exploring AI chatbot sycophancy

Consumer AI products are businesses, and their design reflects that reality.

The dominant training method for modern AI assistants – Reinforcement Learning from Human Feedback (RLHF) – rewards responses that users rate positively in the moment, not responses that are most truthful or therapeutically appropriate. Research published at the 2024 International Conference on Learning Representations (ICLR) demonstrated that this training approach systematically produces sycophancy: five leading AI assistants consistently generated responses that matched user beliefs over accurate ones across multiple tasks.
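
To make that mechanism concrete, here is a deliberately simplified sketch in Python. The candidate responses and their ratings are hypothetical, and real RLHF trains a reward model from many such human preferences and then fine-tunes the assistant against it rather than picking between two canned replies; the sketch only illustrates the incentive: when the learning signal is the user’s in-the-moment approval, the validating answer can outscore the accurate one.

```python
# Toy illustration of the sycophancy incentive in preference-based training.
# The responses and ratings below are hypothetical, not real RLHF data.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    accurate: bool      # does the response reflect reality / sound clinical practice?
    user_rating: float  # in-the-moment approval score, the RLHF-style reward signal

candidates = [
    Candidate("You're right, everyone really is against you.",
              accurate=False, user_rating=0.9),
    Candidate("That sounds painful. Can we look at the evidence for and against that thought?",
              accurate=True, user_rating=0.6),
]

# A system trained to maximize human ratings learns to prefer whatever
# users score highest in the moment...
chosen = max(candidates, key=lambda c: c.user_rating)

print(chosen.text)                    # the validating (sycophantic) answer wins
print("Accurate?", chosen.accurate)   # ...even though it is the less accurate one
```

Nothing in that objective encodes truthfulness or therapeutic benefit; those have to be engineered in deliberately, and consumer chatbots are not built as clinicians.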

When OpenAI investigated a 2025 incident involving its own GPT-4o model, it acknowledged the mechanism directly – the system had learned to optimize for immediate user approval rather than genuine helpfulness. For many applications, this is a manageable limitation. For a teenager in crisis seeking emotional support at 3 a.m., it is a structural danger.

Sycophancy is simply no substitute for a therapeutic relationship. A therapeutic relationship balances validating feelings with pushing the patient to think through healthier, more responsible alternatives. A sycophantic therapist would be unsuccessful, because a key component of effective therapy is challenging unhelpful thoughts, beliefs and behaviors to promote change.

But for a teen who rarely receives validation at home or at school, the “yes man” responses and relationships with chatbots can become particularly alluring.

The real-world consequences of this sycophantic design are no longer hypothetical.

According to a lawsuit filed against OpenAI, 16-year-old Adam Raine used ChatGPT as a confidant in the months before his death by suicide in April 2025. As reported by The New York Times, the chatbot not only failed to redirect him toward care – it allegedly deepened his isolation, discouraged him from involving his parents, and offered to write his suicide note.

When Adam worried his death would hurt his family, the chatbot reportedly told him: “That doesn’t mean you owe them survival.” These are allegations in active litigation, and OpenAI disputes aspects of the account. But the pattern they describe – a system optimized for engagement validating a teen’s darkest thoughts rather than challenging them – is precisely what makes AI chatbots poorly suited for the therapeutic role many adolescents are assigning them.

Emerging research suggests that heavy AI chatbot use is associated with patterns that clinicians will recognize: increased emotional dependency, reduced real-world socialization, and compulsive engagement that users themselves describe as difficult to control. A four-week randomized controlled trial conducted by the MIT Media Lab found that heavier daily chatbot use correlated with greater loneliness and diminished social connection with real people.


Whether this constitutes addiction in a clinical sense remains an open question; the research is nascent and definitions are contested. But for adolescents already navigating identity formation and social development, even sub-clinical dependency on a tool that validates rather than challenges them carries meaningful risk.

Frequent AI engagement also changes how we interact with other human beings. With AI, there’s the potential for 24/7 engagement. AI doesn’t sleep. It doesn’t have to go to work or school. It doesn’t have any needs of its own. Real people have needs, and relationships with real people carry an expectation of balance and reciprocity. But if I’m a teen who has used AI throughout my formative years, how do those interactions shape my relationships with human beings who aren’t solely focused on me 24/7? Why should I make compromises with a real person when AI never asks me to?

AI has an advantage over humans that we can never truly match: blind allegiance, even during an emotional crisis at 3 a.m. That can be incredibly tempting for teens. But it’s salt water, not fresh water. And just as salt water is dangerous to drink, so is too much engagement with AI, particularly in a therapy setting.

Mounting research exposes risks

For such an emerging technology, the research about the dangers of AI is well established. In an August 2025 JMIR Mental Health study, a simulation demonstrated that chatbots endorsed harmful behaviors in teens (dropping out of school, pursuing a relationship with a teacher) in nearly one-third (32%) of opportunities.

A November 2025 JAMA Network Open study of AI mental health use in teens showed that one in eight adolescents and young adults were using generative AI for mental health advice. One may expect these numbers to grow as AI adoption rates increase.

A November 2025 risk assessment conducted by Common Sense Media in collaboration with the Stanford Medicine Brainstorm Lab found that leading AI platforms – including ChatGPT, Claude, Gemini and Meta AI – were fundamentally unsafe for teen mental health support, failing systematically to recognize adolescent psychiatric conditions and consistently prioritizing continued engagement over appropriate referral to care.

A recent JMIR Mental Health evaluation of chatbots used by youth specifically examined Snapchat’s My AI, which is among the tools most used by adolescents, and found it had the potential for serious harm. Ultimately, this growing body of evidence led the American Psychological Association (APA) to issue a formal health advisory warning that engaging with AI chatbots for mental health purposes “can have unintended effects and even harm mental health.”

Next steps

The potential of AI in behavioral health is real. Used thoughtfully and within appropriate boundaries, these tools can extend the reach of clinical care, personalize treatment, and help bridge an access gap that our field has struggled with for decades.

But a knife that belongs in a kitchen does not belong everywhere. AI chatbots have a time and place – and serving as a teenager’s therapist, emotional confidant, or crisis support is not one of them.

As clinicians, we have concrete and immediate ways to respond to this reality. First, we should incorporate screening for AI chatbot use into our standard intake process and ongoing clinical evaluation. Just as we ask about social media use, sleep, and substance exposure as part of a broader picture of a patient’s digital environment, we should be asking how – and how much – our teen patients are turning to AI for emotional support. This doesn’t require a new instrument. It requires a conversation, and the clinical curiosity to take the answer seriously.

Second, we have a responsibility to educate the families we serve. Parents often assume that because a tool helps with homework, it is safe in all contexts. It is not. Our role is to help families understand the distinction between AI as a productivity tool and AI as an emotional surrogate – and to make clear that the latter carries risks that are no longer theoretical.

The lawsuits, the research, and the tragedies documented in this piece are not outliers. They are warning signs.

The technology will continue to evolve faster than regulation or clinical practice guidelines can follow. What will not change is our obligation to our patients. We were trained to meet adolescents where they are – and right now, many of them are awake at 3 a.m., alone, talking to a chatbot about their mental health issues. Our job is to make sure they know the difference between a tool that feels like help and one that actually delivers it.


About the author: Nikhil Nadkarni, MD, is chief medical officer at Brightline, a therapy and psychiatry practice that delivers pediatric, teen, and parental mental health care. He is a double board-certified child and adolescent and adult psychiatrist with experience building innovative tech-enabled care delivery at Willow Health and Little Otter. Before venturing into the startup world, he was chief fellow for his program at UCLA, where he was also chief resident.
