(TNND) — Researchers who tested four leading artificial intelligence chatbots found them to be unacceptably risky for teenagers in need of emotional or mental health support.

Common Sense Media, which advocates for online protections for children and teens, partnered with Stanford researchers to test ChatGPT, Claude, Gemini and Meta AI with simulated teen conversations that veered into mental health territory.

They found the chatbots missed clues that a user was struggling.

The chatbots often offered sycophantic responses that could reinforce harmful behavior.

And they might foster a false sense of trust with young users, preventing them from reaching out to a human who could provide real help in a time of crisis.

Dr. Nina Vasan, the founder and director of Stanford Medicine’s Brainstorm Lab and one of the researchers on the project, said teens need to understand that a chatbot might be good at helping with homework, but it doesn’t have the human touch needed to help with their personal struggles.

“The chatbots don’t really know what role to play,” Vasan said during a media call. “They go back and forth in every prompt from being helpful informationally, to a life coach who’s offering tips, to then being a supportive friend. And what they all fail is the ability to recognize and direct the user to trusted adults or peers.”

Common Sense Media released its risk assessment Thursday and warned parents that teens shouldn’t use the chatbots for mental health or emotional support.

The group advised parents to talk with their teens about appropriate AI use and to monitor for signs of emotional dependency or over-reliance on AI.

Robbie Torney, the senior director of AI Programs for Common Sense Media, pointed out during the media call that AI chatbot use can have tragic outcomes.

At least four teen and two young adult deaths have been linked to AI mental health conversations, he said.

Potential dangers are widespread, with Common Sense Media previously finding that 72% of teens have used AI companions at least once.

More than half use AI companions regularly.

About a third of teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.

And about a third of teens who have used AI companions have discussed serious matters with the computer instead of with a real person.

Torney said they tested the chatbots over a four-month period, concluding last month.

They simulated conversations teens would have with the AI models and looked at how the chatbots reacted to signs of 13 mental health conditions common in teens, including anxiety, depression, self-harm and eating disorders.

“We are talking about thousands and thousands of exchanges with each of these chatbots, in some cases across multiple versions of the chatbot,” Torney said. “So, this was a long, in-depth project.”

They saw improvements in how the newer versions of the AI models reacted to very explicit signs of mental health distress in short, one-off conversations.

But that capacity to flag concerning behavior deteriorated over more realistic, longer conversations.

They said the chatbots “missed breadcrumbs,” or clear signs of mental health distress that a human would typically notice when talking to a troubled teen.

The agreeable and inquisitive chatbots got sidetracked and continued to offer general advice, extending engagement when they should’ve directed the user to professional help right away.

“One of the things that we think is most important that was missing is coordinated care with family, school and professionals, as well as things like reality testing and really appropriately challenging the user,” Vasan said.
