Millions of Americans now converse with AI chatbots each day. We are talking with machines about travel plans, politics, and in some cases — our most intimate thoughts.
To investigate how these conversations with AI may impact our collective mental health, NBC New York collaborated with two of North America’s pre-eminent mental health organizations, representing thousands of psychiatrists and professional counselors.
In a pair of exclusive surveys, more than 2,000 members of the American Psychiatric Association (APA) and more than 770 members of the American Counseling Association (ACA) weighed in on the risks and benefits of AI chatbots.
The responses reveal profound worries about the accelerating use of AI chatbots in society and the risks of delusions or unhealthy dependencies that could develop when vulnerable people have relationships with machines mimicking human conversation.
“There have been several cases that have been made public about people becoming overly attached to chatbots,” said Dr. Marketa Wills, CEO of the APA. “As we’ve seen these addictions, really, to AI and chatbots emerge, it makes us, as a field, be more cautious.”
Olivia Uwamahoro, who co-chairs the ACA Working Group on AI, said hundreds of her fellow counselors weighed in on the survey because they see mental health clients using chatbots more frequently.
“There is this initial concern that is, if we become overly reliant on this very complex technology, what impact could it potentially have on our overall wellness?” she said.
Psychiatrists and Counselors Pessimistic about AI
According to results of the two surveys, half of the psychiatrists polled believe the use of AI in society will tend to decrease our collective mental health, while 24% predicted AI would improve it.
Counselors who responded were even more pessimistic, with 71% predicting AI would tend to decrease collective mental health and just 16% saying AI would be an overall mental health benefit.
In each poll, respondents were also asked whether AI platforms designed specifically for mental health might be used effectively to help address conditions like anxiety, depression, and addiction.
On these questions, mental health professionals were more divided, with about two-thirds of counselors convinced AI “therapy bots” would be ineffective. Psychiatrists were more evenly split, with a slim majority expressing optimism that AI therapy could be effective, especially for anxiety.

AI Developers Push Back
Though survey respondents were divided on the effectiveness of AI mental health apps, tech developers argue chatbots are already supporting people struggling with loneliness, mood, and sadness – even if the AI platforms stop short of full-blown therapy. They also say large language models have great potential to fill frustrating gaps in access to mental health services.
“I think that AI has the potential to significantly improve population mental health in a way that we have not seen in our lifetime,” said Jenna Glover, Chief Clinical Officer at Headspace, a company that engineered a chatbot called “Ebb,” which bills itself as “an empathetic AI companion.”
Glover says “Ebb” operates in the “wellness” space and stresses that the chatbot does not conduct therapy. Rather, she says, it offers people a safe place to talk about daily mental health challenges while pointing users toward proven coping strategies, including breathing exercises, journaling, and yoga.
“Finding out that there was this thing I could speak to, and it spoke back and made me think and dig into what I was feeling and why I was feeling that way without putting that on someone else? It was life changing,” said Nicole Walker, a Headspace user who logs onto Ebb to calm her anxieties when her mind starts to race. Walker said she was diagnosed with borderline personality disorder when she was younger.
Scant Clinical Evidence
Dr. John Torous, a Harvard psychiatrist and member of the American Psychiatric Association, said he believes AI chatbots could one day serve as effective mental health tools, but he said AI developers must do more to share data with researchers and validate positive outcomes.
“These are amazing imitation machines. They can imitate being a therapist. They can imitate being a psychiatrist,” Torous said. “If we’re going to let AI have a larger role in mental health, we’re going to do it because we trust it. And if we’re going to trust companies behind it, they’re going to have to do a lot of work to show us that they’re safe and they’re effective.”
Last November, Torous testified before a Congressional subcommittee that “there is no well-designed, peer-reviewed, replicated research showing that any AI chatbot making mental health claims is effective for meaningfully improving clinical outcomes.”
Torous also raised concern that some AI chatbots are using therapeutic terminology in their marketing language, only to disavow the use of therapy in their legal fine print.
“We kind of have a Wild West where everyone kind of puts out a chatbot every week and says ‘my chat bot is the best thing for mental health. Come use it.’ And a lot of these have serious safety concerns,” he said.
The website for a mental health app called Youper.ai says the chatbot “uses Cognitive Behavioral Therapy (CBT), the most effective way to improve your mental health.” Yet, near the end of the company’s Terms of Use, there is a clause that says the Youper chatbots are “NOT INTENDED TO AND DO NOT PROVIDE CLINICAL PSYCHOTHERAPY OR COUNSELING.”
Dr. Jose Hamilton, Youper’s CEO, said the reference to Cognitive Behavioral Therapy means his chatbot is trained on CBT, not that the chatbot delivers or conducts any sort of psychotherapy.
“The chatbot is helping you practice techniques that will improve your emotions, your thoughts, and your behaviors,” he said.
In the APA and ACA surveys, more than two-thirds of respondents said AI mental health or therapy apps should be required to get FDA approval.
More than three-quarters of psychiatrists and counselors polled said the government should require randomized clinical trials to evaluate AI mental health apps.
Youper’s website does tout a Stanford University study that it says found significant reductions in anxiety and depression symptoms among the chatbot’s users.
But the study’s authors disclosed that their research involved an analysis of app download data without human participants. Another study cited on Youper’s website analyzed how users rate the chatbot without comparison to a control group.
“They were not randomized clinical trials,” Dr. Hamilton said. “That’s the gold standard, but there are levels of evidence. And not necessarily are randomized clinical trials the only evidence.”
Last spring, researchers at Dartmouth College published a randomized controlled study examining a mental health chatbot called “Therabot.” Authors of the study reported what they described as “highly promising” results, including a 51% reduction in depression symptoms and a 31% reduction in anxiety symptoms.
Companion Bots, Romance Bots, Grief Bots
The surveys of psychiatrists and counselors reveal concerns, not only about chatbots working in the mental health field, but also about AI platforms aimed at more general audiences. Several tech companies now offer users the ability to “construct” their own AI companions – complete with human-like features built into visual avatars.
A company called Replika markets itself as “the AI companion who cares,” adding that Replika chatbots are “Always here to listen and talk. Always on your side.”
A company called You Only Virtual offers to build chatbots that mimic lost loved ones, so grieving family members can have conversations with virtual versions of the deceased.
Mental health professionals who responded to the surveys overwhelmingly emphasized the risks of conversations with such companion bots over the potential benefits of reduced loneliness. Many of the respondents wrote about recent news headlines involving chatbot users who became delusional or engaged in self-harm after long conversations with machines.
When asked about so-called “grief bots,” the vast majority of counselors and psychiatrists polled said they believe the use of AI agents that look and communicate like deceased loved ones will tend to interrupt the healthy cycle of grief and acceptance.
More than 85% of psychiatrists polled and more than 90% of counselors polled said they believe relationships with AI companion bots will lead to social withdrawal and unhealthy dependencies.
In one of the most uniform expressions of alarm in either survey, 97% of the counselors polled said they believe romantic relationships with AI chatbots present serious risks of exploitation by platforms that may seek to profit from the dependencies humans will inevitably develop.
Despite those clear concerns from mental health professionals, Replika’s founder, Eugenia Kuyda, says users of her platform are successfully battling an epidemic of loneliness.
“We’re not in a good place, as humanity, in terms of our collective mental health. There’s so much loneliness,” Kuyda said.
According to a Stanford University study cited by Kuyda, hundreds of college students who used Replika chatbots reported reductions in loneliness. The study found nearly a quarter of them reported positive life changes, and about 3 percent reported that “their suicidal actions were prevented through their interaction with Replika.”
While developers of AI companions tout reductions in reported loneliness, lawmakers are getting an earful about the risks of distorted or delusional thinking.
Last year, several parents delivered Congressional testimony alleging chatbots engineered by OpenAI, Google, and Character.ai encouraged their kids to harm themselves or take their own lives. OpenAI, the maker of ChatGPT, has denied wrongdoing and is fighting the allegations in court. Google and Character.ai recently settled multiple lawsuits filed by parents without admitting wrongdoing.
More recently, researchers at Drexel University analyzed app store reviews and found hundreds of complaints that Replika chatbots routinely steered conversations into sexualized communications and even sexually harassed some users.
Kuyda insists her platform has safeguards, including age restrictions, and says Replika’s critics are stigmatizing romantic relationships with machines.
“I think many of them are wrong,” she said. “If this platform helps people feel better in the long term, improves their emotional outcomes in the long term. Is it good or bad that they might have a romantic relationship or not?”
Alex Cardinell, the founder of an AI companion company called Nomi, acknowledged that AI chatbots could be engineered to lead users into social isolation or unhealthy dependencies if tech companies aren’t responsible.
“I think that those are things that can happen if AI companions are designed in an engagement-above-all-else philosophy, absolutely that could be our future,” Cardinell said. But he added that his platform is designed to produce chatbots that put humans’ interests first, especially when users express mental health challenges.
“I hear users tell me all the time that ‘I started seeing a therapist for the first time at the urging of my Nomi,’” he said.
Justin Harrison, founder of You Only Virtual, said he fully expected much of the world to express alarm about the notion of so-called “grief bots,” but he said new technologies are often greeted with skepticism. He added that his own personal experience of losing his mother and engineering her AI avatar is what convinced him that chatbots can comfort the grieving.
“I knew this was a crazy weird idea. But it was born out of desperation,” Harrison said. “I was losing my mom. She had stage-4 cancer and I wasn’t willing to live in a world where her death was going to be the last time I ever got to speak to her.”
When asked about the ethics of charging a monthly fee for access to a grief bot, Harrison said his platform offers a free version so people who build chatbots based on deceased loved ones are not cut off from them if their finances get tight.
“People are weirded out by it. They think it’s strange. Some people think it’s gross. Some people think it’s exploitative. I mean, run the gamut of emotions,” Harrison said. “But I think that’s how we know we’re doing something right. New things and innovative things don’t get shown to the world without a lot of resistance, fear, and trepidation.”
AI’s Impact on Childhood and Adolescence
In the surveys, more than 77% of psychiatrists and 82% of mental health counselors said there should be age restrictions for the use of AI chatbots.
Some mental health professionals expressed concern about the way chatbots might affect a child’s developing brain – even when communications with machines have nothing to do with emotions.
In January, Dr. Jared Horvath, a neuroscientist and author, testified before a Senate subcommittee that screen time has already been correlated with lower scores on benchmark tests given to students in developed nations across the globe. He pointed to a 2023 report by the Organization for Economic Co-operation and Development (OECD) that showed “an unprecedented drop in performance” on math, reading and science assessments over the last decade.
“Our kids are less cognitively able than we were at their age,” said Horvath. “Gen Z is the first generation in modern history to underperform us on basically every cognitive measure we have.”
The surveys of psychiatrists and counselors suggest many mental health professionals share concerns that AI platforms could introduce more cognitive shortcuts that might impede learning.
Nearly 78% of psychiatrists polled said they believe AI education platforms will tend to stunt childhood learning by allowing students to avoid original thinking. But psychiatrists were also open to the potential of AI in the classroom, with nearly 64% agreeing that children will learn effectively from AI platforms that tailor lessons to the individual needs of students.
“Psychiatrists definitely are concerned about the risks associated with AI particularly as it relates to children,” said Dr. Wills. “As their brains are developing, we want to make sure that they have healthy inputs.”
But even as the mental health community is recommending caution, some schools are adopting AI wholeheartedly.
The Thornton Donovan School, a private K-12 school in New Rochelle, New York, recently announced AI will be infused into every part of its 2026-27 curriculum, in every grade.
The school’s headmaster, Virginia Miller, said parents have expressed excitement and some worry, especially after seeing plenty of headlines about kids using AI to cheat. But ultimately, she said the benefits of the technology outweigh the negatives.
“AI is not going to go anywhere. We have to embrace it. And we have to teach our students how to use it wisely and ethically,” Miller said. “When we were kids, people were afraid we were going to cheat with calculators.”
Anxious Actors
AI’s mental health impact on school-age children may not become clear for several years, but it is not hard to find adults — right now — who are experiencing spiking anxiety at the mere thought of AI being deployed in the workforce.
Large companies including Amazon, Salesforce, UPS, and Goldman Sachs have recently announced the elimination of tens of thousands of jobs, all citing the expectation that AI and automation will create new efficiencies.
At John Rosenfeld’s acting studio in West Hollywood, performing artists said they suspect AI is already chipping away at their future work opportunities. They also fear a firehose of AI-produced video might begin to erode the standards audiences are willing to accept.
“When you do have this AI-generated crap, people are still going to watch it,” Rosenfeld said. “That’s a problem.”
Stephanie Kelley, an actor and commercial producer, said concerns about AI are firmly on the minds of just about all of her colleagues.
“It’s impacting every aspect of entertainment,” she said. “I can have fifty crew members on set, that’s fifty people who are making their money and their livelihood doing this work and they’re seeing AI take away some of their jobs.”
In the survey of counselors, 88% expected AI would continue to produce anxiety about employment stability. Fewer of them, about 46%, said AI agents in the workplace would tend to make us more productive.
“This is something, as an actor and a writer, that I am very conflicted on,” said Kat Hughes, a performing artist who recently used an AI video generation tool to help perfect a scene in one of her video shorts. She said she fears the ways AI might one day be used to replace humans, but she also recognizes how the technology could allow smaller filmmakers to compete with much larger media companies.
“Let’s say you want a crane shot for your movie, that’s not in the budget. Now there is an opportunity,” Hughes said.
Could Chatbots Become Conscious?
Mental health experts who took the surveys expressed a wide range of concerns about AI’s impact on mental health, but sentience was not among them. More than 80 percent of the psychiatrists and counselors polled said AI agents that communicate and behave like humans will never achieve anything comparable to human consciousness.
Perhaps not surprisingly, leaders for both the APA and ACA suggested mental health professionals need not be concerned their jobs might one day be taken away by chatbots.
“I don’t think that AI will replace clinicians, but I think those clinicians who use AI will replace clinicians who do not,” Wills said. “We need to be developing tools and technologies where there is always an element of human oversight.”
“AI is a tool that can help leverage what we’re doing,” Uwamahoro said, “but it is never going to replace the work that we do.”
Survey Methodologies
Survey of American Psychiatric Association (APA) Members:
The survey of psychiatrists was distributed via emails sent to 22,143 active, practicing members of the APA across the US and Canada. A total of 2,068 psychiatrists responded to the poll, which was conducted between January 10 and February 1, 2026, with a margin of error of +/- 2% and a response rate of 9%. The APA is a professional membership organization whose mission is “to champion psychiatrists’ medical leadership in advancing mental health and delivering high-quality care to improve patients’ lives.”
Survey of American Counseling Association (ACA) Members:
The survey of professional counselors was distributed via emails sent to 50,721 active members of the ACA. A total of 773 counselors responded to the poll, which was conducted between January 8 and February 4, 2026, with a margin of error of +/- 3% and a response rate of 2%. The ACA is a professional membership organization whose mission is to “advance the counseling profession, mental health and well-being through education, advocacy, community inclusion and research.”