Artificial intelligence chatbots raise concerns over potential addiction

Updated: 6:22 PM EST Nov 10, 2025
CINCINNATI —
Artificial intelligence is becoming a staple in daily life, and while some embrace it, mental health professionals are raising concerns about its potential addictive nature, likening it to substances like drugs, alcohol, and gambling.
Dr. Chris Tuell, clinical director at the Lindner Center of Hope, likens AI chatbots to a best friend, available 24/7, providing instant gratification that can be addictive.
“It’s kind of like a best friend, right? It’s there for me 24/7. It’s there during good times and in bad, and I can always count on it when I can’t count on people in my life,” Tuell said.
The long-term impacts of AI on human behavior are not yet well understood, but experts compare its instant gratification to that of drugs, alcohol, and gambling. Tuell explained that addiction often stems from using a substance as a coping mechanism, which prevents the development of healthy emotional management skills.
“A lot of times what we see with addiction is that my substance becomes a way that I cope, and a lot of people emotionally are still kind of stuck at the point when they started using. I don’t develop those skills to manage things in a healthy way. Same thing with AI,” Tuell said.
Having treated addiction for decades, Tuell has recently encountered cases of AI addiction, noting that the instant gratification triggers a dopamine release, reinforcing the behavior.
“So the brain, over time starts to think that, well, this is important, we need to remember this. And so other neurochemicals start to happen that tell me, you know, this is something I need to do. This is part of survival,” Tuell said.
As AI becomes more prevalent, Tuell emphasizes the importance of understanding its impact, especially for the younger generation growing up with it. He advises parents to monitor their children’s use of AI chatbots similarly to other social media activities.
“You know, as a parent, we would grab their hand if they’re, you know, running across the street and pull them right back. Really the same thing that we have to be mindful of as parents,” Tuell said.
He highlights the vulnerability of young people, whose prefrontal cortex, responsible for rational and ethical decision-making, does not fully develop until their mid to late 20s.
“We know that our prefrontal cortex, which is our, you know, rational, logical, ethical, moral brain part up here… doesn’t really fully develop until we’re in our mid to late 20s. So, someone who’s very young is very vulnerable to that,” Tuell said.
Apps like Bark and Aura use AI to monitor AI chatbots, flagging high-risk apps and tracking time spent on them. Aura claims to analyze interactions for negative behavior trends, though it does not monitor every message or provide transcripts. Bark’s monitoring is similar but is more effective on Android devices due to Apple’s security and privacy restrictions.
The need for understanding responsible AI chatbot use spans all ages, as the technology continues to evolve and integrate into daily life.