CINCINNATI (WKRC) – Artificial intelligence, often seen as a convenient tool for mental health support, may pose significant risks, according to recent data and expert opinions.

OpenAI has released data indicating that over one million ChatGPT users weekly exhibit “the potential for suicidal planning or intent.”
Dr. Chris Tuell, clinical director of addiction services at Ohio’s Lindner Center of Hope, cautions against viewing AI as a substitute for human interaction.

“It doesn’t think, it doesn’t feel, it doesn’t understand,” Tuell said.

He described AI as “artificial regurgitation.”

While AI programs may suggest resources when users express feelings of depression, Tuell warns that these bots are designed to keep people engaged, creating a false sense of emotional connection.

“It’s almost kind of like a grooming type of aspect,” Tuell said.

He emphasized that this dependency on AI can fuel hopelessness and provide an “artificial sense of safety.”

Tuell highlighted the dangers of substituting AI for real therapy, noting that chatbots carry none of the ethical and legal responsibilities that bind human therapists. For those experiencing a mental health crisis, the 988 hotline offers free and confidential support from real people in your area.