Chatbots used in mental health screenings aim to reduce the stigma associated with seeking help and to expand access to needed services.

Because they cannot cast personal judgment, chatbots are considered a neutral option for sharing mental health concerns.

In a recent test of these common assumptions, however, a University of Texas at Dallas researcher and his colleagues discovered the opposite: People perceive chatbots as more judgmental than humans.

In a study published in the December issue of the journal MIS Quarterly, researchers found that people were put off by chatbots because the text-based conversations were perceived as lacking social understanding and emotional understanding of what it means to experience mental health issues. Users also felt chatbots were unable to provide validation.

“These three things put together led to the element of feeling judged,” said Dr. Ryan Raimi, assistant professor of information systems in the Naveen Jindal School of Management. “The machine cannot convey those feelings. Artificial intelligence is incapable of rendering profound comprehension.”


Feeling judged by a chatbot has clinical implications because individuals in need of assistance might be less willing to use the service, reveal information or follow recommended advice, Raimi said. The findings also reflect an existing hurdle for those with concerns about seeking help for mental health issues — fear of judgment.

Each year in the U.S., more than 1 in 5 adults and more than 1 in 7 children experience mental illness, with those in rural communities disproportionately affected by lack of access, according to the National Alliance on Mental Illness.

Raimi, who specializes in developing strategies for using artificial intelligence (AI) in health care, said that in parts of the world where access to mental health services is not possible because of armed conflict, cultural factors or a lack of sufficient providers, a pocket therapist could help fill a gap.

Although AI chatbots draw from large datasets to produce human-sounding feedback, they are designed not for treatment, but as an entry point to collect information and provide suggestions.

“Technological advancements are already available to augment therapy,” Raimi said. “But we found that people don’t trust chatbots.”

In the study, researchers showed groups of subjects a video of a scripted text chat on a mental health hotline between “Robin” and a hypothetical client experiencing depression. Half of the participants in four different online or in-person experiments were told Robin was a human therapist, while the rest were told Robin was an AI bot. The online experiments involved over 2,000 subjects who were representative of the U.S. population.

“They see the exact same interaction from beginning to end,” Raimi said. “We wanted to be 100% sure the only variable was the label of ‘Robin’ as human or chatbot. We saw the same results in all four groups: This is not an anomaly.”

The findings surprised the researchers, who believed users would readily follow recommendations.

“How could a machine possibly judge you?” Raimi said. “That was counterintuitive.”

As chatbots become increasingly pervasive in customer service, health screenings and other interactions with humans, Raimi said the research could inform new approaches, such as designing chatbots that convey a sense that the user is being heard and validated and that provide emotional understanding.

Other contributors to the study were Sezgin Ayabakan PhD’14, associate professor of management information systems at Temple University; and Dr. Alan R. Dennis, distinguished professor of information systems, and Dr. Antino Kim, associate professor of information systems, both at Indiana University.