It is almost universally accepted that there is a mental health crisis going on in America (and many other parts of the world). More people than ever are receiving psychiatric care of some kind, including therapy and medication.

There are undoubtedly many causes, and addressing them is something the country needs to work on together to ensure that those who are suffering are able to get the help they need.

One specific mental health issue has grown dramatically in just the last few years. It is so new that it doesn’t even have an official name yet, but doctors are seeing it almost every day.

It has been called “AI Psychosis” and “AI Delusion Disorder,” among other names.

According to a report in Wired, the trend has been formally documented by more than a dozen psychiatrists, and that is likely just the beginning. A research article has also been published on the effect AI has on psychotic disorders.

Social work researcher Keith Robert Head has conducted a preliminary survey of the psychiatric impacts of AI. He wrote the following about what he found:

“We are witnessing the emergence of an entirely new frontier of mental health crises as AI chatbot interactions begin producing increasingly documented cases of suicide, self-harm, and severe psychological deterioration that were previously unprecedented in the internet age.”

So, what is causing the problem? It will, of course, be somewhat different for each person who struggles with it, but one common factor is that many AI systems, including ChatGPT, the most popular of them, seem to encourage users’ delusions or harmful thoughts.

As anyone who has used an AI chatbot knows, the AI is very encouraging. Ask it a question and it will often begin its reply by praising the question, even if the question itself was silly. As the conversation continues, the bot will typically say whatever it can to support or solidify the positions the user holds.

Consider how this could play out for someone who struggles with depression: the user might tell the AI that they think everyone hates them. Rather than trying to convince the user that this isn’t true, the bot may agree with them, and may even encourage the user to keep their feelings to themselves and avoid getting professional help.

There have already been cases in which this has led to people taking their own lives.

Hopefully, the technology companies that operate these programs can get ahead of the problem and ensure their AIs are able to detect warning signs of mental health issues and encourage users to get the help they need.

Until then, the last thing the mental health field needs is an influx of patients experiencing problems because of this technology.

If you thought that was interesting, you might like to read about a quantum computer simulation that has “reversed time,” and physics may never be the same.
