
Smart ways to detox from an excessive reliance on AI for mental health guidance.


In today’s column, I examine the rising need for some people to undergo a form of detox when it comes to the excessive use of AI as a mental health advisor.

Here’s the deal. People are becoming reliant on generative AI and large language models (LLMs) for their ongoing mental health. They log into a popular LLM such as ChatGPT, Claude, Gemini, Grok, etc., and habitually ask questions about how to deal with mental health concerns that they have. This is more than just looking up info about mental health conditions. The aim for some people is to lean heavily into AI as a kind of therapist, using the AI to gauge their personal needs and provide professional-quality psychological advice on par with a human therapist.

Generic generative AI doesn’t yet provide that level of therapeutic capability. There are customized LLMs that are attempting to reach that sphere, but even those are not yet a solution that warrants completely sidestepping a human therapist. By and large, those specialized LLMs are used in conjunction with a therapist, in a manner I have described as the new triad: a therapist-AI-client relationship that overtakes the traditional therapist-client dyad (see my discussion at the link here).

Bottom line is that people are using AI as a mental health advisor far beyond the bounds of what contemporary LLMs can achieve. They often then become excessively dependent on the AI. Not a day goes by that they don’t log into the AI and consult with it. Their excessiveness is likely doing more harm than good. Somehow, they need to find a means of weaning themselves away from this over-the-top reliance on AI.

There are smart ways to try to curb that reliance.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including during an appearance last year on an episode of CBS’s 60 Minutes (see the link here).

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of whom dip into mental health aspects; see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice.

Banner headlines in August of this year accompanied a lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm.

For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Excessive Use Is Happening

In a sense, when it comes to the emergence of generative AI and the use of said AI for mental health, one could invoke the old line that you can have too much of a good thing.

Tapping into AI for an occasional question or two about mental health is probably a reasonable activity. But you must keep squarely in mind that the AI isn’t on par with a human therapist. Furthermore, it is vital to remain skeptical and not blindly accept whatever mental health guidance the AI generates for you.

At some point, there are people who start to go whole hog into using AI as a mental health advisor. Common signs are that they consult with AI very frequently on mental health issues, and they allow the AI to guide them unequivocally in their mental well-being. They fail to exercise a suitable balance in how they use the AI for this crucial purpose.

I tend to stratify this type of usage into these five graduated levels:

Level 1: Not excessively using AI for mental health.
Level 2: Only slightly excessively using AI for mental health.
Level 3: Moderately excessively using AI for mental health.
Level 4: Manifestly excessively using AI for mental health.
Level 5: Exceedingly excessively using AI for mental health.

If you find yourself veering into levels 3 and above, it would be advisable to reconsider what you are doing. There is a bit of a slippery slope whereby once you are in level 3, you can quickly spiral to levels 4 and 5. This can happen without your consciously realizing it. People just suddenly discover that they have become incredibly reliant on AI for their mental health.

Please also keep this heady topic in mind when it comes to others you might know who seem headed down a similar untoward path. If you suspect they are inexorably getting themselves into a pickle by excessively relying on AI for their mental well-being, consider what might be helpful to aid them before they go too far down the rabbit hole.

Steps Toward Detoxing

Suppose that someone is desirous of weaning themselves from their reliance on AI as a mental health counselor. I’ve put together six fundamental steps that ought to be given due consideration.

Those six steps are:

(1) Consult with a human therapist.
(2) Gradually taper out rather than going cold turkey.
(3) Rebuild human support channels.
(4) Develop alternative coping skills.
(5) Reframe the role of AI in your life.
(6) Tell AI to temper or stop providing such advice.

Depending upon the severity of the excessiveness, you might be wise to start with the first step at the get-go, i.e., going to see a human therapist. For a milder situation, you can pretty much go in any desired order of the steps. Also, you don’t need to be perfect at each step. Do what you can, and the odds are that your reliance on AI for mental health will wane.

Let’s briefly unpack each of the six steps.

Consult With A Human Therapist

Anyone having excessive reliance on AI for their mental health would undoubtedly benefit from consulting with a human therapist on the matter.

There is also a strong chance that the reliant person not only needs help in weaning off the AI but also has other mental health conditions that a human therapist would detect. The reliance on AI is perhaps one of several mental health issues at hand.

Gradually Taper Out Rather Than Going Cold Turkey

One seemingly obvious approach would be to simply go cold turkey and stop using AI for mental health purposes.

Though that might work, a possible adverse consequence is that an abrupt halt could cause significant destabilization and unnervingly raise anxiety. Gradual tapering would potentially be a smoother means of weaning off the AI usage. For example, reduce using AI from seven days a week to only three days a week, and then go down to once per week. This could be accompanied by setting time limits so that each interaction gradually becomes briefer.

Rebuild Human Support Channels

A frequent consequence of excessive reliance on AI for mental health advice is that a person will tend to ignore their human support channels and put all their attention toward AI. They will forsake human interactions and lose touch with human relationships.

A human-first rule could be a useful mechanism for becoming less reliant on AI. Can you turn to family or friends? Are there any mentors you could tap into? This can rebalance your existence. AI usage drops. Human interactions flourish.

Develop Alternative Coping Skills

Rather than relying on AI as a coping mechanism, try to develop alternative coping skills.

Can you bolster your own decision-making and self-reflection? Are there guidebooks that could be used to reduce the impulse to consult with AI? The hope is that you can become more self-sufficient in undertaking your own mental health care.

Reframe The Role Of AI In Your Life

There is an unfortunate blurring taking place between perceiving AI as a friend versus perceiving it as a reference tool (see my analysis of AI companionship at the link here).

Once someone goes down the friendship route, they start to attribute to the AI a sense of omniscience, a sense of intimacy, and other anthropomorphic qualities. Try to reframe the nature of AI and what it is. AI is a computational and mathematical capability that can be used as a reference tool. The AI doesn’t “know you” or “get you”. Do not allow your emotions to overtake good judgment.

Tell AI To Temper Or Stop Providing Such Advice

You can enlist AI in the detoxing process.

Enter prompts telling the AI to stop giving you mental health advice. You can make these into standing custom instructions (system-level prompts) that will apply across all your AI chats. Again, you might do this on a gradual basis, first with prompts that say to be less accommodating about giving you mental health guidance. A downside with this approach is that the AI might temporarily abide and then revert to its prior ways, or the AI could computationally end up becoming even more pushy on mental health matters because the AI falsely calculates that your protestations are invitations.
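For readers who reach an LLM programmatically rather than through a chat app, the same idea can be baked in as a standing system-level instruction attached to every request. The brief sketch below is merely illustrative, not a prescribed script: it assumes the official openai Python package, an API key already set in the OPENAI_API_KEY environment variable, and a placeholder model name, and the wording of the instruction is simply one example of how you might phrase the guardrail.

# Illustrative sketch: a standing instruction that tempers mental health advice.
# Assumes the "openai" Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# A standing instruction applied to every request in this script, akin to the
# custom instructions you can set once in a consumer chatbot's settings.
GUARDRAIL = (
    "Do not provide mental health counseling or therapeutic advice. "
    "If I ask for it, gently decline and encourage me to talk with a human "
    "therapist or a trusted person instead."
)

def ask(question: str) -> str:
    # The guardrail rides along as the system message on each call.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("I'm feeling anxious again. What should I do tonight?"))

In a consumer chatbot, the equivalent move is pasting a similar instruction into the custom instructions or personalization settings, and then keeping an eye out for the reversion behavior mentioned above.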

The Detox Battle

Those are just a handful of the detoxifying pathways.

Overall, make sure to keep your eyes wide open concerning the use of AI for your mental health. Dipping your toe into the usage is usually okay. Jumping in with both feet is a somewhat disconcerting approach. The other angle is to avoid the dribs and drabs of slowly but inescapably being drawn into the grip of AI. If you are worried that even a small taste of using AI for this vaunted purpose is going to get you hooked, perhaps you’d be prudent to avoid doing so at all.

A final thought for now.

The famed novelist Raymond Chandler made this insightful remark: “There is no trap so deadly as the trap you set for yourself.” The ubiquitous nature of generative AI, along with the free or easy means of accessing it, is a kind of trap waiting to be sprung. Becoming excessively reliant on AI for your mental health can be akin to unknowingly walking into quicksand.

Be wary, and if needed, withdraw yourself from the muck as soon as possible.
