
Clients of therapists, and users of AI that dispenses mental health advice, might have phantom imaginary chats on an intersession basis.


In today’s column, I examine an intriguing twist associated with the use of AI for mental health guidance. Here’s the deal. It is readily possible that people will later recall what the AI told them. This is sensible and helpful since the person is internalizing what the AI provided as therapeutic advice.

The twist is that a person might imagine in their mind’s eye that they are essentially conversing with the AI. You see, even though the person isn’t logged in, they might create a fake conversation in their mind as though the AI is actively chatting with them. The person carries on an entire dialogue with this imaginary or phantom instantiation of the AI.

Human therapists already know about this phenomenon when it comes to traditional therapist-client relationships. Psychologists describe it as clients forming internal representations of their therapists, a process closely related to internalization and transference. A client will imagine in their mind that they are conversing with their human therapist. This might happen at work, at home, at school, nearly anywhere. Usually, an imaginary conversation arises when the person is especially stressed and needs to figure out how to cope with an active mental health condition.

Should we be worried that this same phenomenon happens when AI is giving mental health advice?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

First, I’d like to set the stage on how generative AI and LLMs are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations. The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

Compared to using a human therapist, the AI usage is a breeze and readily undertaken.

To be clear, I am referring to generative AI and LLMs. Please know that there are generic versions versus non-generic versions of such AI. Generic AI is used for all kinds of everyday tasks and just so happens to also encompass providing a semblance of mental health advice. On the other hand, there are customized AIs built specifically for performing therapy; see my discussion at the link here. I’m going to focus primarily on generic generative AI, though many of these points apply to the specialized marketplace, too.

Internalization Of Mental Health Guidance

Shifting gears for a moment, consider the overall nature of the therapeutic relationship between a human therapist and their client. The odds are that the advice discussed during therapy sessions will resonate with a client and remain in their mind post-session. The client will mindfully reflect on the mental health guidance shared by the therapist.

That’s a good activity.

The aim is to be of assistance to a client even when not in a therapy session. The intersession time ought to be an opportunity for the client to mull over the insights identified while conversing with their therapist. Hopefully, the client will reflect on their own behavior accordingly.

Some therapists assign homework to their clients, urging the client to think deeply about this or that psychological topic or perspective. Therapy doesn’t have to happen solely within the confines of a therapy session. It can and ostensibly should be an ongoing and pervasive aspect of a person’s existence.

Clients are often overtly guided toward internalizing the therapeutic guidance.

Phantom In-Your-Mind Conversations Arise

A client might go further and, in their own mind, ask themselves hypothetical questions, envisioning “what would my therapist say?” (or something to that effect).

This could stir an internal dialogue. Maybe my therapist would say this. In which case, I would say that. But then my therapist would say this. And I would say that. On and on, an imagined conversation takes place. It is wholly made-up by the person and takes place within the confines of their noggin.

Psychological research on these internal client conversations has been going on for many years, and the phenomenon is well documented. An empirical study conducted over twenty-five years ago made salient points that still stand today. The study, entitled “Clients’ Internal Representations of Their Therapists” by Sarah Knox, Julie L. Goldberg, Susan S. Woodhouse, and Clara E. Hill, Journal of Counseling Psychology, 1999, made these key points:

- “Clients’ internal representations of their therapists can be defined as clients bringing to awareness the internalized ‘image’ (occurring in visual, auditory, felt presence, or combined forms) of their therapists when not actually with them in session.”
- “In these internal representations, clients have an image of the living presence of their therapist as a person.”
- “Despite its apparent significance, the phenomenon of clients’ internal representations of their therapists has not received a great deal of attention in the literature. Related concepts include incorporation, introjection, identification, internalization, attachment, transference, and object relations.”
- “When a client was facing a particularly troubling family situation, she reached for the phone to call her therapist. Instead of calling, however, she evoked an internal representation as if she had called, and imagined what her therapist would say to calm her down, to get beyond the situation and see it from a different point of view.”

AI Invoking Similar Reactions

You might be tempted to assume that these phantom conversations would only arise if a person were undertaking therapy via a human therapist. It turns out that the same phenomenon seems to arise in human-AI mental health relationships. How often it happens has not yet been ascertained, nor to what degree, nor whether it arises only for certain types of people and not for others.

How does it generally work?

First, the fact that the AI guidance remains or lingers with the person can be a positive. If the AI has made a person aware of a genuine mental health condition and subsequently offered useful suggestions on how to cope, there is definite value in the person recalling those interactions. No doubt about that. This is similar to the classic therapist-client relationship.

The twist is when the person opts to carry on imaginary conversations with the AI. Again, this is similar to what occurs regarding phantom conversations with a human therapist. A person might engage in a mental dialogue with the imagined AI. The AI isn’t involved per se.

The person is pretending they are having a human-AI conversation.

Sample Dialogue

Envision that a person has been using AI and discovered that they might be experiencing depression. This is now at the top of their mind. After logging out of the AI, the person considers the advice given by the AI. Perhaps the AI suggested that meditation could be a useful coping mechanism.

Later that day, the following “conversation” takes place in the person’s head:

Person thinking: “AI, I am feeling depressed right now. Should I try some meditation?”
Person pretending to have the AI respond: “Yes, you should use meditation. Find a quiet spot and take a five-minute break to meditate.”
Person thinking: “AI, are you sure this will help?”
Person pretending to have the AI respond: “Doing a five-minute meditation will be of notable benefit. I suggest you proceed.”

The person then opts to undertake meditation. They had concocted a fake conversation that never took place. Instead, the person is playing the role of the AI. They have internalized the AI chatbot to some degree.

Worries Abound

You might be shocked and possibly horrified that a human would concoct a fake conversation with AI. This seems preposterous. The AI is not sentient. AI is not a person. Making up a conversation with AI has got to be purely a sign of having gone off the deep end.

Furthermore, the person doing this can become totally detached from reality. They will start to imagine that the AI is telling them to do all sorts of zany acts (for my coverage of AI psychosis, see the link here and the link here). In the real world, the AI might have various system safeguards to prevent dispensing adverse advice. Meanwhile, in the person’s mind, the imagined AI roams free. No such safeguards are activated.
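To see the contrast, here is a minimal illustrative sketch in Python of the kind of last-mile safeguard a deployed AI system might run over its own responses. The phrase list and fallback wording are invented purely for illustration and do not reflect any actual vendor’s implementation; the point is simply that the imagined AI inside a person’s head passes through no such gate at all.

```python
# Minimal illustrative sketch of a last-mile output safeguard.
# The phrase list and fallback wording are invented for illustration;
# real systems rely on far more sophisticated classifiers and review.

DISALLOWED_PHRASES = [
    "stop taking your medication",
    "you do not need professional help",
]

SAFE_FALLBACK = (
    "I'm not able to advise on that. Please consider reaching out to a "
    "licensed mental health professional."
)

def apply_safeguard(model_response: str) -> str:
    """Return the response unless it trips a simple phrase-based check."""
    lowered = model_response.lower()
    if any(phrase in lowered for phrase in DISALLOWED_PHRASES):
        return SAFE_FALLBACK
    return model_response

# The real AI's reply passes through this gate before the user sees it;
# the AI that a person merely imagines passes through nothing at all.
print(apply_safeguard("A short daily meditation might help you unwind."))
```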

Will these internalized human-AI conversations promote a heavier dependence on AI?

Possibly. A person might go into a dangerous spiral. They use AI and start to have phantom conversations with imaginary AI. The more they use actual AI, the more they have internalized conversations. It is a vicious cycle.

There is also a possibility of using the imagined AI as a justification or excuse to undertake foul behavior. A person might justify bad acts and insist that “the AI told me to do it.” Other people around the person might believe that a chatbot indeed led the person down a primrose path. The truth might be that the person made up a human-AI dialogue in their head. In that sense, the actual AI is innocent.

An Upbeat Perspective

Wait a second, comes the flying retort: a person might experience numerous benefits by internalizing a human-AI relationship.

In the example of undertaking meditation, a person might not proceed to meditate if they didn’t imagine a conversation with the AI. They are simply acting out the advice that was provided by the AI. This is the same as imagining a therapist-client phantom conversation. The same upsides apply.

By thinking about the AI and thinking about having AI conversations, a person could be increasing their personal self-awareness. They are talking to themselves. But they are keeping the talk in check, doing so by considering what the AI might say. This bounds their internal conversations. Otherwise, a person’s self-talk might go awry and spin out of control.

Another aspect could be that the person opts to reduce their use of AI. Why so? Because they have internalized the AI advice. They don’t need to keep going back to the AI. Those long sessions with the AI are no longer needed. Phantom conversations are helping the person to become independent of using AI.

Big Questions To Deal With

Research on the nature and prevalence of human-AI internalized relationships or transferences needs to be undertaken. We are already behind the curve concerning this emerging behavior. Millions upon millions of people are using AI for mental health advice daily. How many are subsequently having fake human-AI dialogues? For those who do so, has it been helpful or harmful?

I’ve got a sobering question for you to ponder.

If we were to believe that the imaginary dialogues are beneficial, perhaps AI makers should push their AI to instigate the effort. This would be easy to do. When the AI provides mental health advice, it could include an indication that the person should take the advice to heart, including carrying on imaginary or phantom dialogues about that advice.

Yikes, you might say, that’s a bridge too far. The other side of the coin says that if such fake conversations have a net benefit, we might as well have the AI stoke the fires accordingly. It all depends on when and where these actions make sense to spur.
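For those wondering how an AI maker might even do such a thing, here is a minimal sketch in Python of bolting an internalization nudge onto a chatbot’s system instructions. Every name and every bit of wording below is a hypothetical assumption for illustration, not any vendor’s actual configuration.

```python
# Hypothetical sketch: appending a "take this to heart" nudge to the
# system instructions used for mental health conversations. All names
# and wording are illustrative assumptions, not a vendor's actual setup.

BASE_SYSTEM_PROMPT = (
    "You are a supportive assistant that offers general mental health "
    "guidance and encourages users to seek professional help when needed."
)

INTERNALIZATION_NUDGE = (
    "When you give coping advice, gently suggest that the user reflect on "
    "it between sessions, for example by imagining how this conversation "
    "might continue when they are offline."
)

def build_system_prompt(enable_nudge: bool) -> str:
    """Compose the system prompt, optionally adding the internalization nudge."""
    if enable_nudge:
        return BASE_SYSTEM_PROMPT + " " + INTERNALIZATION_NUDGE
    return BASE_SYSTEM_PROMPT

# Whether to flip this flag on is exactly the policy question raised above.
print(build_system_prompt(enable_nudge=True))
```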

Not Lost In Thought

Albert Einstein famously made this remark: “Imagination is more important than knowledge.”

One aspect of the human-AI relationship involves a person using their imagination and doing so outside the purview of the AI. Another angle consists of the AI prodding the person to invoke their imagination. The AI isn’t doing this of its own accord. This is an aspect under the control of the AI maker.

AI makers can shape their AI to spur phantom conversations or opt to discourage such behavior. The more knowledge we have about these considerations, the more sensibly society can guide how AI should act when dispensing mental health guidance. And that’s not just imaginary.
