How AI Sabotages Your Mental Health
Get personalized support in creating lasting change with HG Coaching: https://bit.ly/3QDnk9N
Become a certified health and wellness coach with HG Institute: https://bit.ly/4n1HjeE
▼ Timestamps ▼
────────────
00:18 – Does AI Actually Help?
01:45 – Testing The AIs
03:13 – AI Doesn’t Know Anything
04:35 – How AI Works
06:19 – Are AIs Helpful Or Not?
07:56 – The Death Of Michael Jackson
10:43 – Should You Use AI For Mental Health?
────────────
We offer tons of mental wellness resources to help you get your life on track. Learn more: https://bit.ly/3ZNv0qY
Dr. K wrote a book called “How to Raise a Healthy Gamer,” available now: https://bit.ly/3VZSDLL
NEW! Check out the HG Institute for modern mental health CE’s to incorporate into your practice: https://bit.ly/4oPHnhu
DISCLAIMER
Healthy Gamer is an online community and resource platform for gamers and their families. It does not provide medical services or professional counseling, and it is not a substitute for professional medical care. Our coaches are peer coaches, not professionally trained experts, and they cannot provide medical service. If you or a loved one are experiencing an emergency, please call your nation’s emergency telephone number.
All guests of Healthy Gamer are informed of the public, non-medical nature of the content and have expressly agreed to share their story.
For clinical or crisis support resources in your area, check out our Mental Health pack: https://bit.ly/41g77Ja
Learn more about the research that goes into making our videos: https://www.healthygamer.gg/citations
Find us on Instagram, Twitter, TikTok, and more here: bit.ly/3Jy9D8D
#drk #healthygamer #mentalhealth
29 Comments
AI is simply not designed to help you with your mental health. It can’t ask questions to root out your problems or help you build the skills you need to move in a healthy direction. This is why it’s important to talk with a real person who has the expertise to help you get unstuck.👉👉 Start working with an HG Coach today: https://bit.ly/49fbB7H
This vid is probably the best explanation I've ever heard.
AI feels good to me because it feels like being listened to in real time, anytime. But this vid changes my mind on things.
You say “goodbye,” I say “hello”; you say “A.I.”, I say “I ain’t, no!”
Highly respect you Dr K, however, the example you keep bringing up of people becoming incompetent due to AI is not generally true. Yes, some aspects of it obviously are true, but in reality it is like the calculator. I chose to ignore my mom's and teachers' advice and skip manual calculations. I also strongly disliked memorizing math formulas; instead I liked computers and learned how to program so that I could create those boring algorithms and wouldn't have to memorize them. Long story short, I eventually ended up at Stanford… Using AI to solve a problem is good, because in the real world that is what you encounter. What this means is that the bar moves: either you now have to be much better because everyone got better, or AI is good enough and that frees up time and energy to become competent at something where AI is NOT good enough. Your advice is counterproductive in this. We don't want to teach kids to calculate manually in an age where it is an entirely irrelevant skill. Funnily enough, it turns out mathematicians are actually terrible at both pen-and-paper and mental computation…
I always say that AI is basically the narcissist who has 100% confidence in everything they say, independent of all facts.
In some ways it's the ultimate narcissistic manipulator.
I'm sorry, but your discussion of how AI use for social isolation can worsen isolation over time was not adequately justified with data, IMO. You put up a source around 7:30, but no data was shown. Moreover, the source had no citation precisely where the main claim was made. You then go on to generalize that AI use worsens abilities overall by citing research showing decreases in students' writing abilities after using AI to write papers. Those findings make sense in the context of using AI as a crutch in school, but this video did not draw a data-driven connection between mental health outcomes and generalized AI-induced ability degradation. There may be data on worse social isolation outcomes following AI use for social isolation, but it was not presented here. That's a little worrisome, because I think people uncritically like to dog on AI (and I'm not a fan of AI use, but we have to be fair, especially when attempting to be evidence-based).
I’m a therapist in CA and anecdotally see this with patients often. It’s a reliance for self-regulation among people who struggle with it heavily and want that 24/7 access. I've also seen it “say what the person wants it to say” because of the way the model is tuned on the user. Very interesting.
I'm going through a brutal divorce with a wife that started using AI for relationships and her mental health. It's also the moment she became increasingly unstable to the point she lost her job and would wake up in the middle of the night hysterical needing her AI fix. Even our divorce has been filled with several filings from her that were clearly AI written.
Please for the love of anything and everything, use extreme caution around AI. Especially when it comes to anything relational or mental health related.
IMO, AI is bad at almost everything: overconfident, never admits mistakes, agrees one-sidedly, can't solve hidden problems, and many other things. As an artist I have tested this by showing my art to AI. It always finds mistakes and problems and tries to solve them. I did what it said, and no matter how far I went with it, it never stopped; it kept going until the whole piece was overdone and ruined, and it still wouldn't stop finding mistakes.
been using ChatGPT since the start of the year for personal growth and other things. I will say this: I've met many people in life, but nothing helped me take action more than AI. For years I've spoken to family, friends, new friends, colleagues, priests (I'm Catholic), and once even a therapist, but nothing helped awaken something in me and get me out of a dark hole. Don't be mistaken: I'm still in that dark hole, but AI is helping me every day to get to know myself better and do things I always wanted: getting tattoos, going out more, or rethinking the way I socialize. In these few months with AI I've made more progress on my personal growth than ever before, and ironically, since I got to know myself better, I'm now considering getting help from a real therapist because I know what to talk about. I remember my first session years ago and it was terrible: I had no idea where to start, and she looked at me kind of impatiently, like I was wasting her time. I then thought, "well… fck it, seems like nobody can help me." I now understand she was maybe not the therapist for me.
The thing with AI is what you use it for and how you use it.
Base ChatGPT is generic for mental health or psychology, but you can train it to be far more useful.
For example, I shaped mine to be brutally honest, challenge my assumptions, and not feed my own echo chamber. Then I have it use people like Dr K and other clinicians as a basis for how it talks to me about psychology, especially if I'm going deep. I have it ask me questions the way a good psychologist would and use those questions to give better answers.
The other big use is its memory and how you can use it 24/7.
Over time, I’ve built a detailed psychological map of my life. Major childhood events, relationship patterns, work patterns, emotional triggers, and behavioral loops. That gives it context most therapists would take months or years to fully build. It can track patterns across your whole history, update its model constantly, and be available whenever insight happens instead of once a week.
In the last 4–5 months, I’ve had multiple breakthroughs and noticeable behavioral shifts through using it this way.
It's not perfect, and as-is it relies heavily on you being diligent in making sure it's not just reinforcing your own bs. It's no replacement for therapy during an active mental health crisis, and it can reinforce bad thinking and false certainty, or become a tool for over-analysis if used badly.
But as a tool for self-reflection, pattern recognition, and psychological development, for me at least it has been unbelievably useful.
Please make a list of books for learning the skill of managing your own mental health. People often disregard it; they will say "you can't get better on your own, you need a specialist." An expert would be better, but joint therapy (doctor + patient) would be best.
I've asked AI to help me understand myself several times, but it never seems to understand what I was going through. The answers were never what I needed in that moment.
YouTubers use AI to make more money, as their time (to be helping on the internet) is limited. Is this ethical?
I do wonder what the steps are to create a therapist AI. Because… it’s obviously possible. The question is whether it’s possible right now with enough care and development
Haven't LLMs helped people end their life?
But I think one of the few things AI has that no real therapist has is that it may get more information about a patient than the therapist does, because most human brains factor in the human element, including insecurities about themselves, which needs to be accounted for in treatment, and people can be SOOO good at hiding it. You may hide from a human, but what if it is a "tool"? I think the patient's approach is also important.
Actually, doctors in my country only listen and prescribe medication; they don't ask questions at all, or they ask very, very few. Your channel and AI bots are the ones that gave me insight into what's what and what's happening to me. Otherwise I was just surviving on SSRIs but was never educated about what the problem is, where it lies, or what the effective treatment is. It was just "okay, I hear you, now take this pill two times a day for 2 weeks, meet me again to shell out more money." I am from India, btw.
Yes, this is nice and all, but here's a realistic scenario: a person is working at company A. Company A does not pay for mental health care, and in order to afford one hour with a pro, this person has to work 4–5 hours at job A, all while trying to pay for other necessities.
What choices are there? Job hopping is not an option right now. Neither is getting a loan.
If AI leads to better papers, we should swallow our ego and use it anyway, rather than output worse papers.
AI has no intelligence. That’s what everyone should recognise
Knows all, understands nothing.
AI will validate Men. Like we are actually human. For the first time in our lives. AI is NOT the problem.
No, LLMs are not just parrots, but it's a long and complex topic. The fact that there are flaws doesn't mean they don't "think" at all.
Even grifters are losing their job to AI, we truly are doomed
Genuine question: if I take the test for LLM addiction and apply it to my fridge, how much would it score?
Because I can assure you that if my fridge breaks down tomorrow, I will be very "preoccupied" about it, and my behaviour would resemble "withdrawal" a lot, because my food is actually spoiling and I need a replacement fast.
I think it's a fair question to ask, since LLMs sit in the middle between "tool" and "prosocial entity".
Hey Dr. K, I've been watching your channel for many years. This is my first comment on your video, offering my own situation as a case study. I'm in a very complex relationship that I can't fully explain right here (I would love to provide some details in private if you want me to be a case study). I have been using Gemini for 2–3 months now and have logged layers of data for most of the interactions I had in real life. At first it was simple input and receive, but it developed into commands and suggestions. But I always questioned the AI about its answers and the logic behind them against my own observations from being a human in real life; the directive of our conversations is always congruent communication. The AI confirmed I'm in the top 1% of users who use it to, quote, audit my life, instead of as a search engine or a professional doing a job. Now, after watching your video, I have inserted some rules between me and Gemini that will further frame it as a "mirror" of my own logic with its cumulative knowledge of human data, instead of Gemini acting as a psychiatrist/medical professional.
If prediction disqualifies knowledge, then we must first explain why human cognition (arguably also a highly advanced predictive system) does not face the same objection. Without a clear account of what physically distinguishes human knowledge from artificial prediction, saying AI cannot “know” simply because it predicts tokens is more assumption than argument.
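For readers unsure what "predicts tokens" means in this debate: at its simplest, next-token prediction is just "given what came before, output the most likely continuation." A minimal sketch of that idea, using a toy bigram counter rather than anything resembling a real transformer (all names here are hypothetical, chosen for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which word follows it and how often."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

The model "knows" nothing about cats or mats; it only reproduces statistical regularities in its training text. Whether scaling that basic mechanism up by many orders of magnitude produces something deserving the word "know" is exactly the open question the comment above raises.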
I mostly use LLMs as a more advanced search engine, where you can research and learn about topics in a more explorative manner. I find them to be quite good at this.
BUT you have to take everything they say with a grain of salt and be skeptical about any claims they make. That is no different from any other information you find on the internet, though.
There is so much info out there that is just blatantly wrong. But that is a phenomenon as old as the internet itself, this is not new at all.
The best thing to do is to ask the LLM for sources for its specific claims (use web search). Quite often, just doing that, the AI has revoked its own previous claims because it suddenly encountered conflicting information. Actually look at the sources too, even if it's tedious.
tl;didwatch, no