Patients are using generative artificial intelligence tools for mental health support, and clinicians should routinely ask about this use as part of clinical assessment, according to a clinical review article published in JAMA Psychiatry.
Shaddy K. Saba, PhD, of New York University Silver School of Social Work, and William B. Weeks, MD, of New York University School of Global Public Health, synthesized emerging evidence on how patients use large language models for mental health support and outlined a structured, patient-centered framework for clinicians.
Cited data showed that more than 5 million US youth, or 13%, have sought mental health advice from artificial intelligence (AI) tools, with use reaching 22% among those aged 18 to 21 years. Among adult patients with mental health conditions who use large language models, nearly half reported turning to them for mental health support, including for anxiety, depression, and personal advice. Reported uses included emotional support, companionship, psychoeducation, and help processing difficult experiences, often between clinical visits or in place of clinical care.
Dr. Saba and Dr. Weeks described three main clinical implications. First, AI use may reveal concerns patients do not share with clinicians, including stigmatized thoughts or questions perceived as trivial. Second, these tools may shape how patients interpret experiences; prior analyses cited in the article found that large language models can provide overly validating responses, generate incorrect information, and offer guidance that may not apply to a patient’s situation. Third, without awareness of patient use, clinicians may be unable to address misinformation or integrate these experiences into care.
The article also summarized risks associated with these tools, including inaccurate or harmful outputs, failure to respond appropriately to suicidal ideation, reinforcement of harmful behaviors, and bias affecting patients with serious mental illness or those from racial and ethnic minority groups. Privacy concerns were also noted, as information entered into consumer tools lacks the privacy protections of clinical settings.
To address these issues, the researchers outlined a patient-centered framework: normalize use, explore benefits before concerns, elicit patient perspectives, provide information with permission, and maintain ongoing dialogue. These steps are grounded in established clinical communication strategies and are intended to integrate AI use into routine care rather than treat it as a one-time screening topic.
The researchers noted that evidence in this area remains limited and evolving. The article was based on previously published studies rather than new primary data, which may limit generalizability and the ability to quantify outcomes.
“Without routine assessment, patients are relating to these tools in ways clinicians cannot observe, developing habits they cannot shape, and potentially encountering harms they cannot prevent,” Dr. Saba and Dr. Weeks wrote.
The researchers reported no conflicts of interest.
Source: JAMA Psychiatry