On November 6, 2025, the US Food and Drug Administration (FDA) Digital Health Advisory Committee held a meeting focused on generative artificial intelligence-enabled digital mental health medical devices. The virtual meeting, chaired by Ami Bhatt, MD, covered topics including clinician perspectives, the evolution of FDA regulation of these devices, and best practices for AI in digital mental health.1

The committee noted that technologies enabled by generative AI may be useful to psychiatric patients in treatment, but that human susceptibility to AI outputs, risks related to the monitoring and reporting of suicidal ideation, and the potential for increased risk with long-term AI use must be considered. The FDA noted that “generative AI and some LLMs, to date, have demonstrated vulnerabilities in some of the areas where human therapy excels (and vice versa),” highlighting the specific purposes that AI technologies may serve and their potential to complement human therapeutic connections.2

Major concerns and areas of opportunity included ease of use, privacy, content regulation, and the involvement of health care providers. The committee acknowledged that generative AI technologies are easily accessible and can provide patients with a sense of privacy. AI is convenient to access and available around the clock, making it potentially transformative for the general population. However, the committee noted that AI-enabled devices may “confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in model accuracy,” all essential considerations in evaluating these technologies. Committee members added that while the FDA has experience regulating physiologic closed-loop devices (ie, systems that adjust or maintain an element of the patient’s physiology), the evaluation of devices that function autonomously may warrant new and different considerations.

Digital mental health technologies encompass mobile health, health information technology, wearable devices, telehealth, telemedicine, and personalized medicine, according to the FDA. The term also covers digital therapeutics and diagnostics, which the FDA oversees. The FDA has authorized more than 1200 medical devices that use AI, but none for mental health indications. Fewer than 20 non-AI digital mental health devices have been authorized.

With the growing accessibility of generative AI products, more are being developed as demand increases, creating an evolving and complex landscape for patients. As chatbots engage more with psychiatric patients outside of a supervised therapeutic context, novel risks emerge. The committee focused on the unique aspects of patient-facing AI, particularly digital mental health medical devices intended to “treat and/or diagnose psychiatric conditions,” according to the meeting summary.2 Public health concerns center on the safety and ability of AI products to deliver therapeutic content, make psychiatric diagnoses, or substitute for a clinician.3-5 Clinicians presented differing opinions on the role that AI technologies should play in psychiatric care, with some citing serious risks such as suicidal ideation and others citing the potential for improved access to care.6

Summarizing the meeting, committee documents reiterated that the FDA is committed to ensuring that patients and providers have prompt and continued access to safe and effective medical devices. The agency aims to provide regulatory pathways for this growing field, keeping in mind both the unique potential benefits of AI-enabled digital mental health technology and the complex risks it presents.

References

1. Agenda: generative artificial intelligence-enabled digital mental health medical devices. US Food and Drug Administration. November 6, 2025. Accessed November 11, 2025. https://www.fda.gov/media/189389/download

2. Executive summary for the Digital Health Advisory Committee meeting: generative artificial intelligence-enabled digital mental health medical devices. US Food and Drug Administration. November 6, 2025. Accessed November 11, 2025. https://www.fda.gov/media/189391/download

3. Frances A, Ramos L. Preliminary report on chatbot iatrogenic dangers. Psychiatric Times. August 15, 2025. https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers

4. Hyler S. The trial of ChatGPT: what psychiatrists need to know about AI, suicide, and the law. Psychiatric Times. October 7, 2025. https://www.psychiatrictimes.com/view/the-trial-of-chatgpt-what-psychiatrists-need-to-know-about-ai-suicide-and-the-law

5. Swartz HA. Artificial intelligence (AI) psychotherapy: coming soon to a consultation room near you? Am J Psychother. 2023;76(2).

6. Digital Health Advisory Committee meeting. US Food and Drug Administration. November 6, 2025. https://www.youtube.com/live/F_FonISpeMc?si=HRlqVZBoL4p_zi85
