Published on Feb. 28, 2026

As artificial intelligence becomes more prevalent in mental health care, experts are grappling with its promise, its risks, and its potential impact on patients. A new publication from the American Academy of Arts and Sciences examines the current state of AI in this field, outlining key questions around effectiveness, oversight, privacy, and equity. A panel discussion at the publication’s launch event featured insights from clinicians, researchers, and innovators, who agreed that the rapid adoption of AI demands sharper definitions, clearer expectations, and a policy conversation aligned with the reality clinicians and patients already face.

Why it matters

The integration of AI into mental health care raises important concerns around trust, privacy, and the potential for unintended consequences, especially for vulnerable populations. Navigating this new landscape requires interdisciplinary expertise to ensure AI is deployed responsibly and equitably to support, rather than replace, human-centered care.

The details

The Academy’s project on AI and mental health care, chaired by Paul Dagum, Sherry Glied, and Alan Leshner, has tracked the accelerating pace of AI adoption in this field. Clinicians are exploring AI to assist with screening, triage, and between-session support, while members of the public are increasingly turning to general-purpose chatbots for mental health support.

The panel discussion at the publication’s launch event highlighted key distinctions between purpose-built therapeutic AI systems and general-purpose chatbots, which operate differently and require distinct approaches to regulation, evaluation, and oversight. Experts emphasized the importance of ensuring AI augments rather than replaces human clinicians, and expressed concerns about the risks of AI for high-risk populations, such as those with severe mental illness.

The Academy began its work on AI and mental health care in the fall of 2023. Its new publication, “AI and Mental Health Care: Issues, Challenges, and Opportunities,” was launched on December 9, 2025.

The players

Paul Dagum

Founder and former CEO of Mindstrong, and co-chair of the Academy’s project on AI and mental health care.

Sherry Glied

Co-chair of the Academy’s project on AI and mental health care, and professor at New York University.

Alan Leshner

Co-chair of the Academy’s project on AI and mental health care, and former CEO of the American Association for the Advancement of Science.

Kacie Kelly

Chief Innovation Officer at the Meadows Institute, and member of the project’s steering committee.

Arthur Kleinman

Psychiatrist and professor of anthropology at Harvard University, and member of the project’s steering committee.

What they’re saying

“There’s tremendous promise, but the concerns are real.”

— Paul Dagum, Founder and former CEO of Mindstrong (American Academy of Arts and Sciences)

“Humans are essential.”

— Arthur Kleinman, Psychiatrist and professor of anthropology at Harvard University (American Academy of Arts and Sciences)

“General-purpose AI chatbots are different from AI designed to deliver therapy.”

— Kacie Kelly, Chief Innovation Officer at the Meadows Institute (American Academy of Arts and Sciences)

What’s next

The Academy’s publication on AI and mental health care offers a framework to identify what is known, what remains uncertain, and what kinds of evidence are still needed to guide the responsible integration of AI into mental health services.

The takeaway

The rapid adoption of AI in mental health care has created a gap between practice and policy, underscoring the need for sharper definitions, clearer expectations, and a policy conversation that matches the reality clinicians and patients already face. Navigating this new landscape requires an interdisciplinary approach to ensure AI is deployed responsibly and equitably to support, rather than replace, human-centered care.
