Newswise — Salt Lake City, UT — As generative AI rapidly expands into mental health care, Utah has emerged as a national leader in developing a pragmatic, forward-looking regulatory framework that balances innovation with patient safety. In a newly published npj Digital Medicine commentary, researchers and policymakers detail the state’s early regulatory review of mental health AI agents and the legislative approach it helped shape.

Authored by Nina de Lacy, MD, of Huntsman Mental Health Institute at the University of Utah, and Zachary Boyd, PhD, of Utah’s Office of Artificial Intelligence Policy, the paper synthesizes lessons from a statewide, multi‑stakeholder review involving clinicians, people with lived experience, technologists, academics, and regulators. The findings underscore both the promise of AI-driven mental health tools and the urgency of regulating them thoughtfully.

“We are at a regulatory inflection point,” said de Lacy. “Chatbots that offer mental health support are already in millions of people’s pockets — the question is no longer whether to regulate them but whether we can regulate them intelligently enough to preserve the genuine access benefits while protecting the most vulnerable users.”

The doctor is not in – but the chatbot is 

Mental health chatbots and AI-enabled agents are increasingly used to address longstanding gaps in behavioral health care, including workforce shortages, long wait times, and affordability barriers. Half of U.S. residents live in mental health professional shortage areas, and many individuals with anxiety or depression remain untreated. 

Utah is one of these shortage areas, and although the University of Utah is continuing to prioritize training more mental health professionals, the immediate need is undeniable. Evidence suggests that well-designed mental health AI tools can meaningfully reduce symptoms for some users, particularly those with mild to moderate conditions.

“The doctor is not in — but that has always been true for the majority of people who need mental health care globally,” de Lacy said. “The unstoppable arrival of chatbots to fill these service gaps is a symptom of much older issues. We talk about concrete best practices to safely integrate chatbots into the continuum of mental health care.”

“The demand signal for this kind of service is certainly there,” said Boyd. “Mental health support remains a top use case of AI in the larger population. It is possible that in the coming years we will see a profound and durable change in how people access care and support themselves in their mental health journey. The government should provide a clear pathway for this technology to develop and potentially benefit our residents.”

Learning from stakeholder tension 

One of Utah’s most striking insights was the degree of divergence among stakeholder perspectives. Clinicians tended to emphasize potential harms and professional risk, while people with lived experience often highlighted empowerment, accessibility, and real-world benefits. Academics raised concerns about bias and long-term effects, while everyday users voiced both enthusiasm and unease about emotional dependence on AI tools.

“What Utah’s regulatory review taught us is that stakeholder divergence is not an obstacle to good policy — it represents important signals,” de Lacy said. “When clinicians, patients, technologists, and legislators want different things from the same technology, that tension is telling you something important about the risks and benefits.”

Utah’s review intentionally elevated underrepresented voices, particularly those of people with lived experience, in a policy environment traditionally dominated by professional and institutional stakeholders. 

“The professional societies are organized and active in politics, but the people who are actually served, those with lived experience, are so easy to leave out of the conversation. We spoke with adults living independently with mental illness, people in care facilities, the parents of affected children, and others who would not ordinarily be asked and who articulated their own interests very differently from the caregivers. It deeply informed our perspective,” Boyd said.

A safe harbor for responsible innovation 

Rather than banning or tightly constraining mental health AI, Utah adopted a novel regulatory approach. The state reinforced consumer protections around data privacy and advertising while creating a “safe harbor” for mental health AI agents that implement clearly defined safety guardrails. These include pre-deployment safety testing, crisis escalation protocols, clinical oversight, and ongoing monitoring.

This approach aims to encourage responsible innovation while avoiding unintended consequences — such as pushing consumers toward riskier, general‑purpose chatbots in the absence of safer, specialized tools.

“Adaptive regulation is not a euphemism for weak regulation,” de Lacy said. “Let’s learn from social media and build oversight frameworks for generative AI that incentivize safe innovation and are capable of keeping pace with a technology that changes faster than any legislative cycle.”

A model for policymakers nationwide

The authors argue that Utah’s experience offers a roadmap for other states and countries grappling with how to regulate fast-moving AI technologies in sensitive domains like mental health. Core recommendations include shifting from risk-only to risk-benefit analysis, developing detailed and evidence-based best practices, and designing regulations that can evolve alongside the technology.

“There is no option to entirely stamp out the use of AI for mental health,” the authors conclude. “The challenge is to guide its development toward safer, more effective tools that genuinely improve population mental health outcomes.”

About the authors 

Nina de Lacy, MD, is with Huntsman Mental Health Institute and the Department of Psychiatry at the University of Utah.

Zachary Boyd, PhD, is the director of Utah’s first-in-the-nation Office of Artificial Intelligence Policy at the Utah Department of Commerce and an associate professor of mathematics at Brigham Young University.