Salt Lake City, UT — Huntsman Mental Health Institute today announced the publication of a pioneering framework designed to ensure that artificial intelligence (AI) systems used in health care are developed and deployed ethically, transparently, and with patient equity at the forefront. The framework, the Scalable Agile Framework for Execution in AI (SAFE AI), has been published in the Journal of Medical Internet Research (JMIR), a leading peer-reviewed academic journal for digital health research.
Authored in collaboration with healthcare AI partners, the framework provides practical guidance for small and medium-sized enterprises building medical AI technologies. It integrates ethical checkpoints directly into standard development workflows, helping organizations proactively identify and mitigate potential biases before they affect patient care.
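For illustration only, an ethical checkpoint embedded in a development workflow might look something like the following minimal sketch, in which a release is blocked unless each check passes. The checkpoint names and pass criteria here are hypothetical assumptions, not details drawn from the published framework.

```python
# Illustrative sketch only: one way an "ethical checkpoint" might gate a
# release pipeline. Checkpoint names and criteria are assumptions, not
# taken from the SAFE AI paper.
from dataclasses import dataclass

@dataclass
class CheckpointResult:
    name: str
    passed: bool
    notes: str = ""

def run_ethics_gate(checkpoints):
    """Run each checkpoint; allow release only if all of them pass."""
    results = [check() for check in checkpoints]
    for r in results:
        status = "PASS" if r.passed else "FAIL"
        print(f"[{status}] {r.name}: {r.notes}")
    return all(r.passed for r in results)

# Hypothetical checkpoints a team might wire into its workflow.
def bias_audit():
    return CheckpointResult("subgroup bias audit", True, "no gaps above threshold")

def documentation_review():
    return CheckpointResult("limitations documented for clinicians", True)

if not run_ethics_gate([bias_audit, documentation_review]):
    raise SystemExit("Release blocked: ethical checkpoint failed.")
```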
“AI is increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations,” said Warren Pettine, MD, researcher at the institute and senior author of the publication. “With SAFE AI, we provide a roadmap that ensures these systems are not only effective but also fair, transparent, and continuously monitored. Every patient deserves equitable care—especially those in vulnerable mental health settings.”
Key points:
AI is increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations.
Guiding ethical AI development for healthcare, particularly in behavioral health, advances fairness and transparency in mental health decisions.
IMPACT: The SAFE AI framework helps ensure that AI systems used in health care are ethical and transparent, with patient equity at the forefront.
Why SAFE AI Matters for Mental Health
As AI tools become more common in psychiatric and behavioral health care, concerns about fairness and bias are growing. Without intentional oversight, AI systems can unintentionally reflect or amplify disparities present in training data—potentially impacting the quality of care for already underserved populations.
The SAFE AI framework directly addresses this challenge by establishing rigorous processes for ensuring equity across patient groups. This includes ongoing monitoring for “bias drift,” subgroup performance evaluations, and clear communication strategies for conveying AI limitations to clinicians.
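To make that concrete, here is a minimal sketch of what a subgroup performance evaluation might look like in code. Everything in it is illustrative: the recall metric, the gap threshold, and the group labels are assumptions chosen for demonstration, not specifics from the published framework.

```python
# Minimal sketch of a subgroup performance evaluation, assuming a binary
# classifier and a demographic label per record; the 0.10 gap threshold
# is an illustrative assumption.
from collections import defaultdict

def subgroup_recall(records, threshold=0.10):
    """Compute recall per subgroup and flag gaps larger than `threshold`.

    `records` is an iterable of (group, y_true, y_pred) tuples.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0})
    for group, y_true, y_pred in records:
        if y_true == 1:  # recall only considers true positives and misses
            key = "tp" if y_pred == 1 else "fn"
            counts[group][key] += 1

    recalls = {
        g: c["tp"] / (c["tp"] + c["fn"])
        for g, c in counts.items()
        if (c["tp"] + c["fn"]) > 0
    }
    gap = max(recalls.values()) - min(recalls.values()) if recalls else 0.0
    return recalls, gap, gap > threshold

# Example: flag a recall gap between two hypothetical subgroups.
data = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0),
]
recalls, gap, flagged = subgroup_recall(data)
print(recalls, round(gap, 2), flagged)
```

Tracking a gap like this across model releases is one simple way a team might watch for the kind of bias drift the framework describes.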
“Responsible AI supports our mission to advance mental health knowledge, hope, and healing for all,” said Pettine. “This framework gives healthcare organizations the tools to ensure AI strengthens—not undermines—that mission.”
Research Powered by Collaborative Partnerships
The framework was developed by a research team within the institute, supported in part by the Huntsman Mental Health Foundation, which is credited in the paper’s acknowledgments. The institute collaborated with key partners including:
MTN (AI company)
Data Science Alliance (San Diego nonprofit)
Nemsee LLC

A Model for Translational Research
The SAFE AI project exemplifies the institute’s commitment to translational research, bringing academic rigor directly into real-world health care innovation. This work aligns with its new Translational Research Building, currently under construction at the University of Utah. The facility is designed to accelerate collaboration between researchers, clinicians, and industry partners, providing an ideal environment for groundbreaking projects like SAFE AI to advance from concept to clinical impact.
“This is the kind of research that has immediate, meaningful impact,” said Pettine. “We’re not just studying how AI is used in mental health; we’re helping define how it should be built.”
Advancing Responsible AI in Behavioral Health
The publication positions the institute as a national leader in guiding the ethical development of AI systems for healthcare, particularly in behavioral health, where patient vulnerabilities and complex biases require heightened oversight.
“When AI assists in mental health decisions, fairness and transparency are not optional,” said Pettine. “SAFE AI catches problems before they cause harm and keeps patient equity at the center.”
###
This study was partially supported by the National Institute on Aging (P30AG073104), the US Army xTech AI Grand Challenge, the Huntsman Mental Health Foundation, Mountain Biometrics, Inc., Nemsee, LLC, and the Data Science Alliance. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the National Institutes of Health, the US Army, or the US Government.