In May 2025, the National Academy of Medicine (NAM) released An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, a framework intended to guide the development and use of trustworthy, human-centered artificial intelligence across health care. The Code arrives as AI tools—ranging from documentation assistants to clinical decision support—are becoming more common in telehealth and behavioral health settings.

For mental health and substance use clinicians, the NAM framework offers a practical lens for evaluating whether AI tools are ethical, equitable, and appropriate for clinical use. This article analyzes the Code’s core principles, explains why they matter for behavioral health, and outlines concrete steps clinicians can take to align with the guidance.

Understanding the NAM AI Code of Conduct

The NAM framework outlines ten principles: Engaged, Safe, Effective, Equitable, Efficient, Accessible, Transparent, Accountable, Secure, and Adaptive. Together, they emphasize patient-centered design, clinical safety, fairness, clarity, responsibility, data protection, and continuous improvement.

The Code also articulates broader commitments, including advancing humanity and ensuring equity, intended to align developers, health systems, regulators, and clinicians around shared expectations for responsible AI. While the framework is not regulatory, NAM positions it as a reference point for ethical alignment across the health sector.

More details on the framework are available directly from NAM’s overview of the AI Code of Conduct and the full report published by the National Academies Press.

Why the Code Matters for Behavioral Health

Although the NAM framework applies broadly across health care, its implications are particularly significant for mental health and substance use treatment delivered via telehealth. Behavioral health settings increasingly use AI-enabled tools such as risk screening algorithms, clinical documentation aids, conversational agents, and scheduling or triage systems.

These tools can influence clinical decision-making, patient engagement, and access to care. The NAM principles give clinicians a structured way to assess whether such technologies are validated for relevant populations, transparent enough to support informed consent, and governed in ways that protect patient safety and privacy.

Applying the NAM Principles in Practice

Engaged and Accessible: Centering Clients

The Engaged and Accessible principles emphasize involving people affected by AI systems throughout their lifecycle. In behavioral health, this can include explaining when AI is used, inviting client questions, and ensuring tools are usable by individuals with cognitive limitations, disabilities, low digital literacy, or inconsistent internet access.

Safe, Effective, and Adaptive: Protecting Clinical Integrity

Clinicians are encouraged to confirm that AI tools have been evaluated in populations similar to those they serve and to monitor outcomes over time. The Safe and Effective principles reinforce that AI should support—not replace—clinical judgment. The Adaptive principle highlights the need for ongoing evaluation and refinement as real-world data emerge.

Equitable and Transparent: Addressing Bias and Building Trust

AI systems trained on limited or nonrepresentative data may perform unevenly across demographic groups. The NAM framework encourages transparency about training data, performance limitations, and known biases. For clinicians, this means asking vendors how systems were tested and disclosing AI use to clients as part of informed consent.

Accountable and Secure: Governance and Privacy

Behavioral health AI tools must comply with HIPAA and, for substance use treatment, 42 CFR Part 2. The Accountable and Secure principles emphasize clear governance, documentation, and mechanisms for addressing errors or harm. Clinicians should be able to explain how AI contributes to care decisions and what safeguards are in place.

Practical Steps for Clinicians

To align with the NAM AI Code of Conduct, clinicians may consider:

Identifying where AI already functions in their practice, such as screening, documentation, scheduling, or patient communication.

Updating informed-consent materials to describe AI use and clinician oversight.

Requesting vendor documentation on clinical validation, bias testing, and monitoring processes.

Tracking patient outcomes and satisfaction to identify disparities or unintended effects.

Pursuing continuing education on AI ethics, governance, and regulation.

Designing implementation strategies that account for equity and access.

Keeping human judgment central in all AI-informed decisions.

Substance Use Treatment Considerations

Substance use treatment presents heightened risks related to privacy, stigma, and surveillance. AI tools used for relapse monitoring or risk assessment may unintentionally create coercive dynamics if not carefully governed.

Clinicians should ensure that AI use is voluntary, clearly explained, and supported by strong human oversight. Telehealth platforms integrating AI must comply with both telehealth regulations and substance use confidentiality laws. When supervised appropriately, AI may support stepped-care or triage models.

Requesting Transparency From AI Vendors

One practical way clinicians can operationalize the NAM principles is by requesting clear documentation from AI vendors. Asking about validation, bias testing, governance, and privacy protections helps determine whether a product aligns with ethical expectations before it is integrated into care.

Clinicians can also request patient-facing materials that support informed consent, consistent with guidance outlined in prior Telehealth.org coverage on AI-informed consent in mental health practice.

Challenges and Limitations

The NAM AI Code of Conduct is guidance, not enforcement. Evidence supporting many AI tools remains limited, and smaller practices may lack resources for formal governance or auditing. Regulatory, licensing, and reimbursement questions continue to evolve, requiring clinicians to remain attentive as AI capabilities expand.

Conclusion: Practical AI Guidance for Clinicians

The National Academy of Medicine’s AI Code of Conduct provides a structured reference point for clinicians navigating the growing role of AI in telehealth and behavioral health care. By applying its principles—engagement, safety, equity, transparency, accountability, and adaptability—clinicians can better evaluate whether AI tools support ethical, patient-centered care.

As AI adoption accelerates, frameworks like the NAM Code may help distinguish tools that meaningfully enhance care from those that introduce unnecessary risk. For mental health and substance use professionals, early alignment with these principles can support more responsible and trustworthy use of AI in clinical practice.

Disclosures:

AI tools may have assisted in drafting or editing; the author or editorial team reviewed and approved all content. 

This article is not legal or medical advice.
