Current efforts to provide care are often hamstrung by disconnected systems that silo data and prevent essential coordination.

The quality, reach and effectiveness of American healthcare depend on the technological infrastructure supporting behavioral health. When that infrastructure fails, the consequences extend far beyond psychiatry, undermining the broader system’s ability to reduce costs, manage chronic disease and prevent avoidable harm.
I encountered this constraint repeatedly as a charge nurse on an inpatient psychiatric unit. The information needed to identify risk earlier often existed, but it was scattered across systems that never communicated with one another.
A primary care physician documented worsening anxiety, but that note never reached the patient’s psychiatrist. A pharmacy record showed a lapsed refill, but it lived in a system connected to neither. When the patient arrived in the emergency department in crisis, clinicians started from scratch. Psychiatric history, treatment plans and critical context were locked inside a behavioral health record the hospital could not access.
That experience shapes how I think about AI in behavioral health today. Much of the public conversation reduces AI to consumer-facing tools, particularly large language models and chatbots. These tools represent one approach, but they do not address the underlying problem that surfaces repeatedly.
Healthcare systems lack the clinical infrastructure to integrate data, surface risk and support decisions before patients reach crisis. AI-enabled clinical tools could address these gaps, but only if we build them on the right foundation.
Less visible, but far more consequential, are clinical applications designed to support care teams at the front lines of the mental health crisis. These tools can surface warning signs earlier and connect the fragmented data that have failed patients for years.
The problem is that AI in behavioral health will be shaped by the same policy frameworks that historically excluded it. Decisions being made now risk recreating the infrastructure gap behind today's broken, fragmented system.
Policy created the infrastructure gap
The mental health crisis in the U.S. persists despite widespread recognition. Workforce shortages have pushed behavioral health care upstream to primary care, where clinicians now manage the majority of mental health conditions.
Integrated care models, including psychiatric collaborative care, emerged in response. Their effectiveness depends on clinical infrastructure to identify risk early, apply evidence-based practices and manage limited time across large patient panels. Without that infrastructure, behavioral health integration remains aspirational.
Policy decisions have played a central role in creating these limitations. Federal health IT reforms such as the HITECH Act and MACRA accelerated electronic health record adoption across medicine. But federal confidentiality regulations and fragmented oversight meant behavioral health providers were largely excluded from the incentive programs and interoperability requirements that drove modernization elsewhere.
The consequences are still felt today. Behavioral health data remains trapped in proprietary and often antiquated systems, largely isolated from hospital and primary care records. Feedback loops that are routine in other areas of medicine remain broken in behavioral health, leaving clinicians without the longitudinal context needed to intervene earlier.
This did not happen by accident. Successive waves of federal health IT reform modernized data infrastructure in physical medicine while leaving behavioral health behind. That history matters now. As AI policy takes shape in 2026, similar patterns are emerging. If these policies reinforce existing infrastructure gaps, behavioral health risks being excluded once again, and the conditions driving today’s crisis will persist. AI tools will fail for the same reasons health IT reforms did.
The cost of repeating past exclusions
Concerns about AI chatbots in mental health are legitimate. Consumer-facing systems that interact directly with patients without clinical oversight require careful scrutiny and safety net provisions. The potential for harm is real, and thoughtful regulation is appropriate.
But the intensity of that debate has created a blind spot. As policymakers and the public focus on chatbots, AI investment frameworks are taking shape. The tools that support care teams managing behavioral health conditions are largely absent from these discussions, putting them at risk of being excluded from critical infrastructure investments once again.
Clinical decision support tools in behavioral health are already showing what's possible. These include risk models embedded in clinical workflows, systems that analyze language to surface suicide risk missed by standardized screening, and algorithms that predict treatment response, identify patients likely to disengage from care, or signal worsening symptoms between visits. Patterns in symptoms, language and engagement often signal rising suicide risk or clinical deterioration before it appears in formal diagnoses or crisis encounters, creating opportunities for earlier intervention that current systems routinely miss.
Importantly, these AI models operate under provider oversight as clinical decision support systems, augmenting judgment rather than replacing it. Their impact depends heavily on whether behavioral health data can move across care settings and be acted on in time.
These tools matter because we cannot staff our way out of the mental health crisis. They can extend the reach of primary care clinicians – your local doctor's office – and other community providers already managing mental health conditions with limited support. But they can only function if behavioral health is embedded in the infrastructure investments that enable data integration and interoperability. Payment models must reward early identification and quality outcomes, not just service volume, to make those investments viable in practice.
AI will not strengthen behavioral healthcare if policy excludes it from foundational investments and frameworks. We have seen this pattern before, and we are watching it unfold again. Advocates must ensure that behavioral health is explicitly included now, while those decisions are still being made.
Matt Miclette is the head of clinical operations at NeuroFlow, a digital health technology company that supports integrated behavioral health delivery models.