From Bottlenecks to Breakthroughs: Innovations in Behavioral Health Integration

– All right, it works. I'll start again now that the recording is in progress. Thank you for coming
to the webinar today from the AHRQ Academy. My name’s Anne Roubal. I am the project director
of the AHRQ Academy, and I’m really happy and excited to present this webinar today, From Bottlenecks to Breakthroughs: Innovations in Behavioral
Health Integration. This is the first in our series, and so there’ll be more coming,
more webinars coming out from us. But we're really excited today to have Dr. Anna Ratzliff and Dr. Kari Stephens on to tell us some of their
really important research and work that they’ve been
doing in the last few years. So I’m gonna just quickly, I
know a lotta people are on here ’cause they already know what integrated behavioral health is, and we wanna get to our speakers, but I know there’s a few
new people and new faces, so we just wanna give
a quick overview here of what is integrated behavioral health as well as what we do at the Academy. And so integrated
behavioral health is a team of primary care physicians and
behavioral health clinicians who work together to really
improve the care of patients. And you can see on the left there, this is our lexicon from the AHRQ Academy. And you can see in the green,
there’s clinical functions, and then, in the blue, you can see some organizational supports
and community functions. And when all of these
work together really well, we get really good integrated
behavioral health care, and that results in improved patient care, lower costs, other improvements, such as better health outcomes,
lower rates of mental health symptoms, and better mental health care, as well as more effective health care utilization overall. Why is integrated behavioral health challenging to implement? So integrated behavioral
health care is great, and it works really well, but there’s a lot of reasons that it’s challenging to implement, and that’s sort of what our two speakers are gonna talk about today,
some of those reasons that we’ve found to overcome
some of these barriers. So there are numerous barriers. I wrote numerous here, and we limited it to five on the slide, but there are a lot more as well. Often, we hear people say that the financing and payment models that underlie integrated behavioral health care are really tough. It's often easier to
just refer a patient out versus actually building
integrated care or doing it well. There’s stigma associated with that, as well as workforce shortages. We hear a lot that there’s a shortage of primary care providers, and so, how are we gonna
train primary care providers who are also doing integrated
behavioral health care when there’s already such a shortage in primary care overall? And so, within this space,
we’re all doing our best to improve and bring out
integrated behavioral health care, but acknowledging that
there’s all these forces that make it challenging to do so. And so, where do we come in? We’re the Integration Academy, and we are sponsored by
AHRQ or funded by AHRQ. We provide a slew of
resources and tool kits. This shows you our website at the bottom, which I know is how you all registered for this webinar, so I hope that you take
a little bit of time or took a little bit of time
to play around on our website if you haven’t been there before. We have lots of tool
kits, other resources, issue briefs on specific topics,
tools and resources really for all levels of providers, patients, facilities trying to start
integrated behavioral health care, or if they've already started it, resources on specific topics. And we provide a lot of that for integrated behavioral health care as well as for medications for opioid use disorder (MOUD). And we update this regularly. At least once a month, there's
an update on our carousel, which is what you’re seeing there, which means there’s a new story or a new resource that we’re highlighting. We do a lot of partnerships
with other agencies and other organizations working
in behavioral health care. And so we’re always looking for new ideas, so if you go to our website and you think that there’s a tool missing or a resource missing, we’re
happy to hear from you. Or if you're wondering whether we have any tools about a particular topic, we're happy to hear from you. And if you love something that we've done, we're also happy to hear
things that are useful. And so we really do
hope you find some time to spend on our website. We also do these webinars,
like we’re hosting today, as a resource, as well as
attend many conferences. So you'll see us around at some of those, presenting some of the
resources that we provide. Check back ’cause we have a lot of exciting stuff coming up this summer, including other webinars as
well as some updated tool kits and really exciting
issue briefs coming out. So we’re gonna transfer
over to the webinar now. I’ll provide a brief speaker introduction, and then our two speakers
will give their presentations. Each will have about 30 minutes, and then we’ll do an open
question-and-answer forum. So please submit your questions through either the chat box or the Q&A, and we'll work on our
side to organize those and hold them for the end
after both speakers have talked, so that we can ask
questions for both speakers. But please ask your question in the moment as you’re thinking about it, and we’ll keep a rolling list of those. The PI on the project, Garrett
Moran, will also join me at that time to help
filter those questions, so you’ll hear his voice
a little later as well. And then, we’ll wrap up. So that will take us
through the webinar time. I'm really excited, as I said, to start this webinar series off with Dr. Anna Ratzliff
and Dr. Kari Stephens. They’ve done a lot of work in
integrated behavioral health. Dr. Ratzliff’s gonna talk to us about the NIH HEAL-funded
CHAMP trial, and then Dr. Kari Stephens is gonna talk to us about some digital tools and implementing them in the workforce. Their research is really exciting. I've gotten a sneak preview, but I'm really excited for you all to hear about the work that they've done. So I'm gonna introduce
each one a little bit before each section here. So, when Dr. Ratzliff is done, then I’ll introduce Dr. Kari Stephens a little bit more as well. So Dr. Ratzliff is a
professor, psychiatrist, and national expert on collaborative care, and specifically on
training teams to implement and deliver mental health treatment in primary care settings. And she’s gonna present to us today on the NIH HEAL-funded
Collaborating to Heal Addiction and Mental Health in Primary
Care, the CHAMP trial, addressing dual diagnosis
mental health disorders and opioid use disorders in primary care. So I’ll transfer it over to Dr. Ratzliff. Thank you for being here today. – All right, I think I’m good, and I will just see if
I can advance my slides. Okay, perfect. So really nice to be here. Thank you for the invitation. I am at the University of Washington, and I’m really excited
to talk about these data because they’re pretty
hot off the presses. So we will go ahead. And before I jump into
the details of the study, and I will talk about both the results as well as some of the
implementation experiences of the teams that participated,
I just wanna make sure to acknowledge all of the
people who supported this work. We had an MPI team that’s listed here, as well as numerous groups involved in really driving the work
that I’ll be presenting, as well as acknowledging the funding from the National
Institute of Mental Health. So this study was really part of a large initiative
called the HEAL initiative. And so I just wanted to acknowledge that this was one of a number of studies that were funded during
this period of time, from basically 2019 to 2024,
that were really focused on trying to both
understand and manage pain and also improve the treatment of opioid misuse and addiction. Okay, so this was really
the state of the world around opioid use disorders during the time that this
project was initiated. And you can see on this slide that one of the big
concerns that people had was that there were a lot of
people dying every year from opioid overdoses. So the goal of this
study was really to see what are effective
strategies to engage people who are at risk for death from overdose, especially those who have
an opioid use disorder. And there were a couple of big ideas that were incorporated into this trial. The first was that when
we can actually get people onto medications for opioid use disorder, it really helps retain them in treatment. And that, basically, almost every day that someone is in treatment and not using, it lowers
their risk for overdose. So that was a really important concept, and one of the goals of
this study was to see how can we actually keep
people more supported and staying engaged in medication for opioid use disorder or MOUD. The other big concept was that this was really a trial looking at co-occurring disorders. So one of the big ideas was that we know that
the collaborative care model, which is a model of
integration, is effective for treating mental health disorders in primary care settings. So I’m gonna briefly review what the collaborative care model is. I know there’s an audience here who many of you will probably
know about these trials, but I wanna make sure it’s really clear what we were testing in our study before I present the results. So the model of collaborative care that we were really adapting
was based off of the work of Dr. Jürgen Unützer,
and these were trials that were published in the early 2000s. The basic concept of collaborative care is that you’re taking that relationship between the primary care
provider and the patient and really adding in
additional team members and a structured approach
to mental health treatment in primary care settings,
basically utilizing the ideas in chronic care management. So, in this case, you’re adding in someone who’s a behavioral health care manager. That person has two functions, both to help coordinate
care of the entire team and also to be able to deliver brief behavioral interventions to that patient right there
in the primary care setting where they're already coming and, ideally, have an established relationship with their primary care provider. The team also includes a psychiatric consultant. And in this case, the
psychiatric consultant often is not seeing patients in person but is rather providing support and indirect consultation to the team through a process called the
systematic caseload review, where they meet weekly
with the care manager, talk about each patient,
and help with both diagnosis and treatments, offering
patients that whole range of treatments right there in primary care. So either brief behavioral interventions from the behavioral health care manager or medications prescribed by
the primary care provider. So some of the tools and resources that the team really
uses to provide this care include a registry to track the population of patients that have been identified as needing mental health
treatment, active treatment with evidence-based approaches in that full range, both medications and brief behavioral interventions, and the regular use of measures to track response to treatment over time. So those are some of the core features of the collaborative care model that we were going to
adapt for this study.
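To make the registry idea concrete, here is a minimal Python sketch of the kind of tracking a registry supports. The class, field names, and thresholds are illustrative assumptions (a PHQ-9 total of 10 or more as elevated, and a 50% drop from the baseline score as treatment response, are common measurement-based-care rules of thumb), not the study's actual tooling.

```python
from dataclasses import dataclass, field

# Minimal sketch of a collaborative care registry entry (illustrative only;
# real programs use registry tools built into or alongside the EHR).
@dataclass
class RegistryEntry:
    patient_id: str
    phq9_scores: list = field(default_factory=list)  # repeated PHQ-9 totals over time

    def needs_treatment_change(self) -> bool:
        """Flag patients who are not improving: latest PHQ-9 still >= 10
        and not at least 50% below the first (baseline) score."""
        if len(self.phq9_scores) < 2:
            return False
        baseline, latest = self.phq9_scores[0], self.phq9_scores[-1]
        return latest >= 10 and latest > baseline * 0.5

caseload = [
    RegistryEntry("a", [18, 16, 15]),  # elevated, not improving -> raise in caseload review
    RegistryEntry("b", [14, 6]),       # past the 50% improvement mark -> continue current plan
]
flagged = [e.patient_id for e in caseload if e.needs_treatment_change()]
print(flagged)  # ['a']
```

A registry like this is what lets the care manager and psychiatric consultant review the whole caseload systematically instead of only whoever shows up.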
Okay, so I just wanted to say that the evidence base for the collaborative care
model is well-established, with now over 100 randomized
controlled trials showing that this works for various conditions, which I’ll show in the next slide. It also has been shown that this model of care improves really
across the quintuple aim. So improved access for
population health outcomes, reduces total cost of
care, has been associated with provider satisfaction,
patient satisfaction, and has been shown to have
equivalent or better outcomes across a wide range of
different populations. The data shown on the slide was actually from that original IMPACT trial and showed that at every site that
implemented collaborative care, patients with depression, in
this case, had better outcomes when using collaborative
care compared to usual care. So this is a model that
has robust evidence but really hadn’t been tested
for co-occurring disorders, which is what we’ll get into talking about in the next few slides. It has been shown, though, to be effective across a range of different
conditions, including a couple of papers looking at
substance use disorders. Okay, so what did we
actually do in this trial? At the University of Washington, we really wanted to test an adaptation of the collaborative care team to address co-occurring disorders. So this is a slide that
really describes the team and the functions in the
intervention arm of the trial. So, in this case, we adapted that role of the primary care provider to be able to prescribe both medications for mental health disorders
but also medications for OUD. Again, remembering that that’s one of the most effective treatments we have to really save lives
in opioid use disorder. We had the care managers trained up to be able to coordinate care
for both those disorders, including behavioral activation
for opioid use disorder that was adapted to help
support those patients. And then, the psychiatric
consultant being able to support the team in the work of addressing both
mental health disorders, in this case, it was anxiety,
depression, and PTSD, as well as opioid use disorder. So we used all those same key principles that I’ve already described. And then, the measures that
were used in this study were whatever the appropriate
mental health measure was depending on the diagnosis and a new four-item opioid
treatment response inventory that we really created for this study to be able to track response
to the MOUD treatment. So this is the basic design of the study. It was a multisite
cluster randomized trial. We had pairs of clinics from the states that you see here on this figure. All of them were a matched pair of clinics in the same organization, and they were randomized to either provide collaborative care for both mental health disorders and OUD or provide collaborative care for mental health disorders on its own with the opioid use disorder being treated however they normally would’ve done it in the clinic before the trial. So, in some cases, that was referring out. In some cases, that might still be a PCP prescribing that MOUD. All of the clinics had to be willing to have a provider that was willing to prescribe medications
for opioid use disorder, had to have a behavioral
health care manager and a psychiatric consultant. So really had to have those
foundational pieces in place to deliver collaborative care. And we had 24 clinics
that enrolled patients.
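As a side note on the design, a matched-pair cluster randomization like the one described can be sketched in a few lines of Python. The clinic names and seed below are made up for illustration, not the trial's actual allocation procedure.

```python
import random

# Illustrative matched-pair cluster randomization: within each pair of
# clinics from the same organization, one clinic is randomly assigned to
# the co-occurring-disorders arm and its partner to the mental-health-only arm.
rng = random.Random(2019)  # fixed seed so the allocation is reproducible

matched_pairs = [("clinic_1a", "clinic_1b"), ("clinic_2a", "clinic_2b")]
assignment = {}
for clinic_x, clinic_y in matched_pairs:
    intervention, control = (clinic_x, clinic_y) if rng.random() < 0.5 else (clinic_y, clinic_x)
    assignment[intervention] = "MH + OUD collaborative care"
    assignment[control] = "MH-only collaborative care"
print(assignment)
```

Pairing clinics within the same organization before randomizing helps keep organizational differences balanced across the two arms.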
Okay, so study aims. We had three major areas that we were interested in studying as part of this trial. The first was: does systematic screening for opioid use disorder help us identify more people with OUD? We were curious whether universal screening
would make a difference in being able to recruit
patients for this trial and also be able to offer
patients access to treatment. Then, the main question
was is collaborative care for OUD and mental health
disorders more effective for patients with co-occurring disorders than if we were just
delivering collaborative care for the mental health disorders only? So we’ll talk a bit
about those data as well. And then, what kind of sustainment
supports help maintain this high-quality collaborative care for co-occurring disorders, you
know, once the trial’s over? So, for aim one, these results are actually
already published. And the answer to the question is that we did not find
that systematic screening for OUD helped us identify
more people with OUD. So this was a little bit
of a surprising finding. Essentially, what we did is we had the clinics basically say, who’s your population of patients? How many of them in the six months prior to initiating screening
had a new diagnosis of OUD? And then, we looked after the initiation of screening was put in place. You know, look at your
population of patients. How many new diagnoses of
opioid use disorder are there after we have routine screening in place? What you can see here is that some of the clinics actually
had a negative number, so fewer people were actually
identified afterwards. And the median pre-post increase
was only 1.5 patients, so it was a really small number
and not a meaningful number.
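To illustrate the pre-post comparison being described, here is a tiny Python example with made-up counts (not the study's data) showing how a per-clinic change and its median would be computed.

```python
from statistics import median

# Illustrative numbers only (not study data): new OUD diagnoses per clinic
# in the six months before vs. after universal screening was turned on.
pre  = {"clinic_a": 4, "clinic_b": 7, "clinic_c": 2}
post = {"clinic_a": 6, "clinic_b": 5, "clinic_c": 4}

changes = [post[c] - pre[c] for c in pre]   # per-clinic pre-post change
print(changes)                              # [2, -2, 2] -- some clinics can go negative
print(median(changes))                      # 2 here; the study's reported median was 1.5
```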
So we really concluded that while all clinics had the goal of implementing this, most also had some barriers around it, and we really didn't
see in this case that the use of regular screening
had, you know, any impact and probably not enough
benefit to outweigh the costs just in terms of time and effort that the clinics put into it, at least in the study sites that were included in this trial. We were really interested in learning a little bit
more about why that might be, and so one of the things that we did is actually
interviewed a lot of the clinics. So we had a formative evaluation throughout this entire trial. And these are some of the experiences of the people that were participating. I think everyone was a little surprised. I think when we first started, people were like, we’re gonna have a
million people showing up that now have a new diagnosis of OUD, and
we didn’t really find that. So I think that a lot
of people really felt like they were surprised that, you know, you had to
screen a lot of patients before you maybe found a
patient that needed attention. And a lot of times, they felt like they were gonna find undiagnosed OUD, but actually, they generally
knew in their practices who were the patients
that had this diagnosis and maybe needed help. Okay, so aim two was focused on the question of whether the collaborative care team, if focusing on co-occurring disorders, is actually more effective than if you focus on the patient's mental
health disorder needs only? So, for this, we recruited 254 patients. We had really good follow-up
engagement with patients. So these were the survey completions, so we were really pleased with how we were able to retain patients over the six months of the trial. And our primary outcomes that we were looking at
is the number of days of non-prescribed opioid use. Again, really saying even a couple of days might mean a big
difference in people’s survival given the lethality of overdoses
in opioid use disorder. And then, we also were looking
at mental health functioning. These are a few of the key demographics. You know, these were largely
a middle-aged group of patients. I think it's important to
know, and I’ll highlight, that most of the patients actually that ended up being included in the study were on medications for opioid use disorder at the beginning. We were surprised about this. We actually thought we would
be initiating a lot of MOUDs, so this was actually a slightly
different patient population than we anticipated recruiting. But they all had mental
health symptoms burden at the beginning of the trial,
so were eligible for that. So these are the results. These are new data. They’re under review
right now for publication, so you’re getting a bit of a preview. But this is the main outcome. And you can see that there
was a small difference, but it was statistically significant between the control and the
intervention at both three and six months, showing
that there was less use in the population that
received the intervention, again, that collaborative
care that was really modified to address both their opioid use disorder and their mental health disorders. And even though these numbers
are small, again, remembering that this is a very lethal
disorder when not treated, these are clinically meaningful
data, we believe, as well. I’m gonna show just a couple other views of these results just
’cause I think there’s some interesting data here to consider. You know, you can look at that also as the number of patients with any use in the last 30-day window. And you can see that another
way of looking at this is that our intervention
lowered the total number of people that were using in the intervention group
compared to the control. And you can also see that
the number of patients with no opioid use at baseline who returned to use was
actually also lower. So this seems to be really that the collaborative care
intervention was really helpful in patients maintaining their engagement in their MOUD treatment. However, we did
not find a difference between the two groups in the mental health
functioning of patients. So what we found is that both groups basically had pretty good,
small, modest improvements in their mental health functioning. And, you know, in some ways,
I’m not super surprised in these results because we
know that collaborative care, even collaborative care
on its own, is effective for treating mental health disorders. You know, it was interesting to me that perhaps addressing
the OUD did not have as much impact on these results
as we would’ve expected. So that’s one of the reasons
why science is important is that you learn new things. In the last couple of
slides for my presentation, I’ll just talk a little bit
about some of the experiences of the teams that were involved
in delivering this care, some of the things that
they found challenging, and some of the ways in which
they addressed those barriers. So I think the first thing
that’s really important to note is that it was really challenging to keep these patients
engaged in treatment. You know, it was really
hard to find patients, and then it was hard to keep them engaged to actually initiate treatment, so we ended up
under-recruiting for the study. And I think part of it just
was both fewer patients than we expected being taken
care of in primary care, but also real challenges in actually even if a
patient was identified, getting them engaged in care. And that there was some sense that perhaps stigma, you
know, both in the community as well as in the clinic
systems might be contributing to some of these challenges. Another thing that really came
up was just the challenges around the structure of primary care. This study started in 2019, so
you can understand that most of the trial was actually
conducted during the pandemic, and primary care really
struggled at times to be able to have the kind of access
open that they hoped to have to be able to rapidly meet
the needs of these patients. So, you know, if somebody
was identified on a Tuesday, it might be hard to find an appointment with that primary care
provider in a timely enough way to actually get them started on treatment or really be able to address
their treatment needs, and that some of that
appointment scheduling processes were really thought to
make it more challenging to actually get patients
into treatment as quickly as people would’ve ideally wanted. And that some of that was also driven just with the reality
that, in primary care, it’s a very productivity-driven
environment, and the sort of longer appointment times or more complex care needs of
these patients was in tension with the needs for
clinics to be productive and seeing a lot of patients. Oh, we also found that
some of the training that we provided was
really, really important and that people needed a lot of support around becoming comfortable
really both making a diagnosis of, you know, OUD and also
having those conversations with patients when that
screening came up positive and that patients were really complex and that a lot of the people were saying that the mentoring and
support that they got during the trial was really helpful in sort of, you know, kind of
getting over that barrier of being comfortable engaging
in this work regularly as part of their primary care practice. And I think that even with
that, there were some providers in some of the practices
that were still concerned or, you know, had hesitation about getting engaged in this work. You know, I think people
calling this not necessarily resistance but overwhelm, and sort of struggling with the idea of taking on another complex patient, was one of the challenges in that. You know, over 65% of the
clinicians actually perceived that providing this care
was really time-consuming, and just, if you’re a busy
primary care provider, that can really feel like a major barrier in wanting to engage in this work. And that we also found lots of people who were really deeply
committed to their patients and connected to their communities and wanting to offer these
services in their clinic and really focused on we’re
just gonna make it work even though it is really challenging. So you saw sort of that
whole range of attitudes when you looked at the
clinic team members. So the third aim was: what kinds of sustainment supports help maintain high-quality collaborative care for co-occurring disorders? And due to our challenges
with recruitment, we really didn’t get
to meaningfully engage in this aim during this trial, but we did, towards the
end of the study, engage in assessing the participating teams' commitment to sustainment and really
what barriers they perceived or challenges they perceived and provided some support
around addressing those. So I think it was, even
though all of those challenges that I just described came
up, I think people felt like when they did have a patient and were able to engage them, that that was really meaningful work and that they really wanted
to maintain that capacity. And most of the time, what that
looked like is it was really that they had a collaborative care team that was addressing mental
health disorders generally and now could have that capacity to take on a patient with OUD when needed. So this wasn’t that they built
a whole separate capacity for these patients, but
really broadened the scope of what their collaborative
care team could really address in their primary care setting. And I think that they really, that commitment to sustaining
this was really driven by the comments that you see here, which is that the interventions
provided were beneficial and lifesaving and that
there was really a commitment to patient care in that way. Some of the things that
clinics really identified as needs for sustainment
included continuing to stay clear on their vision for why
they were doing this work, continuing to maintain access
to the collaborative care that could include opioid
use disorder treatment. There’s a lot of questions on how to maintain clinical skills, especially because one of
the things that we found is that there was a smaller number of patients than we really expected. In general, this was like
between one and five patients on a person’s caseload of
collaborative care patients. And so maintaining all the skills and comfort in really addressing the needs of patients with OUD was challenging ’cause there just wasn’t a
lot of opportunity to iterate and work with patients
consistently with those needs. And then, of course, and this was mentioned at the beginning, how do we pay for this care? Most of the clinic systems
were either billing or were trying to work towards billing for their collaborative
care team in general and including these
patients in that strategy. I’ll just mention a couple of limitations. I mentioned this, but I think
it’s important to acknowledge that we really do have
to consider these results in the context that most of the patients in both arms actually
were already on MOUD. So it’s interesting to
think about: we don't know if these same results
would be generalizable to patients that didn’t
start out on medications for opioid use disorder. And I think that’s really
some of the future directions that we see would be
important in this work: do we see similar results with patients who are newly identified with opioid use disorder and might need that access to the medications? And I think I will stop there. I did include all the
published paper references in our slide deck. So I think that the
slides get put up later if you need that. But these are some of the publications that have already come out of this trial. And as I said, the main outcomes paper is currently under review. So I look forward to answering questions during the question-and-answer section. I can hand it over now. – Thanks, Dr. Ratzliff. So we are gonna hear
from Kari Stephens next. That was fantastic. I should pause for a second and say that. And then I also wanna remind people to throw their questions in the chat ’cause we will take those after
Dr. Stephens' presentation, so we can sort of do a round-robin style. So Dr. Kari A. Stephens is a practicing clinical psychologist in primary care, a clinical
research informaticist, and a vice chair of research at the University of Washington Department of Family Medicine. And she’s gonna share with us many challenges
facing primary care for meeting the needs of
mental and behavioral health, as well as how some digital apps and solutions can increase quality of care and address the human bottleneck. So thank you, Dr. Stephens. – Thank you, Dr. Roubal. And thank you, Dr. Ratzliff, for a great update on that big trial. I'd gotten to see your interim results, so it was fun to see
where you guys are winding up at the end of it. So thank you to the Academy
for inviting me to come. I’ve got lots of things
to share with you today. I’m also a practicing psychologist in an integrated behavioral
health team within primary care, which I absolutely love
doing Tuesday afternoons. So I get to be a clinician, but most of my time is
spent being a researcher. And I’m gonna see if I can
try to master the delay in the slides that Dr.
Ratzliff was talking about and figure out exactly how we hit this advance going forward. But if you hear me pausing, it might be because I’m waiting for
slides to move forward. So, here we go. So learning objectives today. I wanna touch on a few different things. And I’ve got kind of a
packed set of slides, but I’ve tried to scale back,
which is hard for me to do, so I might go through a few
things a little bit quickly. But as Dr. Ratzliff said, I think that we’re expecting the
slides to get shared out, where I provide a lot
of references to things if folks wanna learn more. But by the end of this talk, I’ve got two sort of
main goals for you all, which is, first of all, to have
you all walk away being able to describe some of the
challenges that we have in primary care to meet
these complex mental and behavioral health needs, as well as how are we
measuring in the level of integration within these practices. What are some advances we’ve been making
recently related to that? And then I also wanna describe
some recent innovations that we’ve been working on in several different study
contexts that we’ve done that both talk about how
we’re advancing the level of integration in behavioral health in very practice-centric ways and what’s going on with
digital health solutions and how might that fit into
the picture of integration. So, as we jump ahead,
I’m gonna kind of breeze through a couple of background things because Dr. Roubal and Dr.
Ratzliff have both talked to you all about some of these
stark facts that we have. But you’ll see on the left, this is a Center for
Workforce Studies study that the center within our
department has done here at University of Washington
that really profiled nationally the behavioral health provider shortage. And I just think it’s worth
noting right here, you know, when you look at rural
counties versus urban counties on the left there,
there are huge shortages across all of our counties and particularly in rural counties. So it is just really paramount that we figure out how to
do this integration better for the sake of all the suffering going on amongst very real people in communities. And I think all of us have
either been hit ourselves or have loved ones,
people within our circles that we know that have
been hit with both mental and physical health chronic conditions that are pretty devastating for folks.
just this steady increase. You know, and, to Dr. Ratzliff’s point, with the pandemic coming in, it’s disrupted lots and lots of services. And you’ll see here the trend going up across all age ranges
within patient populations with chronic conditions in
general that are both medical and mental health-related. So, you know, the problem
is increasing, essentially. We also know that the occupational burnout of our primary care
workforce is quite high. So the study on the left
is a study we published where we looked at almost
700 folks that account for every type of person that
works in a primary care clinic within that sample across 42 clinics in over a dozen states
in the United States, and the burnout’s just really high. And you can see the
different types of people that work within a primary care setting, and you can see the levels
of burnout that we found. And this is just one
little study that we did, but, you know, this is repeated amongst many burnout studies. The good news side of it is, on the right side, a study we didn’t do, but one that was published
in 2023, does show that
if we do team-based care in a multitude of ways, not
just necessarily the ones that Dr. Ratzliff and I
are talking about today, but in all kinds of ways,
just team effectiveness within these clinics actually helps to really improve the providers' experience, leads to less burnout, and then potentially better
retention of those providers where they intend to stay
longer in those positions. So team-based care,
really, really important, and the takeaway is that we need to figure out how to do it in whatever shape or form. We also did a national study looking at what the trend looks like in terms of integrated behavioral health as one particular umbrella
of how we do team-based care. So I’m gonna jump into what are we doing kind of in this space
around measuring this. A first stab that, you know, we’ve taken at this was a study that
we did in partnership with the American Board
of Family Physicians of a sample of over 25,000
boarded family medicine docs and asked them “do you work in a co-located way in some fashion with somebody who’s doing
behavioral health as a specialty?” So it’s exciting to see the
blue coming forward on this map where you can see that most
states have some presence of that, but there’s also huge
geographic variation in that. Another thing you'll find in the study that I have below here as well is that that trend is increasing. As of a couple of years ago, pretty close to half of all clinics represented by boarding physicians (we had a 100% response rate, because filling out the survey was part of their boarding activity) were working co-located with some kind of behavioral health provider, which is really exciting to see. Okay, so the models of integrated behavioral health do differ across settings. And you heard Dr. Ratzliff
talking about collaborative care, which is that third column of X’s there. The column on the left,
this is from a paper that we published a few years ago that I’ll talk about
in a couple of slides, which is a cross-model framework that we developed that sort of stretches across all these different
models of integration. And you can see on the
left different kinds of particular activities within practices that these models tend to address. I just wanna make the point here that there are different forms of doing this in different ways, and so measuring this
is really complicated. One of the things to know is that there are several
different common ways to measure how integrated a practice is. There are four that
kind of float to the top as being the most popular in terms of having been
used, having been published, in terms of folks using this
as a self-assessment tool that you can ask practices to fill out. So we’ve got the IPAT at the
very top, the PPAQ, the MeHAF, and then AHRQ’s Integration Playbook Self-Assessment Checklist. These are four examples, but I will say, kind
of broadly across them, without going into lots
of detail on each one, that they tend to lack behavioral anchors and don't really capture what
practices actually do when they’re doing
integrated behavioral health. Some of these measures,
too, are very particular to only certain models of integrated care, and none of these particular ones have been psychometrically validated. So one of the innovations that’s come out in the last few years is the
Practice Integration Profile or the PIP. And I bring this up because
I’m gonna show how we used it in a large national pragmatic trial, and so I wanna get you guys familiar with this particular tool because it tended to
address those weaknesses or at least aimed to. And you can see on the
right the different domains that it tries to capture and actually have behavioral anchors that help us understand what
a practice is actually doing. And I wanna give homage that the PIP really
grew out of the lexicon for behavioral health and
primary care integration that the Academy, AHRQ, has
helped support over the years, and it’s a great resource to look at that really gives a deep, in-depth set of definitions and descriptions of what integrated behavioral
health is all about. And then, the PIP is really bringing that pragmatically forward
as a measurement tool that also has been
psychometrically validated to be able to really see what
a site’s self-assessment is of how integrated they are. The PIP itself does have these
five different categories, and you can see a little bit here around what each of them tend to measure. Clinical services are asking things like do you address
certain types of diagnoses within the practice and what
kind of workspace you have? Practice workflow tends to look at different behavioral
anchors around that flow between the behavioral health provider and the medical provider and how well they’re interacting and really joining together on the care. The patient engagement category
is looking at what kind of strategies are being
used within the practice to keep patients engaged in
care and not lost to follow-up. And then integration and
sharing methods are really about what kind of systems
are you using together. Do you use the same electronic
health record system? Are there communication strategies and shared treatment
plans that are occurring across the team? And then case identification as well. What kind of screening are you doing? What patients have unmet needs? Do you have registries in place to track these things, et cetera. And what you can see on the right, and I’ve also cited Dr. Mullin’s webinar. He has a recorded
publicly available webinar at the link below that’s
actually a really lovely overview where you can get more
in-depth information about what I just talked about across all of these different measures and how the PIP fits in, and look and see that
they’re now evolving them from PIP version 1.0, which
is what we used in the study because PIP 2.0 didn’t quite exist yet. But essentially, this
is a quick description: they moved from the six domains, did some more psychometric validation (the PIP1 was also psychometrically validated), refined it into five domains, and really took the workspace domain, which was the sixth one from version one, and pushed that into clinical
services within the PIP2 because they just tended
to hang better together rather than be separate subcategories. So, I think, at this point, if anybody is interested in
using this going forward, really the PIP 2.0 is
the more robust resource. Another innovation that
we’ve been working on to try to measure integrated care is to actually bring the
lexicon forward, too, to create some pragmatic definitions of what is integrated behavioral health in terms of what it looks like. We took a mixed methods approach with this and published this a
few years ago, working with actually a lot of collaborators who also built the PIP and others to do a qualitative and
quantitative joint methods approach to really talk to
experts and practitioners and various community engaged
folks that were working with us on this large pragmatic trial, and then, nationally,
folks that were working in all kinds of settings,
from the Department of Defense to federally qualified health centers, academic institutions, training
environments, et cetera. We also interviewed folks within focus groups about how we define and what we call this, based on Donabedian's model of quality care, which really
is about having processes and having structures that
add together to really lead to the positive outcomes
that we’re shooting for. So what you see here is
just a high-level summary of the five different
groupings of processes. Each bullet represents a unique process that’s important to understand. If you’re doing any kind of
integrated behavioral health, these are processes that we would expect to have some kind of presence. Now, of course, every
practice doesn’t have each and every one of these, nor does it necessarily have
all of the structures you see in that yellow column on the right, which are really about what
kind of physical structures or financial, et cetera,
structures are really in place to allow for this sort of work to happen. So we also did a crosswalk, and you can see all that in
the paper I referenced below if you wanna learn more about this, with the PIP itself and looked and found that about 80% of the processes you see in these columns are
represented and measured in that PIP version one and two, and then about 44% of the
structures were also included in the PIP. We took that cross-model framework and did a couple of things with it that I think were pretty cool to see. On the left side, we
collaborated with ICSI on a Minnesota Health Collaborative that they were working on. And really, they formulated this model into their call to action where they were trying
to define a gold standard across the state of Minnesota
and all of the health systems that treat about 70% of the
population of the state. And they were really clamoring
for what are they aiming at, what are they trying
to create when it comes to advancing integrated behavioral health? And we were very humbled and excited that they thought that
this cross-model framework that we were working on could
potentially fit that bill, and sure enough, it did. So, you know, I’m gonna
call that a little bit of a real-world pilot test to see that this really did have salience for real community-based
work that was going on just around quality
improvement that was really key to help the populations
of their communities. We also published another
paper that looked at the MeHAF, a measure that I mentioned
before, that got used within that statewide effort that measured over 100 clinics using
that particular measure of integration and mapped that measure to the different processes and
structures that are related to our cross-model framework. And what you’re seeing on the right side is a graphical illustration
of latent classes that we found across those clinics. Essentially, each line is a group of mutually exclusive primary care clinics out of those 105 clinics that show you how fully
integrated they are across the dimensions
you see on the bottom. And I know the text is really small. So, really, the takeaway
here is we were trying to naturalistically see what
are clinics tending to do around integrated
behavioral health naturally. We know that we’ve published
all of these wonderful models. We know that we have, you know, really lovely definitions of them, but we don’t know what
actually that involves in the real world. So it was really fun to be
able to apply these two things to see there were sort of four groupings that tended to fall out, at least in the state
of Minnesota.
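For readers curious about the method: the paper used latent class analysis. As a loose Python analogue (not the authors' actual code), a mixture model can group clinics by their integration-domain profiles; the data below are random placeholders standing in for per-clinic domain scores.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Rough analogue of grouping clinics into latent classes by their
# integration profiles. Rows = clinics, columns = integration domain scores.
rng = np.random.default_rng(0)
clinic_profiles = rng.uniform(0, 100, size=(105, 6))  # placeholder data

gm = GaussianMixture(n_components=4, random_state=0).fit(clinic_profiles)
classes = gm.predict(clinic_profiles)  # one of four "class" labels per clinic
print(np.bincount(classes))            # how many clinics fall in each grouping
```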
So one of the things we did, moving on from ways in which we wanna measure
integration is really figure out how to meet practices where they’re at. There is this giant umbrella
of behavioral health that we’re trying to impact when we treat folks in primary care. In primary care, we see everything. We see the whole person, and
we see from birth to death. Every possible problem that
could hit a person physically presents somehow in primary care. So when you look underneath
this behavioral health umbrella, there are many different dimensions in which we have evidence-based behavioral health interventions that could help all of
these different areas. And practices are each sort
of a family of their own. I’m in family medicine, so I really think in terms of community
and family quite a bit, and we really need to
address each of these groups with the respect they deserve to sort of meet them where they’re at. So there’s different ways in which we’ve spent time doing that. One way is to try to put up
free materials, for example, and really adapt a lot of the evidence-based behavioral skills training into pragmatic snippets of free training. Here are examples on the
left of three of those that I did in collaboration
with the AIMS Center and other folks in Dr. Ratzliff’s Department of Psychiatry
and Behavioral Sciences at University of Washington, where I used to be faculty as well before I moved into family medicine. And on the right, Dr.
Ratzliff and I worked with Dr. Unützer and Dr.
Katon to put a book together, and this was really meant for
a primary care target audience to understand better how
to treat specific kinds of disorders in a step-by-step fashion. These are sort of, you know, ways in which we’ve done this for a long time. I think moving forward,
we’re really interested in trying to figure out
how to give something to the practice on their
own that’s a bit more robust to help them target something specific and improve their integration
in a way that they chose, in a way that they felt
empowered and enabled to do. So we pitched a project to PCORI, the Patient-Centered
Outcomes Research Institute, and got funded for an $18.5 million pragmatic trial to do this across the country. This was a two-arm, parallel, superiority, pragmatic cluster-randomized
trial, a mouthful for a really large trial,
that really tried to work in the real world and say
here’s a tool kit, essentially, a lean management-based
tool kit that we wanna test and see if we can actually improve integrated behavioral
health within the clinics and can we also improve
patients with multiple of these chronic conditions
that I just mentioned. And you can see on the
right side the whole list of what these different
chronic conditions were that we were targeting. And I’ll talk a little
bit in a future slide about what the requirements
were to get into the study. But at the bottom, the reference
here is the protocol paper, so you can learn more
about the study there if you wanna know more about
exactly how we executed. The intervention itself, if I sort of describe
this tool kit to you, these were really, again, based
on lean principles that come through manufacturing for
how you try and try again until you really get there
and make a difference and find that right size solution. And this also took advantage of quality improvement
methods, including PDSA cycles. So you see on the left, there really are these four
components within the tool kit, four different stages, if you
will, that we worked through. So these kind of come sequentially in time with each practice where, at
first, you really need to have leadership
engagement within the clinic. You know, we really found over
time through a lot of studies that if you don’t have
leadership engagement within clinics to make
these kinds of changes, you're sort of dead in the water, so you need to start there. Then comes a planning
phase of really scoping and figuring out the
boundaries of the workflow and the redesign that you wanna do based on these lean principles. So the tool kit really
gives these sort of methods for how you go about doing that, but each clinic is gonna
discover their own answers in that phase. And then redesign that workflow in the next stage of the tool kit with recommended tactics
that we also provided. There were many of these tactics that they got to self-choose. None of them were prescribed as required. And then, implement those changes again through these PDSA
quality improvement cycles that, you know, most practices
are somewhat familiar with. And I will say the
practices loved this trial. We got very high enrollment
for those that we approached, you know, I think, much because they knew that they could learn out of
this tool kit lots of methods and strategies that they could reuse for other targeted activities, not just the one we were asking for. A second part of this intervention was the education module that we had. There were 70 of these
asynchronous modules for every single member
of the primary care team. So regardless of what your
role was in the clinic, there were online materials
for you to watch and look and learn about what is
integrated behavioral health. Because our belief really
is that integration is done by the entire group of
community-based folks within that clinic itself. It’s not just done by kind of
this person or that person; otherwise, it doesn't really come to fruition across the whole clinic. We also had interprofessional courses for all practice member roles, so anybody that was clinically practicing had some courses that targeted
different practice members in different ways that
were individualized. And the trainings ranged from four to 14 hours. The third and last component that we had of this intervention was
really a coaching aspect, where we had both a lean management expert and an integrated behavioral health expert, who was a psychologist, work together with each of the clinics that were in the intervention arm
for up to two years. So, for anybody that wants access to the tool kit itself,
it is published online, and that’s what you see on the right if you wanna be able to access it. So any clinic currently can now pull up this tool kit at will. There were four particular
research questions that this study overall
was trying to address. The first was does using this tool
kit actually help adults with multiple chronic medical and behavioral health conditions? And does it increase the level of integration in the practices? Those were kind of the two
primary questions of the study. We also wanted to look
at what factors support and impede successful
integration and then costs. And some of that’s already been
published in three and four. A primary outcomes
paper was also published that you see referenced
below for one and two. What I’m gonna talk about
today is hot off the press of a study I just published
that actually looked at whether or not integration improved once we looked at the pragmatic nature of the way in which we
implemented the tool kit. So that’s what I’m excited
to share with you today. But there have been several studies at this point that have
come out of this trial. Okay, and so, in terms of methods: over this trial, we had 121
practices that were invited. We had 43 randomized practices. In the end, 22 control
and 20 active practices, with one that withdrew. And we had almost
3,000 patients as well. So you can see on the left, our inclusion criteria
really included practices that had at least one
primary care provider, a co-located behavioral health provider that was at halftime or more, a shared electronic health system, and they needed to be able to provide their EHR data for the study. And then, on the patient
side, we only were looking at adults who were seen more than twice in the last two years to make sure that they were actually engaged
in care within the clinic. And to be eligible, they had to have either more than one
chronic medical condition and a behavioral health condition or three chronic medical conditions. So they didn’t necessarily have
to have a behavioral health or mental health condition as long as they have three
or more medical conditions. I’m not gonna go too in-depth ’cause there’s a lotta slides
here to keep working through, but you can certainly
read more in the paper about all of the analytic methods. But, essentially, we used a
multilevel mixed-effects model to look at both the
intervention outcomes related to improvement in practice integration and whether or not we actually
improved patient outcomes. And then, we adjusted for
a few different things that you see on the left and the right. So, essentially, both of them
were three-level mixed model with these repeated measurements and then nested, as you
see here in the slides. So let me not spend too much time getting into the weeds on that and just jump into where were we looking
at patients from here and then get straight
to the results, I think, which is the most exciting part. So there were 42 practices, as I mentioned, across 12 states. And this study was done
much like Dr. Ratzliff’s. We did bridge over into the pandemic, although the majority of our
study was done pre-pandemic, but a lot of our final
outcomes, unfortunately, came after the pandemic hit as well. We also, to give you some description of who we included, we
mostly had family medicine or mixed family and
internal medicine practices throughout the study. The mean number of
behavioral health providers at the site were about 1 1/2, and the mean number of primary
care providers was about six. And those are in FTEs,
not physical human bodies, but the amount of full-time
equivalent people working in the practice in those roles. In terms of patients, we had, you know, about 1/2 of them were married, about 1/3 of them were
disabled or unemployed, about 2/3 had a household
income of $50,000 or less. A little over half had a high
school education or less. And these demographics overall were generally pretty representative of the health system regions
in which we had recruited from. And then the mean number
of total chronic conditions that our patients had, most of
them had over four in total, and over 80% of them
either had chronic pain, and over 80% of them
also had hypertension. And then you see about
half had depression, close to half had diabetes,
40-ish percent had arthritis, and then about 1/3 had anxiety as well. Our patient, or I’m sorry,
rather practice outcomes, as I mentioned earlier, was the Practice Integration Profile. The way that we used this
particular measure was not to ask one person to rate that, but actually have four people
at the practice do that, and then we averaged across without weighting any one
opinion more than the other. And in particular, the
four people we asked each of the practices to
capture this measure with was a medical primary care provider, a behavioral health
provider, an administrator. It was suggested to use a clinic manager or somebody equivalent or
appropriate on that level. And then someone of their choice that they thought was somebody who would have a worthwhile opinion. And we let folks know that, you know, oftentimes people respond to that questionnaire from their own lens, and so it’s important to get
these multiple perspectives to really get the most out of the measure.
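As a trivial illustration of that unweighted averaging (the scores below are made up, not real PIP data):

```python
from statistics import mean

# Four raters at one practice, each weighted equally.
ratings = {
    "primary_care_provider": 62,
    "behavioral_health_provider": 70,
    "administrator": 55,
    "staff_choice": 66,
}
practice_score = mean(ratings.values())  # no rater's perspective outweighs another
print(practice_score)  # 63.25
```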
So the results for those practice outcomes were very exciting for us as we did this nested modeling. Essentially, if you look at that vertical black bar at zero in the figure, any line that crosses it is a non-significant result. What we hypothesized and hoped to see is that these estimates would
shift to the right, which indicated improvement in integrated behavioral
health at the clinic. At the very top row, what you
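To make that reading rule concrete, here is a minimal, illustrative check of whether an interval estimate crosses zero (the intervals are invented, not taken from the study):

```python
def crosses_zero(lo: float, hi: float) -> bool:
    # A confidence interval containing zero means the effect is
    # not statistically significant at that confidence level.
    return lo <= 0.0 <= hi

# Hypothetical interval estimates like the plotted lines:
print(crosses_zero(0.8, 4.2))   # False -> clear of the bar, significant
print(crosses_zero(-0.5, 3.1))  # True  -> crosses the bar, not significant
```

At the very top row, what you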
see is the total PIP score, and then what you see below
that for the PIP version 1.0 are the six different
subcategories, essentially, that the PIP measures. And so, you know, what you see are actually pretty, pretty big increases that we think are meaningful
within the practice, not just in the PIP total overall but across the majority of those subcategories we were looking at. So it’s pretty exciting ’cause this was actually, you
know, a pretty different kind of intervention that you see out there. Much less prescriptive about exactly what practices need to do, but more empowering them to figure out themselves
what they needed to do, and seeing those changes occur. We measured the patient outcomes across three different scales. We used the PROMIS-29, which is a global set
which is a global set of actually several
different function measures that are included, and you see those listed
in the bullets there in that first box. But overall, we were looking
at physical function, fatigue, sleep disturbance, social participation,
pain interference (as you can see, we had a lot of folks with chronic pain, a very common plaguing issue within our primary care population), and depression and anxiety. And then, in addition, we
sort of doubly measured depression and anxiety: the PROMIS-29 includes its own depression and anxiety measures, and we also administered the PHQ-9 and the GAD-7. And I’ll just mention that the PHQ-9, which measures depressive symptoms, is very commonly used in practice for value-based contracting and for general delivery of care, so we thought it was important to have that measure as well. And the Generalized Anxiety Disorder-7 scale is similarly widely deployed in primary care.
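For readers less familiar with these instruments: the PHQ-9 sums nine items, each scored 0-3, into a 0-27 total with conventional severity bands. A minimal scoring sketch (my illustration, not anything from the study):

```python
def score_phq9(items: list[int]) -> tuple[int, str]:
    # Nine items, each scored 0-3, summed to a 0-27 total and
    # mapped to the conventional severity bands.
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    if total >= 20:
        band = "severe"
    elif total >= 15:
        band = "moderately severe"
    elif total >= 10:
        band = "moderate"
    elif total >= 5:
        band = "mild"
    else:
        band = "minimal"
    return total, band

print(score_phq9([1, 2, 1, 0, 1, 2, 1, 1, 0]))  # (9, 'mild')
```

(The GAD-7 works the same way over seven items, for a 0-21 total.) So, jumping to the same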
way to read this graph for the patient outcomes, you’re looking at that vertical black line, and anything that crosses
over that vertical black line is showing you non-significant results. So, in looking across all
of these different measures of a patient population
of close to about 3,000 and these 42 clinics,
what you’re seeing is that we really didn’t find
patient improvements per se. With essentially all of them crossing, scores were not significantly different, with a slight exception for anxiety. But I would say I don’t think
it’s clinically meaningful. It was such a tiny difference
that, you know, I’m not sure that that’s really worth
interpreting necessarily. So I think that, you know,
our challenge in looking at this is that one piece of data I wish so much, of course, in retrospect, that we had is what level of service delivery all of these patients actually received. In other words, it’s possible that some part of this cohort never went back to the clinic during our observation period. Because the trial was an intent-to-treat design, and we wanted to look at that cleanly across arms, we weren’t able to account for level of service utilization in these analyses. So I think future studies
really need to look at those patient outcomes more closely in relationship to just
the pragmatic delivery of who’s actually
accessing these services. Okay, I will mention one
last thing about this study, and this is in the primary outcomes paper by Dr. Littenberg and company that we all published back in 2023: higher integration, according to the PIP, was associated with better patient outcomes in the clinics at baseline. So this isn’t necessarily
associating anything with the intervention itself, but this is an interesting
finding to see that association. Now, we can’t say that’s causal per se, and there’s lots of X factors that could potentially account for that, but I do think it’s interesting to see. And, again, all of these
that are not crossing that dotted vertical line are significantly higher or lower when the PIP score is higher. So, again, correlation only, but we still need a lot more research to understand how integrated behavioral health, in the various different forms it takes, really relates to improved patient outcomes. So, as promised, I’ll turn for my last 13 minutes, it looks like, to talk to you
about digital behavioral apps. Where do digital
therapeutics fit into this? How can they help when
it comes to integration? Let me first start with this: we’ve just completed a feasibility study. We presented some of the results last summer. We haven’t published the final paper yet, but I can share with you the poster of some of the preliminary results that we presented nationally. This was a feasibility study we conducted with a non-FDA-approved, so a brand new, hot off
the press digital health app. We don’t know how well
it could perform or not, but it was worth trying to
test out whether it is feasible to ask patients in primary care who suffer with chronic pain to try out an app, and whether an app intended to embed things like cognitive behavioral therapy and acceptance and commitment therapy into a patient’s daily interactions with it actually improves pain interference over time. And so, this was not an efficacy trial. It was not powered for that,
it only as far as you can. But what we did see was a trend. And, essentially, if you look in the box that I’ve highlighted here in our table, there’s a little negative
two number in there. And then, again, I know
the text is fairly small. But essentially, what you’re seeing is that we measured at baseline and at one, three, and six months after using the app, and you see a drop in the PROMIS score. A difference of two or more points tends to be clinically meaningful, and we did see a drop of about that size a month after usage of the app.
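As a back-of-the-envelope illustration of that minimally-important-difference logic (the scores are made up, not the study’s data):

```python
PROMIS_MID = 2.0  # a change of two or more points is often treated as meaningful

def meaningful_drop(baseline: float, followup: float,
                    mid: float = PROMIS_MID) -> bool:
    # Lower pain-interference scores are better, so we check the drop.
    return (baseline - followup) >= mid

print(meaningful_drop(baseline=60.1, followup=58.0))  # True: ~2-point drop
```

That timing also lines up with the fact that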
most apps are only used for about a month. And I wanna say, I was an attending in our pain clinic for many years. I think I’m a pretty good pain psychologist, if I do say so myself, on some level, but I don’t think I’m that good. I don’t think I necessarily
get my patients better in a month. So I think it’s exciting to see that people might experience
kind of a real difference and drop with just using
a mobile app potentially. But again, we have to be very cautious in how we interpret these
very preliminary results. So, my team’s been excited to look at digital therapeutics overall, and I wanna spend the rest
of the time talking to you about what’s going on
in that field right now. We’re seeing a lot in the news. We’re seeing a lot about AI. And what does all this mean? Digital health apps,
what do I mean by that? There’s sort of three different boxes that I want us all to be thinking about. On the left-hand side, we’ll kinda start with the highest vetted types of digital health apps
out there in general, and those are what we
call digital diagnostics and digital therapeutics. These are approved by the FDA. “Digital diagnostic” and “digital therapeutic” are technical terms the FDA actually uses when it approves a product, meaning: we have vetted this as safe, and we have vetted this as having high-quality efficacy data. What they mean by digital is software as a medical device of some kind. And what’s interesting
is that more than 50% of these digital apps that
the FDA has approved tend to be mental health-related in some way. Others are also related to
common behavioral issues like insomnia and chronic pain
and things like that as well. So behavioral interventions are often now getting represented
in what FDA’s approving. A second category is health and wellness apps, and these are what I’m
putting in yellow here, use at your own risk, so to speak. So these are direct-to-consumer products. A lot of us have them on our phones, like the Calm app that I
put on here as an example. A lot of us as providers do
talk about some of these apps with our patients, even in clinical care, but these are not vetted per se. And they can change at any time because, you know, these are generally owned by for-profit companies. They’re not really supposed to be treatment and, therefore, are not necessarily evidence-based, so we need to be somewhat cautious about that. But they certainly have helped us extend tools in various ways, like progressive relaxation or activity tracking and things like that. Then, there’s the third category,
you hear the most about in the media, which are
these entertainment apps. You know, and a real
question mark, I think, for all of us in health care is: are they causing harm? You know, is TikTok being shut down, sold, who knows? What’s going on there? And chatbots like Character.AI are getting sued over very scary stuff. When I was updating this talk, even just in the last month or so, and seeing what the latest was with the Character.AI company itself, there’s actually a lawsuit now because a child died by suicide related to that company, and they’re now being sued for an actual death. Whereas just a few months ago, when I looked, there were several lawsuits claiming harm to children. So, you know, these
things are getting scary out there for sure and
warrant a lot of attention. So I thought, for this talk, I would include just a
slide here on chatbots and throw out this question for us all. Are they good, or are they bad? And just give you a couple of things that have been published here so that you can kind of
keep up on what’s going on ’cause, I think, for all of
us, this is moving faster than any of us can keep track of. The American Psychological Association has published an article
that’s up at the top left, and it’s really talking about the use of generic AI chatbots
for mental health support and the dangerous trend
that’s coming out of that. And so I think that’s a great article to, you know, get you more
clear on what the FTC, the Federal Trade
Commission, is talking about in terms of risks there and
what the APA says in terms of what they would recommend
you talk to patients about, you know, particularly parents of teens, whose use of these types of things keeps increasing more and more. “Reuters” also published
on that particular lawsuit that I just mentioned about this family who very, very sadly lost
their son to suicide. On the right side, you
can see a little bit of the summary of what’s being said about these chatbots at a glance overall. So the FDA approving
mental health chatbots has not happened at this point. There is no AI chatbot that
has gotten through FDA, so let’s just be really clear about that. These direct-to-consumer
mental health chatbots are totally unregulated at this point, and they might not be grounded in psychological science. You might’ve heard of Woebot or Therabot; those are examples. And then, you know, direct-to-consumer entertainment chatbots are particularly dangerous because they’re sort of
extremely unregulated, claiming and feigning themselves
as therapists, you know, and again, particularly to kids or folks who are really suffering with mental illness and looking for a quick connection, and these chatbots are so freely available now. If you’re having trouble
finding digital therapeutics, you’re not alone there. So I’ve put up a couple of resources to try to help us keep track of this. There is a group called the
Digital Therapeutics Alliance. They’re a nonprofit. They can’t keep totally up to date, but I would say, you know, I’ve found them to be a fairly good place to start to go see what they’re touting. They do not have a
comprehensive product library, but the apps that they do
have seem to be well-aligned with what FDA has approved. And so it’s kind of a
quick way to get a summary of some of the more hot ones, particularly from the companies that are putting out
multiple of these apps. And then, on the right side, the American Psychiatric Association has actually put together an initiative they call the App Advisor, and this is really just
an educational tool. It’s not a library of what these are, but it’s a nice page that
sort of helps educate people about what you should look for if you’re trying to determine whether a mental health app is actually good to use. So it’s a great resource
for clinicians certainly and potentially for the community as well. So, in general, FDA-approved digital
therapeutics do a lot of things. And I thought, you know, I’ve been looking at a lot of these. We currently are getting
signaled by the NIH that we might be receiving a large grant to actually study how we can
integrate these digital apps into integrated behavioral
health settings, so I’m very excited to potentially get that project launched. And so, as we’ve been
looking and kind of scouring what apps are out there, I put together what is sort of Kari’s version of six bullets on the kinds of things we see these apps do. So, at the top, you see a lot
of them retrain things for us. They retrain our brain,
sort of how we think, but even retrain some of the
neurology and interaction that our neurons are
having across the brain. They can help us do things
to retrain our muscles and our nervous system. A lot of them also offer psychotherapy and biofeedback in
various digital formats, and some of those are through apps, some of them are using
virtual reality technologies and wearables. Some of these are using
integrated wearables to do things that I thought were
pretty fun and creative, like interrupting PTSD-related nightmares before they escalate. So, wearing an Apple Watch (this is not an Apple Watch, this is my regular watch), you can have one on your wrist while you sleep at night, and it will be able to read your biometrics to know that you’re likely experiencing a nightmare, as you train that Apple Watch to learn what that looks like for you at night. And then it will give a little burst of sensation, enough to rouse you so that you don’t fully wake, but it interrupts that nightmare from progressing.
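Here is a deliberately simplified sketch of that kind of closed loop; the thresholds, sensor fields, and haptics call are all hypothetical, not NightWare’s actual logic:

```python
import time

def looks_like_nightmare(sample: dict, profile: dict) -> bool:
    # e.g., heart rate and movement both elevated relative to the
    # user's trained nighttime baseline
    return (sample["heart_rate"] > profile["hr_threshold"]
            and sample["motion"] > profile["motion_threshold"])

def sleep_loop(read_biometrics, vibrate, profile):
    # read_biometrics and vibrate stand in for the watch's sensor
    # and haptics APIs (hypothetical callables).
    while True:
        sample = read_biometrics()
        if looks_like_nightmare(sample, profile):
            vibrate(intensity=0.3)  # gentle, sub-waking burst
        time.sleep(10)              # sample every 10 seconds
```

Then, there are others that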
do a lot of adjunctive support for psychiatric medications that people might be
taking for various issues. So, in other words, targeting populations that are taking medications and not necessarily responding
as robustly as we would hope, and then being able to use these apps and actually seeing those
results step up in improvement. Some are helping promote
lifestyle changes, like taking medications on a more regular basis, more closely adherent to what providers are prescribing, and also potentially doing
more movement in their life. I don’t wanna necessarily call
that straight-up exercise. It can be all kinds of
movement from small to large. And also doing various kinds of tracking of behaviors in general. And then there are also video games. For example, games out there
that are helping people with ADHD, both adults and kids, improve their lives living with ADHD. And, of course, more. So this is also a picture of what several of these look like. I just mentioned NightWare, in the purple at the bottom. That’s that Apple Watch app for how you can potentially
get woken from nightmares. But there are several that you see here. Dr. Ratzliff just talked about, you know, late-breaking
research being done to help deliver MOUD
for opioid use disorder. reSET and reSET-O actually address trying to help people who are on various opioid replacement therapies, such as buprenorphine, succeed better at staying on those treatments for longer. You see various things for diabetes management and other chronic conditions that affect lots and lots of people in the United States. Rejoyn on the top left
is one of those apps that’s just emerging that’s
helping link together two parts of our brain neurology essentially, one that’s short-term memory
versus emotion recognition, and doing exercises on your phone to actually get those two parts of your brain lighting up together, which we know tend to separate and not interact when
we’re majorly depressed. And then there are others like Wysa, which use AI to help people reach and get resources better in these arenas. Oh, and then Endeavor on the bottom right, which is one of the video
games I just mentioned. All right, so as I wrap up here, I wanna just leave you all with this: there’s huge opportunity, I think, for these digital therapeutics, because these lists are growing in terms of what the market is developing and what FDA is now starting to put forward that is worth our attention. You know, close to $7 billion is predicted just for this year to be
invested in this kind of market. There are over 350,000 of
these digital health apps at all those different
levels that I talked about, but the vast majority of
them have very few downloads in the world of apps, right? And then 3/4 of adults do
turn to their digital devices for health-related information, and, as of a few years ago (so it’s probably higher now), about 2/3 of teens had used a health app of some kind. So our kids are definitely
getting into this quite a bit as phones have proliferated
amongst the youth at kind of all household
income levels as well. But we still have a lot of
reluctance among physicians to use these technologies, particularly around AI. And most of these apps do
actually address mental and behavioral health conditions. So I think it’s worth us paying
a lot of attention to this and thinking about how we can
get our workforce really ready to embrace this part
of treatment potential. Then, on the right side, I also thought it was worth mentioning, this was an article that
recently came out in “Nature,” where they profiled over 800 of these digital health validation studies that have been done, but almost half of them had fewer than 100 enrollees. So this is very, very new science, and we really need to be looking a lot more at what the barriers to dissemination are in various ways and getting much larger samples to test more robustly. That said, there are two
meta-analyses to be aware of that have come out
recently in this last year. On the left, this was a
compilation of 143 studies of digital mental health
interventions that have been tested for depression and anxiety, and, by and large, the vast majority of those studies found high acceptability, so that’s also exciting. Patients really do find these kinds of interventions acceptable to use. The meta-analysis on
the right is 28 studies of the effects of digital
therapeutics overall, and we do see that there’s
high potential coming out of this meta-analysis for the improvement in quality of life. It seems to be growing
potentially exponentially. And there are very specific barriers and needs that have come out, which I think, you know, we’ve talked about; they’re very parallel to the same kinds of barriers we have within integrated behavioral health, in addition to some new ones with these technologies, and we really need to be thinking about what kind of science we need to innovate to reduce those barriers. The last couple of points I’ll make, and I know I’m out of time here, are just to circle us back. So, that integrated behavioral
health cross-model framework that we had, where do digital therapeutics
fit into that model? And I’ve taken the liberty to
highlight in purple a bunch of these different bullet
points across processes as well as a bullet
point across structures to show you there’s lots of places that we could see digital therapeutics potentially help improve
the processes that we have for clinics that are
working hard to maintain and improve their integrated
behavioral health. So I think that this is
just, again, potentially the one thing to walk away from this talk with: there’s a lot of exciting potential to have better reach and better access for all to the evidence-based treatments that we’ve developed over these many years. There’s also a paper, which I did not write, that actually tries to look at how we get health
apps into clinical practice. So it was really neat to
at least find a paper. I couldn’t find any others, but there was one paper that sort of lists how you would go about doing that. So, again, a place where we really need to work. So I’ll leave us with this final word: going forward, we have this giant umbrella
of behavioral health that we’re trying to
address in primary care. We need to take practice-centric
approaches to that. It’s very possible, I think,
to do that in lots of ways. And we need to really be embracing AI and tech solutions despite
how scary they are. All right, so thank you again. It’s been a real privilege
to get to be here. And I look forward to helping
with any questions folks have. – Thank you, Dr. Stephens,
and thank you, Dr. Ratzliff. And thank you for ending
on that positive note ’cause I was feeling a little
scared, too, (laughing) as you were chatting. So I don’t know how everyone else was. We have a couple of questions in the chat, and we also have a couple
of questions that came in before the webinar, so
I’ll just raise those. And but people can continue,
participants can continue to put questions in the
chat or in the Q&A box. I know it’s almost 4:30 on the East Coast, so thank you all for
joining us and hanging out. And if you can’t hang out till the end of the Q&A,
that’s totally fine. Thank you for joining
for the presentations. So two questions that came in through the chat were
actually for Dr. Ratzliff, so I’ll just start with
those so I don’t forget them. The first one was about the clinics that were in your trial and their insurance makeup, so I don’t know if you wanted to talk a little bit about that. In particular, they asked about the Medicaid population as well. – Yeah, it’s a really important question because that often contributes
to things like engagement and ability to regularly engage in care. So I put the link to our protocol paper in the Question and Answer
section, and, you know, tell me if there’s somewhere else I can put it. It is in the references, and that does include a really nice table that actually shows each of the clinics and what their payer mix was. However, like the big picture answer is it varied substantially. Three of the systems were FQHCs, six of the systems were
private, not-for-profit, and those probably had a little more of a commercial payer mix or, you know, commercial
representation in their payer mix. And then one was a publicly traded company, and their patients were all actually managed Medicare. So, it was variable. And I think it’s an important question, so those data are available if you wanna go look at them in some more detail. I guess I’ll say I didn’t
see a huge difference in some of the things
that we talked about, at least anecdotally, when I
was supporting the practices, in terms of which clinic system struggled more to get people involved. I think it was challenging in general to engage this patient population. – Thank you. And I’ll ask the follow-up
question there, too, or I guess, not a follow-up, but another question that was for you- – [Dr. Anna Ratzliff] Yeah. – which was in the Q&A, but
I’ll just read it out loud in case not everyone can see it. So, “One of the core
features of the CoCM model is that treatment is
done in episodes of care and with the goal that
once goals are achieved, this episode is completed. So, if we’re treating patients with OUD, and the goal is to maintain
sobriety, abstinence, or minimal use, does that change the model to one that doesn’t include episodes but could be continuous?” – Yeah, I think that’s
a really good question. I think, for our particular
patient population that was studied in the CHAMP
model, care was continued for six months and then stopped; at that point, you know, the patient had completed their episode of care. And, in general, because this was for co-occurring disorders, the goals of the episode of care were often getting people on a stable dose of MOUD and then having patients achieve reductions or remission of their mental health symptoms. I didn’t go into detail. There’s like so many things
I wanted to talk about, but I was trying to be
thoughtful about the time. So, in this particular case, we really did conceptualize that as how we were defining the episode of care. I think the sense is that, especially if you can get people engaged
early and get them supported and get their mental
health symptoms addressed, we were hopeful that they
would actually be able to then move to maybe more of a sustainment-type model of their care, where they would probably
be still being seen by that primary care provider in that primary care
clinic to get their MOUD. So, would still be getting treatment but might not need the intensive services of the collaborative care team throughout, you know,
the rest of their life. So really conceptualizing that
still as an episode of care. It’s a great question, though. We don’t have data on it, so, I mean, maybe another future direction that would be really important is understanding, you know, whether there are other dimensions of how we define an episode of care that matter for helping people maintain those gains that we saw in this trial. – Thank you for that answer. – [Dr. Anna Ratzliff] Yeah. – That’s really helpful
to conceptualize it. I’m gonna pause and ask Garrett, our PI on this project as well,
if he has questions ’cause I’m gonna guess he might
have some for Dr. Stephens ’cause he’s been all about the therapeutics recently. But I’ll let you ask whatever questions are brewing, Garrett, and then we can go back to
a couple other questions that we got before the webinar. – Okay, thank you, Anne. And I see we are almost out of time, so I’ll go very quickly. Dr. Stephens, I was
curious what you thought about the recent report from the Peterson Health
Technology Institute, which seemed to provide
some very encouraging news about the potential of apps, but did you? – You know, I’m not an expert on what that report says per se, so can you say a little bit
more about some of the points? – Well, just briefly, it looked at three different categories, apps that are to support
ongoing in-person therapy, versus apps that are used independently, and then just sort of wellness apps, and it generally had positive things to say about each of them. And positive returns on investment for the first two categories, so that was…
– Yes. – And actually, I think
we need to talk offline ’cause I’ve got a lotta
topics I’d love to talk with you about, but (chuckling). – I would be happy to do that. I can say generally, so I’m sorry. I need to look closely at
that and see what they say. But from my own knowledge base, I think that’s right, that assessment of especially the first two categories. I think we started out doing
a lot of digital solutions to try to advance care
that’s actually happening, with what we call sort of a human avatar, someone that you’re seeing who’s helping you do that. There was a great example of one down in Southern California. A great psychologist who was working on campus with students, and really frustrated that they were turning away all of these young people from care because they just didn’t have capacity, built a piece of software that said: let’s give these students who are coming in for mental health care homework. And they went from seeing one patient an hour to four, doing 15-minute visits, with the students using the app outside of them, and got really great results at four times the capacity to reach folks. And that was years ago. So I think we also have seen, over time, you need sort
of that human avatar, but we’re starting to
figure out a little more of this independent piece of work, and I really do wonder if
we can help push it out through primary care where
we touch most people. We could sort of have a proxy of that human avatar that
doesn’t have to be as heavy as that example I just gave you. So I think that’s right. There’s a lot of potential, but we really have to
figure out the safety issues and how to exactly, you know,
uniformly advise each other on what to recommend to patients. – Great, thank you. Annie, I’ll let you close us up here.
– Yeah, I’ll do one more question. I know we’re hanging on, but I think it’s sort of related to that. It was a question we
got before the webinar that someone had submitted: “Thinking about the implications of these devices for graduate medical education and training behavioral health professionals.” And just, you touched on it a little bit, I think, in some of the papers, but I was sort of curious
if you might wanna highlight some of the opportunities there. – Yeah, that’s a whole
other arena that’s exciting. So, you know, I think that
there are some software solutions being established that, again, for-profit companies have been building. One of them grew out of the University of Washington, from a colleague of both Dr. Ratzliff’s and mine, and it’s really trying to promote a digital AI solution for motivational interviewing training; that’s kind of where they seeded and started a lotta work, through R01 grants from the NIH, that is pretty promising. And some of that technology now, and not just from that company but, you know, across the board, with LMMs now really moving
forward in the AI world. Large language models. LLMs, I should say. I think I said LMM. Maybe I’m thinking about
M&M’S with it being lunchtime. But anyway, large language models are making this so much better and easier that there have been some pilot pieces of work done, including some folks in my department trying to see if we can put together a tool to train family physicians by having them get coached by an AI about how to talk to patients better, for example. So it would listen in on that whole private session and then give direct feedback privately to the provider: you did really well with this part, but maybe not this part so much, or you could’ve followed up on that.
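A minimal sketch of what such a coaching loop might look like, assuming a hypothetical complete() helper backed by whatever large language model is available; no real vendor API is implied, and the transcript would come from consented, privacy-protected audio:

```python
FEEDBACK_PROMPT = """You are a communication coach for family physicians.
From the visit transcript below, give private, constructive feedback:
(1) moments of effective patient communication, and
(2) missed follow-up opportunities.

Transcript:
{transcript}
"""

def coach(transcript: str, complete) -> str:
    # complete: hypothetical callable (prompt: str) -> str
    return complete(FEEDBACK_PROMPT.format(transcript=transcript))
```

So I do think those things are gonna be emerging in education, and it’s certainly aligned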
with ACGME requirements that we have within our family
medicine residency program to have our behavioral scientists actually help train family physicians in how to have that bedside manner, so to speak, with patients, for example. So lots of potential applications
in education, I think. But I would say, you know,
I know we’re out of time, but, Dr. Ratzliff, you
might have some ideas there ’cause you do a lot more
education than I do. – Yeah, I mean, I think
there’s a lot of interest. And, instead of, you know, the AI being the therapist, could it be the patient, so that we can really practice being the therapist or the treating provider? And I think there’s a
lot of interest in that, and maybe that is a really important strategy to think about for training. In many other worlds, we do simulators before we do real live human kinds of activities, like flying and things like that, and maybe we should be doing that for behavioral health, too. So lots of interest there. You could probably have a whole topic on training, which I think is actually
really, really important. I mean, Dr. Stephens talked
about that being a big part of her intervention in
the study she designed. It became a huge factor in the trial that I was talking about. And I think it’s a really important topic to be thinking about around
integrated behavioral health. – Well, thank you, Dr. Ratzliff. Thank you again, Dr. Stephens. Both of these presentations
were fantastic. I just moved to our final slide. Thank you, everyone,
for attending as well. We will post these on our website, both the slide deck and the recording, so feel free to share them and reach out. Here are our QR codes if you want them, but you can also visit our website as well. So thank you so much, and I wish you all a happy rest of your Wednesday, wherever you are. Thank you.

This June 18, 2025, webinar with Kari A. Stephens, Ph.D., and Anna Ratzliff, M.D., Ph.D., of the University of Washington explores cutting-edge strategies to meet the growing demand for behavioral health support in primary care settings. Dr. Stephens, a practicing clinical psychologist and Vice Chair of Research in Family Medicine, discusses how digital tools can help address workforce constraints and enhance care quality. Dr. Ratzliff, a psychiatrist, Professor, and Vice Chair for Faculty Development in Psychiatry and Behavioral Sciences, shares insights from the NIH HEAL-funded CHAMP (Collaborating to Heal Addiction and Mental Health in Primary Care) trial, highlighting Collaborative Care approaches to treating co-occurring mental health and opioid use disorders.

For more information, go to https://integrationacademy.ahrq.gov/video/23443.
