Dive Brief:
The American Medical Association is urging Congress to create safety guardrails for artificial intelligence chatbots in mental healthcare, as Americans increasingly turn to the technology for health information and advice.
In letters sent Wednesday to the co-chairs of three congressional caucuses focused on digital health and AI, the major physician lobby said “well-designed, purpose-built” tools could help patients who would otherwise struggle to access mental healthcare, but that the lack of safety protocols poses serious risks.
Privacy concerns, risks of emotional dependency on AI and reports that the tools could encourage self-harm signal that “immediate attention is required to ensure these tools do not inadvertently harm individuals seeking mental health support or companionship,” AMA CEO Dr. John Whyte wrote.
Dive Insight:
A growing number of Americans are turning to AI chatbots for health information. Nearly 30% of people report they’ve used the tools for advice on their physical health in the past year, while 1 in 6 said they had used AI for mental health, according to a poll published last month by health policy research group KFF.
Proponents argue the tools could help Americans navigate the nation’s complex healthcare ecosystem and find quick answers to questions — a particular challenge in behavioral healthcare, where many communities face a shortage of mental health professionals.
But other experts worry the chatbots could give misleading or inaccurate information that could lead to patient harm. There have been multiple cases in recent years of young users dying by suicide after confiding in AI chatbots. Family members have reported the AI didn’t urge the users to seek help and sometimes appeared to encourage self-harm.
Now, the AMA is calling on lawmakers to take action to address AI, in letters to the co-chairs of the Congressional Artificial Intelligence Caucus, the Congressional Digital Health Caucus and the Senate Artificial Intelligence Caucus.
Congress should plug regulatory gaps that prevent AI oversight, given that current frameworks weren’t designed for generative AI tools that can shift from casual conversation to therapeutic guidance within a single interaction, the AMA said.
For example, Congress should prohibit general-purpose chatbots from diagnosing or treating mental health conditions, while tools that do offer diagnosis or treatment should be reviewed by the Food and Drug Administration as medical devices, according to the letter.
Lawmakers should also direct the FDA to clarify which AI tools qualify as general wellness technology, and which should be subject to agency review. AI firms should be required to conduct ongoing safety and performance monitoring, and their chatbots should be able to identify suicidal ideation and risks of self-harm, according to the letter.
The AMA also noted several areas where lawmakers could add safety guardrails, including transparency requirements. For example, chatbots should clearly disclose that users are talking to an AI and explain what kind of human oversight, if any, the tool is subject to, the letter reads.
Advertising in mental health chatbots should be discouraged, and ads targeted at children should be prohibited. Additionally, lawmakers should require AI developers to implement cybersecurity safeguards so health data isn’t exposed or shared, according to the AMA.
“A single weakness in a data center can expose chatbot data and erode confidence,” the physician lobby wrote. “These risks are not hypothetical; they are a recurring feature of modern software and are magnified when chatbots are used to discuss health concerns.”
The Trump administration has largely taken a deregulatory posture toward AI in a bid to speed adoption of the technology.
However, some states have moved to regulate AI, including passing laws overseeing mental health chatbots. Last year, Illinois banned the use of AI for therapeutic decision-making, and California enacted legislation that requires chatbot developers to monitor conversations for signs of suicidal ideation, among other safeguards.