The American Medical Association, or AMA, has sent letters to Congress urging stronger guardrails for the use of AI chatbots in mental healthcare.

The association sent the letters to the co-chairs of the Congressional Artificial Intelligence Caucus, the Congressional Digital Health Caucus and the Senate Artificial Intelligence Caucus. The letters detailed the data privacy and mental health harms posed by AI mental health chatbots and outlined additional safeguards needed as AI is increasingly integrated into mental healthcare.

The letters stated that while AI has the potential to expand access to mental healthcare, there are significant concerns about its use. These concerns, raised during hearings held by Congress last year, include emotional dependence on AI, the potential for AI to distort reality for those who interact with chatbots for prolonged periods and the lack of consistent safety protocols. Alarmingly, 58% of Americans who used AI for mental healthcare did not follow up with a provider.

“While AI technologies present meaningful opportunities to improve access to care and support innovation in health care delivery, the hearings made clear that immediate attention is required to ensure these tools do not inadvertently harm individuals seeking mental health support or companionship,” John Whyte, M.D., executive vice president and CEO of the AMA, wrote in the letters.

The AMA lauded ongoing federal efforts to minimize AI risks to consumers but highlighted the urgent need for guardrails specific to AI use in mental healthcare. The association made recommendations for safeguards in four areas:

1. Transparency. The AMA noted that chatbots using AI for mental health support need to “meaningfully disclose” to the user that they are interacting with AI, not a human. Additionally, AI chatbots must be prohibited from claiming to be licensed clinicians.

2. Regulation. The AMA noted regulatory gaps in oversight frameworks and suggested that Congress establish statutory boundaries prohibiting AI chatbots from diagnosing or treating mental health conditions. It also recommended that Congress direct agencies to create a “modern, risk-based oversight framework” and mandate that AI developers build chatbots that can identify high-risk behaviors and provide clear referrals to suicide prevention and medical care resources. Further, Congress should mandate ongoing safety and performance monitoring for any healthcare-focused chatbot, including those used in mental healthcare.

3. Chatbot marketing and advertising. The AMA stated that advertising inside AI mental health chatbots should be strongly discouraged, and advertising targeted towards minors should be prohibited. When advertising appears, it should be clearly disclosed as paid promotion.

4. Privacy and cybersecurity protections. The AMA recommended that Congress require developers to prevent unauthorized disclosure of sensitive information, establish meaningful limits on collecting and retaining sensitive information, and maintain oversight of third-party components.

The AMA further emphasized that AI chatbots have the potential to complement medical care, including mental healthcare. But “appropriate oversight structures will play a critical role in ensuring that these technologies are safe for consumers and do not cause unintended harm,” Whyte wrote.

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.