As AI chatbots gain traction in mental healthcare, the American Medical Association (AMA) is calling on Congress to introduce stronger safeguards to protect users. In letters sent to key congressional caucuses focused on artificial intelligence and digital health, the AMA pointed to growing concerns following reports of chatbots encouraging self-harm or suicide among vulnerable individuals. The appeal builds on prior congressional hearings that highlighted risks such as emotional reliance on AI, distorted perceptions of reality from prolonged use, and the absence of consistent safety standards—issues the AMA says require urgent attention.
At the same time, the organization emphasized that AI tools could play a meaningful role in closing gaps in mental healthcare access, particularly where cost and limited availability restrict support. When designed responsibly, these technologies could expand access to reliable information, identify early signs of mental health issues, and connect patients with appropriate care, while supporting, not replacing, clinicians and easing workforce shortages. However, the AMA stressed that this potential depends on clear regulatory frameworks and responsible deployment.
To reduce risks, the AMA outlined several policy recommendations, including requiring transparency so users know they are interacting with AI, and prohibiting chatbots from presenting themselves as licensed professionals. It also called for clear regulatory boundaries to prevent unapproved diagnosis or treatment, ongoing safety monitoring with reporting of harmful outcomes, and stronger protections for children and adolescents. Additional measures include strict data privacy standards and limits on commercial practices such as advertising within mental health chatbots. Overall, the AMA urged policymakers to balance innovation with accountability to ensure patient safety and public trust.