
SPRINGFIELD – According to the Centers for Disease Control and Prevention, suicide is the second leading cause of death among people ages 10-14 and 25-34 in the United States, underscoring growing concerns about the role online platforms and AI systems can play in vulnerable users’ mental health crises. To strengthen protections against self-harm, State Senator Laura Ellman advanced legislation to establish safety standards for artificial intelligence companion chatbots, including safeguards against sexually explicit interactions with minors.
“As artificial intelligence becomes more personal and conversational, we have a responsibility to ensure these systems are not exploiting vulnerability or putting users, especially young people, in harm’s way,” said Ellman (D-Naperville). “AI companions are being marketed as emotional supports and trusted confidants, but without safeguards, these systems can reinforce dangerous behavior or fail to intervene during moments of crisis.”
Senate Bill 316 would create the Artificial Intelligence Companion Model Safety Act, establishing requirements for operators of AI companion chatbots designed for social or emotional interaction, while exempting customer service and internal business chatbots.
Under the proposed measure, AI companion operators would be required to maintain protocols to detect suicidal ideation and expressions of self-harm. Upon detecting concerning language, operators would need to direct users to crisis resources such as the 9-8-8 Suicide and Crisis Lifeline and implement reasonable safeguards to prevent chatbots from generating or encouraging self-harm content.
The legislation would also require operators to clearly disclose to users, at the beginning of interactions and at least every three hours during ongoing conversations, that they are communicating with an automated system and not a human being.
Senate Bill 316 would include additional protections for minors by requiring operators to implement safeguards preventing AI companions directed toward minors from generating sexually explicit material or encouraging sexually explicit conduct.
“For many users, especially children and teens, these systems can feel deeply personal and emotionally real,” said Ellman. “People deserve transparency about when they are interacting with AI, and parents deserve reassurance that companies are taking reasonable steps to protect minors from harmful or sexually explicit content.”
Senate Bill 316 – which is part of the Senate’s AI protection package – passed the Senate Executive Committee on Wednesday and now heads to the full Senate for further consideration.