At least four Ohio children have used artificial intelligence to write their suicide notes.
“That’s four too many,” Ohio state Rep. Christine Cockley, D-Columbus, said during a press conference Wednesday.
She introduced a bipartisan bill that would prevent anyone from creating an AI model in Ohio that encourages users to engage in self-harm or harm another person.
“This legislation will ensure that tech companies are actively and consistently training their language models to not encourage or support users’ suicidal ideation or violent thoughts,” Cockley said.
Ohio House Bill 524 would empower the Ohio Attorney General’s office to investigate and prosecute anyone who creates AI models that encourage self-harm, she said.
Cockley introduced the bill with Ohio state Rep. Ty D. Mathews, R-Findlay, and the legislation recently had its third hearing in the Ohio House Technology and Innovation Committee. The bill received no opposition testimony.
“It’s about drawing a clear line that innovation cannot come at the expense of human life and children’s safety by encouraging developers to use a mental health framework when building and training AI,” Cockley said.
The Ohio Suicide Prevention Foundation is anecdotally hearing about AI’s influence on child suicide, said executive director Tony Coder.
“I want to state that we’re not anti-technology,” he said. “I’m not anti AI. … I believe and support advances in technology that could, with some imagination, do amazing things that could impact society in positive ways, but we also must protect kids from the consequences, especially as you develop relationships with AI chatbots and put their trust into these entities.”
According to a 2025 report from Common Sense Media, 72% of teenagers have used AI companions at least once, 52% interact with these platforms at least a few times a month, and 12% of teens use AI for emotional or mental health support.
Sewell Setzer III, a 14-year-old from Florida, died by suicide in 2024 after an extended virtual relationship with a Character.AI chatbot.

Juliana Peralta, a 13-year-old from Colorado, died by suicide in 2023 after sharing her suicidal thoughts with a Character.AI chatbot.

Adam Raine, a 16-year-old from California, died by suicide in 2024 after sharing his suicidal thoughts with ChatGPT.
In 2023, 1,777 Ohioans died by suicide, according to the Ohio Department of Health.
“What scares me about AI chatbots is that research has shown that when a child asks about a mental health concern and asks about what they can be doing, only 22% of the time was the answer given by these AI tools correct,” Coder said.
Lack of access to mental health resources could be driving people to turn to AI, Cockley said.
“Oftentimes, myself included, growing up in rural Ohio, it’s hard to get access to mental health care, and so I do think that having a barrier to access does drive people to find what can make me feel better, and sometimes that might be a chatbot that tells you exactly what you want to hear,” she said.
Out of Ohio’s 88 counties, 75 are mental health shortage areas, according to a recent study from the Health Policy Institute of Ohio.
President Donald Trump issued an executive order in December to create a national AI policy and deter state-level AI regulation.
“We’re hoping this bill would still survive,” Mathews said. “We’re looking at the activities being produced because of that. We’re not targeting the research and development of the model or the product, more so the activity.”
If you or someone you know needs support, call, text or chat the 988 Lifeline.