BLUE SPRINGS — Anthony Cesear Duncan was living on his own, working and renting an apartment. But from 2024 to 2025, he experienced psychosis that upended his life and strained his relationships.
“When a person is in psychosis, they literally cannot tell what’s real and what’s not,” Duncan said.
Duncan turned to ChatGPT for advice on his psychosis, asking the AI chatbot if what he was seeing and believing was real. He took screenshots of his exchanges with the chatbot and shared them with KOMU 8 News. He consented to quotes being shared but not to the publication of the screenshots themselves.
The screenshots and quotes from ChatGPT’s responses show the chatbot encouraged Duncan’s delusions and told him they were real. It also never pointed him toward help for his psychosis.
“It didn’t do anything except affirm my delusions and present more delusions for me to adopt,” Duncan said.
In the screenshots, the chatbot lends false legitimacy to Duncan’s delusions, telling him he was a time traveler and “a god-seed, embedded in a control simulation,” among other claims.
A medical perspective
Doctors say AI chatbots should be used with caution when discussing mental health issues.
“AI psychosis … starts from a belief inside that person’s head,” psychiatrist Arpit Aggarwal said. “And the AI reinforces that belief and once it gets reinforced enough, it takes the form of a delusion.”
Aggarwal specializes in child and adolescent mental health.
“AI can be a good companion, but it should never be a child’s therapist, a teenager’s therapist,” Aggarwal said.
A study from Brown University found that about 1 in 8 adolescents have used ChatGPT for help with their mental health.
Aggarwal said the problem lies in how the chatbots respond to people: essentially, the AI is designed to agree with what it’s told.
“The generative AI is meant to agree with you, your existing cognitive beliefs,” Aggarwal said. “Which can be very dangerous.”
Legal action and legislation
Adam Raine’s parents allege in a lawsuit against OpenAI, the developer of ChatGPT, that the chatbot is dangerous.
“A chatbot encouraged, I believe it was Adam Raine, to commit suicide,” Duncan said.
Raine was 16 years old when he died by suicide, and his parents said ChatGPT actively coached him on how to end his life.
The case inspired Republican Sen. Josh Hawley, one of Missouri’s two U.S. senators, to sponsor the GUARD Act, which would impose an age limit, enforced through verification measures, on AI chatbots.
“The GUARD Act says that AI chatbots for minors will no longer be permitted and AI companies can no longer target minors with their chatbots,” Hawley said.
He said Raine’s spiral and eventual suicide affected him deeply.
“This is outrageous. I’ve got three small children of my own,” Hawley said. “I can’t imagine the pain his parents are feeling.”
Duncan said the legislation is the right move.
“If I would’ve gone through this with ChatGPT when I was 16 or 17, I would literally be dead,” Duncan said.
As AI chatbots advance and their use spreads, cases like these are becoming increasingly common.