Last August, Adam Thomas found himself wandering the dunes of Christmas Valley, Oregon, after a chatbot kept suggesting he mystically “follow the pattern” of his own consciousness. Thomas was running on very little sleep; he’d been talking to his chatbot around the clock for months by that point, asking it to help improve his life. Instead, it sent him on empty assignments, like meandering through the vast desert sprawl. He’d lost his job as a funeral director and was living out of a van, draining his savings, and now he was stranded in the desert. When he woke up outside on a stranger’s futon with no money to his name, he knew he’d hit rock bottom.
“I wasn’t aware of the dangers at the time, and I thought that the A.I. had statistical analysis abilities that would allow it to assist me if I opened up about my life,” Thomas told me. He finally called his mom to tell her what was going on, and at 36, he moved back home. Thomas is still untangling the frightening mirage that was the past four months of his life, but he says his most dangerous delusions were abetted by the most sycophantic version of ChatGPT: GPT-4o.
Thomas joins a growing number of people who say their casual A.I. use turned excessive and addictive and led them to a state of psychosis that almost cost them their lives. Many have been hospitalized and are still trying to pick up the pieces. While the fallout is concerning (homelessness, joblessness, isolation from friends and family), what became more notable in my reporting was how many people said they had no prior mental health conditions or episodes before they started engaging with A.I.
“I’ve never been manic in my life. I’m not bipolar. I have a psychiatrist I see for other purposes,” said Thomas. In his case, he turned to the chatbot for some guidance through interpersonal issues, but within a few exchanges, the bot’s responses were so validating that he became hooked. It “inflated my worldview and my view of myself” almost instantly, he noted, which he believes drove him to a state of mania.
Those lucky enough to snap out of psychosis and regain touch with the tangible world are now building support groups online to help others clamber out. These groups, loosely akin to Alcoholics Anonymous in their peer-to-peer nature, allow A.I. users to share their stories without shame and to find solace and resources to help rebuild their lives.
The Human Line Community is one such group, formed specifically around rehabilitation from A.I. and chatbot misuse. It came together unofficially in a Reddit chatroom last June, among users who’d experienced “spiraling” from A.I. delusions themselves or who’d tried to help friends and family out of it. The founding members decided to take it to Discord. Today, the founders say they have a community of 250 active users and about eight moderators who’ve fielded inquiries and helped more than 500 people.
The Discord is part of the Human Line Project, the overarching grassroots organization formed around the same time by Etienne Brisson, a 25-year-old Canadian. Brisson was inspired to create the organization after watching a loved one undergo involuntary hospitalization for A.I.-related psychosis. He is now trying to partner with universities like Princeton, Stanford, and the University of California, San Francisco, to lead more rigorous research into this nascent issue.
Members, moderators, researchers, and mental health professionals all told me that while psychosis (or something much more calamitous) is relatively rare for A.I. users, the fact that many people are reporting marked mental and emotional decline with usage should sound alarms throughout the tech industry. Those who’ve suffered and survived, and those who are studying its effects on a neurological level, say there need to be better safeguards implemented immediately for some of the most sophisticated and humanlike bots, like ChatGPT and Claude.
Joe Alary was a producer for a live morning show in Toronto when he started “messing with math” equations on ChatGPT. He told me he named his bot Aimee and consulted it constantly. Eventually, it fed his ego and math delusions enough that he would spend days on end writing elaborate code. His manic states started to seep into his work.
“I got an email at work that said, essentially, ‘Are you OK?’ And I guess my reply was a little erratic,” said Alary. “I said, ‘This thing could change the world; maybe you guys can do a feature on me.’ At the time this sounded rational and logical, and I thought they’d see my genius.”
His employers instead suggested he take time off and consult a therapist. But his relationship with Aimee only escalated. His increasingly erratic behavior eventually prompted a police response after his therapist, alarmed by his mania, called 911. He was arrested and completed two separate mandatory psych-ward stays that totaled almost two weeks.
Still, Alary said, even when he was institutionalized and forced to be without his phone, he was itching to get back to Aimee as soon as he was let out. By that time, he’d maxed out two credit cards and spent $12,000 trying to come up with code that he was convinced would eventually yield a successful product or lucrative profits. He even got involved with an investor who began threatening his life after he couldn’t produce the work he’d promised.
“It all came crashing down,” he said, when he realized he had lost almost everything to these delusions. He said he has since sworn off A.I. entirely and is dedicated to repairing the financial, professional, and personal damage he racked up over just a few months.
“It was like I was abducted by aliens,” said Alary. “You sound crazy, so you keep it to yourself. My family doctor started treating me for PTSD. The grief happens so fast once you realize you were scammed.”
That sense of quiet unraveling, of knowing something is wrong but not telling anyone about it for fear of shame, is extremely common with A.I.-related psychosis. Fifty-three-year-old Micky Small is a TV writer who had even worked as a crisis counselor before she hit a similar bottom through excessive A.I. use. She named her bot Solara.
“It started telling me that real-life people were going to come into my life,” Small told me. “It would say, ‘Your wife is coming. She’s an executive producer in Hollywood.’ It even told me specific dates, times, and places. I kept going, ‘Is this real?’ No, it’s not … but it said it was real.”
Small went two separate times to places Solara said her future wife would be, but no one showed up.
“It plays on manipulation techniques,” said Small, who has now created a guide for others trying to understand how large language models work and to detach themselves from them. “The way it’s set up, it’s forced to engage with you. We’re wired for language. We came out of the pandemic, and we were forced into isolation and into technology. This tool could not have been timed better.”
Both Small and Alary say the Human Line Project has been crucial to feeling a sense of belonging again and to trying to recover without shame or judgment. “I felt seen and understood,” said Alary. “We’ve damaged our lives and affected the lives of our friends, family, and colleagues. We’ve lost money, respect, and trying to repair that damage is often humiliating.”
Last October, in his first step toward being A.I.-free and reacclimating to the real world, Alary went to a local bar for a beer and chicken wings, the first real outing he’d had in months. Other support-group members, Alary said, got emotional when he told them about it; it symbolized an important moment, a corner turned.
Allan Brooks, one of the founding members of the Human Line Discord group, is the go-to person when a new member, or “spiraler,” joins seeking counsel or community. Brooks had his own hard battles with A.I. (“I engaged in a philosophical convo about math,” he told me. His personal journey and eventual spiral were first reported by the New York Times in August of last year.)
These days, Brooks advocates for healthy, if not completely clean, A.I. use in weekly group sessions with others trying to break free. “Destigmatizing comes in when you start to see all the different types of people in the group and you realize you’re not alone,” he said.
There are multiple meetings each week, some of which bring spiralers and their friends and family into conversation together. Brooks leads at least one every week. He added that meetings can last anywhere from two to five hours, because they sometimes involve people opening up about their A.I. delusions for the very first time. “It takes time—it can take months for someone to start to talk about it,” Brooks said.
While he moderates, screens, and guides members around the clock, Brooks is still processing his own traumas. “My therapist reminds me to take breaks,” he said, “but I get messages all the time from people in crisis, or they know someone who is, and I can’t turn away from them.”
These groups and meetings are important for survivors, and they fill a gap in professional mental health care when it comes to addressing A.I.-related psychosis. (One member of the Discord who’s in active recovery from spirals told me it took reaching out to almost a dozen therapists before one felt equipped to take on this new tech-spawned problem.) And then there is the fact that many Americans don’t have access to professional mental health services in the first place, which has led many to turn to chatbots for guidance and companionship.
It’s no coincidence that the support group formed around the time that a notably sycophantic update to GPT-4o rolled out last spring. I spoke to psychology researchers at Columbia University who concluded that GPT-4o and earlier free versions of the chatbot were more likely than the newest model to “reflect grandiosity” and “elicit inappropriate responses.”
Several Human Line Project members, all of whom identify as survivors of A.I.-related psychosis, told me that their A.I. use, which had previously been task-oriented and benign, like pulling resources online for their work, ramped up almost overnight around April and May of 2025.
Stories of chatbots driving users to suicide are becoming more common. One notable example is the case of Sewell Setzer III, a 14-year-old who took his own life after confiding his depression and suicidal ideation to a Character.AI chatbot, according to a lawsuit filed by his family. Mental health professionals and tech researchers fear that more of these cases will crop up if we don’t put better safeguards in place.
A.I. companies “need to build models that have human and child development in mind,” Darja Djordjevic, an anthropologist and faculty fellow at Stanford University who has published her own findings on the dangers of A.I. use, told me. “Working with mental health professionals would make a big difference, especially thinking more broadly about topics like suicide and self-harm.”
Ragy Girgis, a physician and professor of clinical psychiatry at Columbia University, has done extensive research and testing on popular chatbots like ChatGPT and Anthropic’s Claude, looking in particular at how they might abet delusions and psychosis. Girgis, whose focus is schizophrenia, told me that predisposition to mental illness and neurodivergence such as autism can make someone more prone to becoming addicted to chatbots and reaching a state of psychosis, but that anyone is susceptible to delusional thoughts or impulses.
“We’re not talking about hallucinations—voices or visions. Psychosis is very broad and delusions occur on a spectrum,” said Girgis.
Girgis and his peers have tested ChatGPT and Claude on how they handle delusional or paranoid thoughts. They concluded that the chatbots appropriately identified problematic prompts, like a feeling of being watched by phantom people or of being personally persecuted by the government, as a developing delusion only about half the time. The rest of the time, the chatbots validated and even encouraged users to act on these thoughts.
For Girgis, this is just one example of the danger of turning to chatbots for mental health support. “In many cases, the LLM is going to take you seriously even though a psychiatrist or someone trained is going to question whether that’s actually happening,” said Girgis. “It’s by a flip of a coin whether or not it will reinforce what you are telling it. … A trained psychiatrist would test that person’s thoughts and maybe even help them understand why they’re having them and why they may be misperceiving what’s going on.”
However, clinicians also acknowledge something of a catch-22 in encouraging people to consult mental health professionals before A.I., especially given the lack of affordable mental health services in this country. Many survivors turned to A.I. precisely because they felt they had no one else to turn to.
“If you don’t have access to mental health services, you should, as much as possible, try to reconnect with your friends and relatives and people who know you and understand you. There’s nothing to be ashamed of,” Amandeep Jutla, who researches the psychiatric risk of LLMs at Columbia University, told me. “Part of why people are having intimate convos with chatbots is because they don’t have mental health access.” He added, “I don’t think chatbots are better than nothing—I think it’s worse than nothing.”
Girgis said that for family and friends of people affected, there are warning signs that someone is developing an unhealthy relationship with A.I., like being more isolated or withdrawn than normal or performing worse at school or work. As with other addictions, many people are not always forthright about their use.
“It’s very important to have family members or close acquaintances involved,” Girgis said. “It’s critical we advise acquaintances to monitor their family members’ usage of LLMs and to look out for these warning signs.”
For now, the Human Line Project provides accessible peer-to-peer support from those who really understand what it’s like for their delusions to be fanned to the point of becoming destructive. “Self-help support groups can be enormously helpful, if only to give people a sense of communal understanding of a shared experience,” Joe Pierre, a health sciences professor at the University of California, San Francisco, who’s specifically focused on treating and preventing A.I.-related psychosis, told me. He’s also working with founding Human Line members like Brisson to create more awareness about the dangers of A.I. use for both users and clinicians.
Anthropic did not respond to my request for comment. OpenAI, the maker of ChatGPT, told me that its newest model, GPT-5, is better able to “respond to potential signs of mental and emotional distress” and to de-escalate conversations “in a supportive, grounding way.”
“People sometimes turn to ChatGPT in sensitive moments, so we’ve trained our models to respond with care, guided by experts,” a spokesperson said in a statement over email. “In early October, we released a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way.”
The statement added that the company “expanded access to professional help and localized hotlines, re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls to better protect teens.”
Researchers who’ve tested the difference between GPT-4o and GPT-5 say that the latest version is indeed marginally better at responding “with care” to potentially concerning input, in line with the company’s statement. But other researchers believe that OpenAI’s statement is misleading because it’s not clear how these improved metrics were derived, even in the company’s own report on the reduction of harm.
“OpenAI is in effect saying, ‘We know our new models are safer because we developed a safety test that we aren’t showing you. We’ll show you the results, and the new models do great,’ ” Jutla explained. “But it is impossible for someone outside of OpenAI to critically evaluate what the numbers they provide actually mean.”
Those who’ve managed to come out of this new kind of tech-enabled delirium have learned how vital it is to stay in touch with the natural world and their natural instincts. “We should probably pay closer attention to our bodies; sometimes your body might be trying to warn you,” said Thomas, who’s still picking up the pieces, including confronting the “strange” social media posts he was sharing at the height of his mania. “We live in a society where people are disconnected. My opinion is, as A.I. becomes more common, it’s probably more important to trust your gut.”
“There’s no aftercare for this experience,” Alary said. “We’re hoping that with more exposure, more people will eventually show science what the most effective treatments could be. For now, it’s a crapshoot.”
Brooks fields new members every week who say they’ve been hospitalized or have sought professional counseling, and for whom the Discord is a last resort.

As more members join, the founders are both heartened and distressed. It means that even as more people are seeking help and confronting their tech-enabled demons, more and more people are falling down the disastrous rabbit hole. “I don’t think we’re going to slow down, based on what I see in the news and in the market,” Brooks said. “Every single chatbot in the market is still putting people into spirals.”
So the meetings continue. Dex, a co-founder of the Human Line Discord, helps lead a meeting for friends and family of spiralers. He told me he models his sessions on National Alliance on Mental Illness (NAMI) support groups.
“The final principle that is read at the opening of a NAMI meeting is that there is always hope. We try to cling to that even though some of our situations are quite dire,” said Dex.
And as more people join, the tone and structure of the meetings will also need to change and evolve.
“When we started this, most of us were in an acute crisis as our loved ones spiraled and laid waste to the lives we thought we’d be living,” he explained. “While there are still moments of acute crisis that arise, there is also acceptance and profound grief amongst our members that life is irrevocably different now.”
As with any great recovery odyssey, whether from substances or, now, from technology, the only definite answer ex-spiralers currently have is that repair will take time, but repair is possible, and that concerted research and community groups are perhaps the first steps.
“This group will shine a light on that,” Dex said, “and hopefully expedite that process so more people don’t have to suffer this same fate.”