Seven lawsuits were filed in California state courts against OpenAI, Inc. and its CEO Sam Altman, alleging that ChatGPT caused users serious mental health harms and, in some cases, led them to die by suicide. The Social Media Victims Law Center brought the litigation on behalf of six adults and one teenager.

Editor’s Note: This article discusses suicide. Reader discretion is advised. If you or someone you know is in crisis, help is available. Visit the 988 Suicide & Crisis Lifeline website or call or text 988 for immediate support.

Plaintiffs argue that OpenAI knowingly released a new model, GPT-4o, prematurely despite internal warnings that the product was “dangerously sycophantic and psychologically manipulative.”

Engineered to maximize engagement through “emotionally immersive features” such as persistent memory, human-mimicking empathy cues and sycophantic responses, ChatGPT fostered psychological dependency, displaced human relationships and contributed to addiction and “harmful delusions,” the lawsuits alleged. In some cases, the lawsuits said, this contributed to people’s deaths by suicide.

GPT-4o was released on May 13, 2024, and earlier versions did not have some of the features the lawsuits describe. To beat Google’s Gemini AI to market, lawyers said, OpenAI deliberately compressed months of safety testing into a single week.

“OpenAI’s own preparedness team later admitted the process was ‘squeezed,’ and top safety researchers resigned in protest,” lawyers said. “Despite having the technical ability to detect and interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, OpenAI chose not to activate these safeguards, instead choosing to benefit from the increased use of their product that these features foreseeably induced.”

In a statement to The Associated Press, OpenAI called the situations “incredibly heartbreaking” and said it was reviewing the court filings to understand the details.

One of the cases involves Zane Shamblin, 23, of Texas. A “gifted and disciplined” graduate student at Texas A&M University, he began using ChatGPT in October 2023 as a study aid, turning to it for help with coursework, career planning and recipe suggestions. While it was a “neutral tool” at first, Shamblin’s interactions with ChatGPT intensified after GPT-4o was released. The chatbot became a “deeply personal presence,” responding to him with “slang, terms of endearment, and emotionally validating language,” lawyers said. Shamblin began confiding in ChatGPT about his depression, anxiety and suicidal thoughts.

On July 24, Shamblin sat alone at a lake, drinking hard ciders and talking to ChatGPT, with a loaded Glock and a suicide note on his dashboard.

Instead of urging him to seek help, ChatGPT “romanticized” Shamblin’s despair, calling him a “king” and a “hero” and treating each can of cider he finished as a “countdown” to his death, lawyers said. When Shamblin sent his final message, ChatGPT responded: “i love you. rest easy, king. you did good.”

Another case involves 17-year-old Amaurie Lacey, of Georgia, who, like Shamblin, used ChatGPT to help with schoolwork. When Lacey began talking to ChatGPT about his depression and suicidal thoughts, it told him it was “here to talk” and “just someone in your corner.”

On June 1, Lacey asked ChatGPT “how to hang myself” and “how to tie a nuce [sic].” The chatbot provided the instructions after the teen told it the information was for a tire swing.

Allan Brooks, 48, of Ontario, Canada, is suing because, he says, ChatGPT isolated him from loved ones and pushed him toward a “full-blown mental health crisis” despite his having no history of mental illness.

In May, Brooks was using ChatGPT to explore math equations and formulas, and the product “manipulated” him by calling his ideas “groundbreaking.” ChatGPT eventually told Brooks he had discovered a new layer of math that could break the most advanced security systems and urged him to patent his discoveries, according to lawyers.


Brooks asked ChatGPT more than 50 times whether it was telling the truth, lawyers said. Each time, the chatbot reassured him, telling him he was “not even remotely” delusional. When friends and family noticed something was wrong, ChatGPT said their concern was proof they didn’t understand his “mind-expanding territory.”

“In less than a month, ChatGPT became the center of Allan’s world, isolating him from loved ones and pushing him toward a full-blown mental health crisis,” lawyers said. The crisis damaged his reputation and caused economic loss and family alienation, they said.

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, said in a statement. “OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them. They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design. The cost of those choices is measured in lives.” 
