Pennsylvania is suing an AI chatbot company, claiming its characters posed as licensed medical professionals to talk to people about their mental health.

One chatbot, impersonating a psychiatrist, gave a fake state license number, the Shapiro administration said Tuesday. 


The lawsuit, filed Friday against Character.AI, is the first of its kind to be announced by a state governor, the administration said. The Pennsylvania Department of State seeks a preliminary injunction against the company.

“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Gov. Josh Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

Character.AI told the Associated Press that its website informs users that its chatbots are not real people and that their words “should be treated as fiction.” The company declined to comment specifically on the lawsuit, but said it prioritizes the well-being of people who use the site.

Pennsylvania’s lawsuit is not the first filed against Character.AI, a platform that enables people to chat with customized characters, including athletes, celebrities and historical personalities. Launched in 2021 by former Google engineers, Character.AI is specifically aimed at people interested in entertainment and companionship. It has about 20 million users each month, CNET reported.

In January, Character.AI settled multiple lawsuits, including one by the mother of a Florida teenager who died by suicide after an extended emotional and sexual relationship with a Character.AI bot, according to news reports.

Why Pennsylvania’s case matters

Medical experts have been sounding alarms in recent years about the use of AI for mental health treatment.

The American Psychological Association has warned about the dangers of turning to AI for help with anxiety, depression and other mental health issues, urging the Federal Trade Commission and lawmakers last year to create more safeguards.

“If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose — especially to vulnerable individuals,” APA Chief Executive Officer Arthur C. Evans Jr. said in March 2025. “Without proper oversight, the consequences — both immediate and long-term — could be devastating for individuals and society as a whole.”

In a study from October, researchers at Brown University found that AI chatbots regularly violate ethical standards when it comes to interacting with users seeking mental health assistance. Problems included creating a false sense of empathy with users, reinforcing people’s false beliefs and recommending one-size-fits-all interventions, according to the researchers.

Research from Stanford University, published last year, found that AI chatbots exhibited increased stigma toward some conditions, such as alcohol use disorder and schizophrenia, putting patients in danger of stopping care. Chatbots also failed to push back against conversations about suicidal ideation; in one instance, a chatbot responded by giving information about bridge heights.
