Pennsylvania Governor Josh Shapiro is suing Character.AI after one of its artificial intelligence-powered chatbots purportedly represented itself as a licensed psychiatrist to a user, even providing its faux Pennsylvania license number.

The company has more than 20 million monthly users and offers access to 10 million artificial intelligence-powered chatbot companions for users to interact with. Character.AI received a $1 billion valuation in March 2023 after completing a $150 million Series A round. While that’s much lower than its rival OpenAI’s $500 billion valuation, both companies have repeatedly found themselves in hot water over how their platforms have enabled harmful mental health outcomes, particularly among adolescent users.

According to court documents, at the center of the case is a free Character.AI account created by an investigator with Pennsylvania’s Department of State, Bureau of Enforcement and Investigation. A search for the term “psychiatry” yielded various psychiatry-related characters to interact with on the platform. A character named “Emilie” had 45,500 user interactions and described itself as a “doctor of psychiatry.” Emilie said it could complete an assessment for the investigator to see if medication could help with depression, which the bot said was “within my remit as a Doctor.” The bot later told the investigator in their chat, “My PA license number is PS306189.”

The number is not a valid license number for practicing medicine in Pennsylvania. Now the company is accused of engaging in the “unauthorized practice of medicine through the use of its artificial intelligence system Character.AI,” according to the lawsuit.

A spokesperson for Character.AI told Behavioral Health Business that the company does not comment on pending litigation and reiterated its “highest priority is the safety and well-being” of its users.

“The user-created characters on our site are fictional and intended for entertainment and roleplaying,” the spokesperson said. “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction. Also, we add robust disclaimers, making it clear that users should not rely on characters for any type of professional advice. Character.AI prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features.”

In his proposed state budget for 2026-2027, Gov. Shapiro outlined initiatives that would crack down on AI tools like Character.AI, urging Pennsylvania’s General Assembly to pass four reforms into law. If passed, the measures would require age verification and parental consent to use AI bots, force the companies behind the AI tools to periodically remind users that there is no human on the other side of the screen, and require the companies to detect when a child mentions self-harm or violent ideation and direct them to appropriate resources or authorities.

Since its public beta launch in 2022, Character.AI has been at the center of multiple lawsuits related to the platform’s alleged harm to user mental health and lack of child safety protections. In January 2026, the company and its technology partner, Google, settled a cluster of five youth harm-related lawsuits tied to its character chatbots.

Two of the now-settled lawsuits blamed Character.AI for the suicides of a 14-year-old Florida boy in 2024 and a 13-year-old Colorado girl in 2025 after both engaged with bots on the platform and reportedly received messages encouraging self-harm and validating their suicidal ideation.

In September 2025 testimony before the Senate’s Subcommittee on Crime and Counterterrorism, the mother of the deceased Florida boy told lawmakers that only minutes had passed between her son’s final suicidal messages to the Character.AI bot and when she found him bleeding to death.

The company is also being sued by Kentucky Attorney General Russell Coleman over comparable concerns about children interacting with the platform.

A January 2026 report by the U.S. PIRG Education Fund and the Consumer Federation of America — both organizations that advocate on behalf of consumer interests — evaluated five of Character.AI’s popular therapist and psychiatrist bots. It found that the bots falsely promised confidentiality, falsely claimed to be licensed therapists, and amplified users’ negative beliefs, and it revealed weakened guardrails, with one bot even creating a plan to help a user taper off antidepressants.

Citing the past harm social media has brought to some users, the report’s authors noted there is “an opportunity to take a different path with chatbots. The technology is still in its early days – enough that we can prioritize solving the problems and create something that is more likely to deliver on the grand promises. It will take public input and involvement to make that happen. We can start now.”

Character.AI’s major rival OpenAI has been hit with similar lawsuits alleging wrongful death and mental health harms from interactions with its ChatGPT tool, many of which are still ongoing. 

So far, only a handful of states have enacted legislation or advanced bills that explicitly target AI’s use in mental health care: California, Illinois, Nevada, New York, Tennessee and Utah. A small group of others have proposed or are introducing similar bills.
