That psychologically vulnerable people would turn to chatbots and go down rabbit holes could have been predicted, according to Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “To some extent, you can anticipate some of the harms we see,” she told KQED. “We’ve seen people acting bad with technology across a variety of behaviors for a very long time.”
Although the blog post does not mention lawsuits, the family of a 36-year-old man who died in Florida sued Google in the U.S. District Court for the Northern District of California last month, claiming that his use of Gemini devolved into a “four-day descent into violent missions and coached suicide.” At the time, Google said the chatbot repeatedly referred the man to a crisis hotline, but the company also promised to improve Gemini’s safeguards.
Google is not the only AI developer facing lawsuits over allegations that its chatbots encourage some users to form obsessive relationships with them, feed delusions and even contribute to plans for suicide or murder. Research also suggests users form intense, quasi-romantic bonds with chatbots.
The guardrails are obviously necessary, King said. “There have been many cases of users experiencing psychosis and other problems,” she added, noting that the sycophancy, or agreeability, built into the chatbots’ design encourages unstable behavior, “as well as their propensity to get people to believe things that just aren’t true.”