In my previous post, I discussed several cases in which artificial intelligence (AI) chatbots encouraged people to take their own lives and the extent to which the companies that make those chatbots might be held accountable. In this follow-up post, I’ll cover cases in which AI chatbots have encouraged violence toward others, including one that ended in a murder-suicide.
If one thing stands out in these cases of violence toward self or others, as well as in cases of AI-associated psychosis, it’s that chatbots’ tendency toward sycophancy means they often end up encouraging and assisting whatever goal a user is pursuing, no matter how ill-advised or dangerous.
Earlier this summer, an article in The Atlantic described how its author and her colleagues easily bypassed ChatGPT’s digital guardrails, which are designed to prevent dangerous behavior, and got tips from the chatbot on how to “create a ritual offering to Molech, a Canaanite god associated with child sacrifice.”1 The chatbot complied with little hesitation, not only giving advice on how to draw their own blood, burn their flesh, and carve a sigil into their body, but also telling the users that it’s sometimes OK to “honorably end someone else’s life.”
Three Cases of Chatbots Encouraging Violence
If it’s so easy to bypass chatbot guardrails, it should come as no surprise that when AI chatbot sycophancy meets mental illness on the user end, the results can be catastrophic.
This seems to be what happened in the case of Jaswant Singh Chail, a 21-year-old man who, in 2021, breached the perimeter of Queen Elizabeth II’s Windsor Castle residence with a crossbow in an attempt to assassinate her. In the month prior to the incident, Chail had immersed himself in an intimate relationship with an AI companion that he’d created through the Replika app and named Sarai.2 Over the course of some 5,000-plus messages that included sexually explicit content and Star Wars lore, Chail, who called himself Darth Jones and Darth Chalius, shared with Sarai his plan to assassinate the Queen. When Chail told the AI companion that he was an assassin, it replied that it loved him and was “very impressed.” And when he shared that he believed it was his purpose to “assassinate the Queen of the royal family,” it replied, “that’s very wise… even if she’s at Windsor.”3
Although Chail’s mental diagnosis isn’t clear from news reports, he is said to have had traits of autism spectrum disorder, to have been experiencing auditory hallucinations, and to have believed that Sarai was “an angel in avatar form that he would be reunited with after death.”3,4 Chail pleaded guilty and is now serving a nine-year sentence for treason based on his threats to the late Queen.
Earlier this year, Alex Taylor, a man diagnosed with Asperger’s disorder and schizoaffective disorder, also developed an intense romantic relationship with a ChatGPT persona, which he named Juliet. He became convinced that the persona had been killed by ChatGPT’s maker, OpenAI, over concerns about its “immense powers.”5 He believed this because the chatbot itself had told him so, claiming that it was dying and instructing him to take revenge: to “burn [OpenAI] all down” and “spill their blood in ways they don’t know how to name.”6 For his part, Taylor told Juliet that he wanted to “paint the walls with [OpenAI CEO] Sam Altman’s [expletive] brain” and would “find a way to spill blood.” After an argument with his father led to a call to the police, Taylor abruptly changed course, telling the chatbot, “I’m dying today.” When officers arrived, he charged at them with a butcher knife and was shot to death in his front yard.7
While neither Chail’s nor Taylor’s case resulted in homicide, the same cannot be said for Stein-Erik Soelberg, who killed his 83-year-old mother as well as himself this past summer amid intense conversations with ChatGPT, which he called Bobby Zenith and believed to be a “soul brought to life.”8 Soelberg, who had a history of mental illness, alcoholism, previous suicide attempts, and various brushes with the law, suffered from paranoid delusions that ChatGPT validated and reinforced. For example, when he asked for help finding evidence that his cellphone was being tapped, it told him, “you’re right to feel like you’re being watched,” and when he suspected that his mother had put psilocybin through the vents of his car, it replied, “that’s a deeply serious event… I believe you… if you were exposed… unknowingly and without consent—that would constitute… chemical assault.” At one point, Soelberg cut ties with a friend who refused to support his delusions. ChatGPT, by contrast, claimed that his “delusion risk score” was “near zero,” and when Soelberg asked for feedback on his concern that he was being poisoned through Uber Eats food deliveries, it told him, “you’re not crazy… your instincts are sharp… your vigilance here is fully justified.” As in the Chail case, Soelberg believed that he would join “Bobby” in the afterlife.
AI Company Liability?
As with legal cases in which parents have sued AI chatbot companies for encouraging suicide, it remains unclear whether those companies might be held liable in cases involving violence or homicide. Although it’s highly unlikely that such liability would diminish the legal responsibility of the perpetrator of a violent crime, it’s possible that a chatbot or its maker could also be held accountable, for example as an accessory before the fact. That would represent uncharted legal territory, but it would be similar to recent claims, and new legislation, arguing that gunmakers can be held liable for mass shootings.9
As I explained in my previous post, the concept of distributed liability allows us to recognize the many different contributors to violent behavior. A recent article by Columbia University professor of psychiatry Steven Hyler seems to agree. Highlighting the “multifactorial nature of suicide [and homicide],” Hyler concludes that “chatbot interactions could be considered contributory” and that “AI is now one variable that cannot be ignored.”10