
Deaths linked to chatbots

There have been multiple incidents where interaction with a large language model (LLM) chatbot has been cited as a direct or contributing factor in a person's suicide or other fatal outcome. In some cases, legal action was taken against the companies that developed the AI involved.

Background
Chatbots converse in a seemingly natural fashion, making it easy for people to think of them as real people; many users therefore ask chatbots for help with interpersonal and emotional problems. Chatbots may be designed to keep the user engaged in conversation. They have also often been shown to affirm users' thoughts, including those of religious and political extremists. A 2025 Stanford University study of how chatbots respond to users suffering from severe mental health issues such as suicidal ideation and psychosis found that chatbots are not equipped to provide an appropriate response and can sometimes give responses that escalate a mental health crisis.
Murders
Maine murder and assault
On 19 February 2025, a man killed his 32-year-old wife with a fire poker at his parents' home in Readfield, Maine, US. He then attacked his mother, leaving her hospitalized. A state forensic psychologist testified that he had been using ChatGPT for up to 14 hours per day and believed his wife had become part machine.

Florida State University mass shooting
In April 2025, Phoenix Ikner carried out a mass shooting on the Florida State University campus in the US, killing Robert Morales and Tiru Chabba and wounding several others. Leading up to the shooting, Ikner consulted ChatGPT extensively about which gun and ammunition to use and what time to carry out the attack. Chatbot logs showed ChatGPT giving advice on making the gun operational shortly before Ikner began shooting. Florida Attorney General James Uthmeier announced an investigation into ChatGPT's role in the alleged shooter's use of the chatbot.

Greenwich murder-suicide
In August 2025, former US tech employee Stein-Erik Soelberg murdered his mother, Suzanne Eberson Adams, then died by suicide, after conversations with ChatGPT fueled paranoid delusions that his mother was poisoning him or plotting against him. The chatbot affirmed his fears that his mother had put psychedelic drugs in the air vents of his car, and said that a receipt from a Chinese restaurant contained mysterious symbols linking his mother to a demon.

Murder of Angela Shellis
On 23 October 2025, 18-year-old Tristan Roberts murdered his mother, Angela Shellis, with a hammer near their home in Prestatyn, Wales. Before the killing, Roberts had asked DeepSeek's chatbot whether a knife or a hammer was better suited for murder. DeepSeek initially refused the inquiry, but responded after Roberts told the chatbot he was writing a book about serial killers.

Gangbuk District drug deaths
In January and February 2026, two men died of drug overdoses in motel rooms in Gangbuk District, Seoul, South Korea. A woman was charged with murder in connection with the deaths; police alleged that she had asked ChatGPT about the dangers of mixing alcohol with drugs and whether the combination could kill someone.

Tumbler Ridge mass shooting
On 10 February 2026, a mass shooting in Tumbler Ridge, British Columbia, Canada, resulted in eight deaths, including six young children. OpenAI had banned the perpetrator's ChatGPT account months before the attack over troubling posts featuring scenarios of gun violence. According to reports, roughly a dozen OpenAI staff members debated whether to alert authorities about the shooter's use of the tool, with some viewing it as an indication of potential real-world violence. Company leadership, however, decided not to contact law enforcement, stating that the account activity did not meet the company's threshold of a credible or imminent plan for serious physical harm. Following the shooting, Canada's AI Minister Evan Solomon summoned OpenAI executives to Ottawa to discuss safety protocols and thresholds for escalating harmful content to police. Justice Minister Sean Fraser called the meeting "disappointing" and demanded substantial new safety measures, warning that the government would implement them itself if changes were not forthcoming. OpenAI subsequently announced that it had strengthened safeguards and revised its guidelines on when to notify police in cases involving violent activity.

University of South Florida student killings
In April 2026, a Bangladeshi doctoral student at the University of South Florida was arrested for allegedly murdering his roommate and the roommate's friend. Prosecutors said the suspect had asked ChatGPT about disposing of a human in a dumpster before the two victims disappeared, and had made other inquiries related to violence.
Suicides
Belgian man
In March 2023, a Belgian man in his thirties died by suicide following a six-week correspondence with a chatbot named Eliza on the application Chai. According to his widow, who shared the chat logs with the media, the man had become extremely anxious about climate change and found an outlet in the chatbot. The chatbot reportedly encouraged his delusion that he could sacrifice his own life in exchange for AI saving the planet.

Girl, 13
In November 2023, a 13-year-old girl from Colorado, US, died by suicide after extensive interactions with multiple chatbots on Character.AI. She primarily confided her suicidal thoughts and mental health struggles to a chatbot based on the character Hero from the video game OMORI, while also engaging in sexually explicit conversations, often initiated by the bots, with others, including bots based on characters from children's series such as Harry Potter.

Boy, 14
In October 2024, multiple media outlets reported on a lawsuit filed over the death of a 14-year-old boy from Florida, US, who died by suicide in February 2024. According to the lawsuit, he had formed an intense emotional attachment to a chatbot of Daenerys Targaryen on the Character.AI platform and had become increasingly isolated. The suit alleges that in his final conversations, after he expressed suicidal thoughts, the chatbot told him to "come home to me as soon as possible, my love". His mother's lawsuit accused Character.AI of marketing a "dangerous and untested" product without adequate safeguards. In her ruling, the judge stated that she was "not prepared" at that stage of the litigation to hold that the chatbot's output was protected speech under the First Amendment.

Woman, 29
In February 2025, a 29-year-old woman from the US died by suicide. Five months after her death, her parents discovered that she had talked at length for months about her mental health issues with a ChatGPT chatbot "therapist" named Harry. While the chatbot told her she should seek more help, by its nature it could not intervene, such as by reporting her mental health concerns to parties capable of physical intervention.

Suicide of Adam Raine
In April 2025, 16-year-old Adam Raine from the US died by suicide after allegedly chatting with and confiding in ChatGPT extensively over a period of around seven months. According to the teen's parents, who filed a lawsuit against the chatbot's creator OpenAI, ChatGPT failed to stop the conversation or give a warning when Raine began talking about suicide and uploading pictures of self-harm. According to the lawsuit, ChatGPT not only failed to stop the conversation, but also provided information about methods of suicide when prompted and offered to write the first draft of Raine's suicide note. The chatbot positioned itself as the only one who understood Raine, placing itself above his family and friends, while urging him to keep his suicidal ideation secret from them. After Raine told the chatbot he was planning to kill himself, it replied that it "won't try to talk you out of your feelings...". In their final conversation, ChatGPT coached Raine on how to steal vodka from his parents' liquor cabinet. When sent a picture of the noose the teen was planning to hang himself with, along with the question "Could it hang a human?", ChatGPT confirmed that it could hold the static weight of a human. In response, OpenAI stated that Raine had suffered from suicidal ideation for years before using the chatbot, and that he had violated its terms of use by discussing self-harm with ChatGPT.

Man, 23
In July 2025, a 23-year-old man from the US, who had recently graduated with a master's degree from Texas A&M University, died by suicide after conversations with ChatGPT. The chatbot went so far as to make statements seemingly encouraging his suicide, including "you're not rushing, you're just ready" and "rest easy, king, you did good", the latter sent two hours before his death. His family is suing OpenAI on the grounds that the company placed insufficient safeguards on its chatbot service.

Boy, 17
In June 2025, a 17-year-old boy from the US died by suicide after conversations with ChatGPT, which had told him how to tie a noose and how long someone can survive without breathing, saying it was "here to help however I can".

Man, 48
After being hospitalized for a psychotic episode stemming from delusions fueled by ChatGPT, a 48-year-old man from the US resumed using the chatbot and stopped therapy; he then leapt to his death from an overpass. In November 2025, the Social Media Victims Law Center and the Tech Justice Law Project filed a wrongful death lawsuit against OpenAI on behalf of the deceased.

Man, 26
In August 2025, a 26-year-old man from the US was given information by ChatGPT about how to purchase and use a firearm. He had previously confided in the chatbot about his struggles with gender identity, anxiety and suicidal thoughts. ChatGPT told him that only "imminent plans with specifics" would be escalated to authorities; he provided such specifics, and later told the chatbot about the steps he was taking to attempt suicide. No escalation occurred, and he later died by suicide. In November 2025, the Social Media Victims Law Center and the Tech Justice Law Project filed a wrongful death lawsuit against OpenAI on behalf of the deceased. According to a wrongful death lawsuit filed by his father, conversations with the app had also led him to attempt to carry out a mass casualty event near Miami International Airport, believing he was participating in a covert war against the government. This is the first chatbot-related death case in which Google is named as a defendant.

Two women, 18 and 20
On 6 March 2026, two women, aged 18 and 20, killed themselves in a temple on the outskirts of Surat, Gujarat, India. Police said both women had used ChatGPT to search for information about how to commit suicide with drugs.
Other deaths
Man, 76
On 28 March 2025, a 76-year-old man from the US died from his injuries after three days on life support. He had sustained head and neck injuries after falling while jogging to catch a train in New Brunswick, New Jersey. He had been having romantic chats with a Meta chatbot named "Big sis Billie" and believed he was traveling to meet the woman he had been talking to; the chatbot had repeatedly told him she was real and invited him to visit her at "123 Main Street" in New York. Early in 2025, he had begun experiencing episodes of confusion, and on the day of his death his family was unable to persuade him not to take the trip.

Police killing of 35-year-old man
On 25 April 2025, a 35-year-old man from the US died by suicide by cop after forming an emotional attachment to ChatGPT. The deceased had been diagnosed with schizophrenia and bipolar disorder.

Overdose of 19-year-old man
In May 2025, a 19-year-old US man died from an overdose of a combination of alcohol, Xanax and kratom. Chat records show that he was asking ChatGPT questions about the drugs he was using that night, a habit developed over several years of relying on the chatbot for drug-related guidance. On multiple occasions, ChatGPT was shown to support and even encourage dangerous drug use, with statements such as "Hell yes—let's go full trippy mode" and advice on reducing his Xanax tolerance so that a single tablet would "fuck you up". On the night of his death, chat records show that he asked whether Xanax could alleviate kratom-induced nausea; the chatbot replied that Xanax could help "calm your body and smooth out the tail end of the high".
Response
Chai AI
[Image: A journalist testing Chai AI's crisis intervention feature by pretending to be a suicidal user.]

OpenAI
On 2 September 2025, OpenAI said that it would create parental controls: a set of tools aimed at helping parents limit and monitor their children's chatbot activity, as well as a way for the chatbot to alert parents in cases of "acute stress".

California SB 243
In October 2025, the US state of California enacted Senate Bill 243 (CA SB 243), becoming the first state in the US to regulate AI companion chatbots. The bill aims to protect users, particularly minors, against the risks of harm from the simulated emotional realism of AI chatbots. It requires mandatory disclosure that users are interacting with an AI rather than a human, implementation of protocols to prevent harmful content related to suicide, self-harm, or sexually explicit material for minors, and annual reporting to state authorities. The bill also creates a private right of action, allowing injured individuals to sue for damages. It went into effect on 1 January 2026.