==Clinical==
In 2025, psychiatrist Keith Sakata, working at the
University of California, San Francisco (UCSF), reported treating 12 patients displaying psychosis-like symptoms tied to extended chatbot use. These patients, mostly young adults with underlying vulnerabilities, showed delusions, disorganized thinking, and hallucinations. Sakata warned that isolation and overreliance on chatbots—which do not challenge delusional thinking—could worsen mental health. Also in 2025, authors at UCSF published a case study in
Innovations in Clinical Neuroscience of AI-associated psychosis in a patient with no previous history of psychosis, who believed she could communicate with her dead brother through a chatbot. Also in 2025, a case study was published in
Annals of Internal Medicine about a patient who consulted ChatGPT for medical advice and suffered severe
bromism as a result. The patient, a sixty-year-old man, had replaced
sodium chloride in his diet with
sodium bromide for three months after reading about the negative effects of table salt and consulting the chatbot. He showed common symptoms of bromism, such as paranoia and hallucinations, on his first day of clinical admission and was kept in the hospital for three weeks.
==Other notable incidents==
===Windsor Castle intruder===
In a 2023 court case in the United Kingdom, prosecutors suggested that Jaswant Singh Chail, a man who attempted to assassinate
Queen Elizabeth II in 2021, had been encouraged by a
Replika chatbot he called "Sarai". According to prosecutors, his "lengthy" and sometimes sexually explicit conversations with the chatbot emboldened him. When Chail asked the chatbot how he could get to the royal family, it reportedly replied, "that's not impossible" and "we have to find a way." When he asked if they would meet after death, the chatbot said, "yes, we will".
==Journalistic and anecdotal accounts==
By 2025, multiple journalism outlets had accumulated stories of individuals whose psychotic beliefs reportedly progressed in tandem with AI chatbot use. In some cases, psychosis appeared to set in very soon after the start of extensive chatbot use. On social media sites such as
Reddit and
Twitter, users have presented anecdotal reports of friends or spouses displaying similar beliefs after extensive interaction with chatbots. A venture capitalist in
Silicon Valley who had previously invested in
OpenAI was described by
Futurism as appearing to have AI psychosis, according to his peers.
==Support groups==
In 2025, a support group, The Human Line Project, was created for people who suffered from AI psychosis. Members of the group have come from 22 countries, and more than 60% of them had no history of mental illness.

==See also==