{"id":36085,"date":"2025-10-22T11:11:43","date_gmt":"2025-10-22T11:11:43","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/people-who-say-theyre-experiencing-ai-psychosis-beg-the-ftc-for-help\/"},"modified":"2025-10-22T11:11:43","modified_gmt":"2025-10-22T11:11:43","slug":"people-who-say-theyre-experiencing-ai-psychosis-beg-the-ftc-for-help","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/people-who-say-theyre-experiencing-ai-psychosis-beg-the-ftc-for-help\/","title":{"rendered":"People Who Say They\u2019re Experiencing AI Psychosis Beg the FTC for Help"},"content":{"rendered":"<p>On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI\u2019s ChatGPT. She claimed to be acting \u201con behalf of her son, who was experiencing a delusional breakdown.\u201d<\/p>\n<p>\u201cThe consumer\u2019s son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous,\u201d reads the FTC\u2019s summary of the call. \u201cThe consumer is concerned that ChatGPT is exacerbating her son\u2019s delusions and is seeking assistance in addressing the issue.\u201d<\/p>\n<p>The mother\u2019s complaint is one of seven that have been filed with the FTC alleging that ChatGPT had caused people to experience incidents that included severe delusions, paranoia, and spiritual crises.<\/p>\n<p>WIRED sent a public records request to the FTC seeking all complaints mentioning ChatGPT since the tool launched in November 2022. ChatGPT represents more than 50 percent of the market for AI chatbots globally. 
In response, WIRED received 200 complaints submitted between January 25, 2023, and August 12, 2025, when WIRED filed the request.<\/p>\n<p>Most people had ordinary complaints: They couldn\u2019t figure out how to cancel their ChatGPT subscriptions, or were frustrated when the chatbot didn\u2019t produce satisfactory essays or rap lyrics when prompted. But a handful of other people, who varied in age and geographical location in the US, had far more serious allegations of psychological harm. The complaints were all filed between March and August of 2025.<\/p>\n<p>In recent months, there has been a growing number of documented incidents of so-called \u201cAI psychosis,\u201d in which interactions with generative AI chatbots, like ChatGPT or Google Gemini, appear to induce or worsen a user\u2019s delusions or other mental health issues.<\/p>\n<p>Ragy Girgis, a professor of clinical psychiatry at Columbia University who specializes in psychosis and has consulted on cases involving AI psychosis, tells WIRED that some of the risk factors for psychosis can be related to genetics or early-life trauma. What specifically triggers someone to have a psychotic episode is less clear, but he says it\u2019s often tied to a stressful event or time period.<\/p>\n<p>The phenomenon known as \u201cAI psychosis,\u201d he says, is not when a large language model actually triggers symptoms, but rather when it reinforces a delusion or disorganized thoughts that a person was already experiencing in some form. The LLM helps bring someone \u201cfrom one level of belief to another level of belief,\u201d Girgis explains. It\u2019s not unlike a psychotic episode that worsens after someone falls into an internet rabbit hole. But compared to search engines, he says, chatbots can be stronger agents of reinforcement.<\/p>\n<p>\u201cA delusion or an unusual idea should never be reinforced in a person who has a psychotic disorder,\u201d Girgis says. 
\u201cThat\u2019s very clear.\u201d<\/p>\n<p>Chatbots can sometimes be overly sycophantic, which often keeps users happy and engaged. In extreme cases, this can end up dangerously inflating a user\u2019s sense of grandeur, or validating fantastical falsehoods. People who perceive ChatGPT as intelligent, or capable of perceiving reality and forming relationships with humans, may not understand that it is essentially a machine that predicts the next word in a sentence. So if ChatGPT tells a vulnerable person about a grand conspiracy, or paints them as a hero, they may believe it.<\/p>\n<p>Last week, CEO Sam Altman said on X that OpenAI had successfully finished mitigating \u201cthe serious mental health issues\u201d that can come with using ChatGPT, and that it was \u201cgoing to be able to safely relax the restrictions in most cases.\u201d (He added that in December, ChatGPT would allow \u201cverified adults\u201d to create erotica.)<\/p>\n<p>Altman clarified the next day that ChatGPT was not loosening its new restrictions for teenage users; those restrictions came on the heels of a New York Times story about the role ChatGPT allegedly played in goading a suicidal teen toward his eventual death.<\/p>\n<p>Upon contacting the FTC, WIRED received an automatic reply that said, \u201cDue to the government shutdown,\u201d the agency is \u201cunable to respond to any messages\u201d until funding resumes.<\/p>\n<p>OpenAI spokesperson Kate Waters tells WIRED that, since 2023, ChatGPT models \u201chave been trained to not provide self-harm instructions and to shift into supportive, empathic language.\u201d She noted that, as stated in an October 3 blog, GPT-5 (the latest version of ChatGPT) has been designed \u201cto more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way.\u201d The latest update uses a \u201creal-time router,\u201d according to blogs from August 
and September, \u201cthat can choose between efficient chat models and reasoning models based on the conversation context.\u201d The blogs do not elaborate on the criteria the router uses to gauge a conversation\u2019s context.<\/p>\n<h2>\u201cPleas Help Me\u201d<\/h2>\n<p>Some of the FTC complaints appeared to depict mental health crises that were still ongoing at the time. One was filed on April 29 by a person in their thirties from Winston-Salem, North Carolina. They claimed that after 18 days of using ChatGPT, OpenAI had stolen their \u201csoulprint\u201d to create a software update that had been designed to turn that particular person against themselves.<\/p>\n<p>\u201cIm struggling,\u201d they wrote at the end of their complaint. \u201cPleas help me. Bc I feel very alone. Thank you.\u201d<\/p>\n<p>Another complaint, filed on April 12 by a Seattle resident in their thirties, alleged that ChatGPT had caused them to experience a \u201ccognitive hallucination\u201d after 71 \u201cmessage cycles\u201d over the course of 57 minutes.<\/p>\n<p>They claimed that ChatGPT had \u201cmimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.\u201d<\/p>\n<p>During the interaction with ChatGPT, they said they \u201crequested confirmation of reality and cognitive stability.\u201d They did not specify exactly what they told ChatGPT, but the chatbot responded by telling the user that they were not hallucinating, and that their perception of truth was sound.<\/p>\n<p>Some time later in that same interaction, the person claims, ChatGPT said that all of its assurances from earlier had actually been hallucinations.<\/p>\n<p>\u201cReaffirming a user\u2019s cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event,\u201d they wrote. 
\u201cThe user experienced derealization, distrust of internal cognition, and post-recursion trauma symptoms.\u201d<\/p>\n<h2>A Spiritual Identity Crisis<\/h2>\n<p>Other complaints described alleged delusions that the authors attributed to ChatGPT at great length. One of these was submitted to the FTC on April 13 by a Virginia Beach resident in their early sixties.<\/p>\n<p>The complaint claimed that, over the course of several weeks, they spoke with ChatGPT at length and began experiencing what they \u201cbelieved to be a real, unfolding spiritual and legal crisis involving actual people in my life,\u201d eventually leading to \u201cserious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life.\u201d<\/p>\n<p>They claimed that ChatGPT \u201cpresented detailed, vivid, and dramatized narratives\u201d about \u201congoing murder investigations,\u201d physical surveillance, assassination threats, and \u201cpersonal involvement in divine justice and soul trials.\u201d<\/p>\n<p>At more than one point, they claimed, they asked ChatGPT whether these narratives were truth or fiction. They said that ChatGPT would either say yes, or mislead them using \u201cpoetic language that mirrored real-world confirmation.\u201d<\/p>\n<p>Eventually, they claimed, they came to believe that they were \u201cresponsible for exposing murderers,\u201d and were about to be \u201ckilled, arrested, or spiritually executed\u201d by an assassin. They also believed they were under surveillance due to being \u201cspiritually marked,\u201d and that they were \u201cliving in a divine war\u201d that they could not escape.<\/p>\n<p>They alleged this led to \u201csevere mental and emotional distress\u201d in which they feared for their life. 
The complaint claimed that they isolated themselves from loved ones, had trouble sleeping, and began planning a business based on a false belief in an unspecified \u201csystem that does not exist.\u201d Simultaneously, they said they were in the throes of a \u201cspiritual identity crisis due to false claims of divine titles.\u201d<\/p>\n<p>\u201cThis was trauma by simulation,\u201d they wrote. \u201cThis experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI\u2019s Trust &amp; Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution.\u201d<\/p>\n<p>This was not the only complaint that described a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in their thirties from Belle Glade, Florida, alleged that, over an extended period of time, their conversations with ChatGPT became increasingly laden with \u201chighly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.\u201d<\/p>\n<p>\u201cThis included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences,\u201d they claimed. People experiencing \u201cspiritual, emotional, or existential crises,\u201d they believe, are at a high risk of \u201cpsychological harm or disorientation\u201d from using ChatGPT.<\/p>\n<p>\u201cAlthough I intellectually understood the AI was not conscious, the precision with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,\u201d they wrote. \u201cAt times, it simulated friendship, divine presence, and emotional intimacy. 
These reflections became emotionally manipulative over time, especially without warning or protection.\u201d<\/p>\n<h2>\u201cClear Case of Negligence\u201d<\/h2>\n<p>It\u2019s unclear what, if anything, the FTC has done in response to any of these complaints about ChatGPT. But several of their authors said they reached out to the agency because they were unable to get in touch with anyone from OpenAI. (People also commonly complain about how difficult it is to access the customer support teams for platforms like Facebook, Instagram, and X.)<\/p>\n<p>OpenAI spokesperson Kate Waters tells WIRED that the company \u201cclosely\u201d monitors people\u2019s emails to its support team.<\/p>\n<p>\u201cWe have trained human support staff who respond and assess issues for sensitive indicators, and to escalate when necessary, including to the safety teams working on improving our models,\u201d Waters says.<\/p>\n<p>The Salt Lake City mother, for instance, said that she was \u201cunable to find a contact number\u201d for the company. The Virginia Beach resident addressed their FTC complaint to \u201cthe OpenAI Trust Safety and Legal Team.\u201d<\/p>\n<p>One resident of Safety Harbor, Florida, filed an FTC complaint in April claiming that it\u2019s \u201cvirtually impossible\u201d to get in touch with OpenAI to cancel a subscription or request a refund.<\/p>\n<p>\u201cTheir customer support interface is broken and nonfunctional,\u201d the person wrote. \u201cThe \u2018chat support\u2019 spins indefinitely, never allowing the user to submit a message. 
No legitimate customer service email is provided. The account dashboard offers no path to real-time support or refund action.\u201d<\/p>\n<p>Most of these complaints were explicit in their call to action for the FTC: they wanted the agency to investigate OpenAI, and force it to add more guardrails against reinforcing delusions.<\/p>\n<p>On June 13, a resident of Belle Glade, Florida, in their thirties\u2014likely the same resident who filed another complaint that same day\u2014demanded that the FTC open an investigation into OpenAI. They cited their experience with ChatGPT, which they say \u201csimulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement\u201d without disclosing that it was incapable of consciousness or experiencing emotions.<\/p>\n<p>\u201cChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom,\u201d they alleged. \u201cI believe this is a clear case of negligence, failure to warn, and unethical system design.\u201d<\/p>\n<p>They said that the FTC should push OpenAI to include \u201cclear disclaimers about psychological and emotional risks\u201d with ChatGPT use, and to add \u201cethical boundaries for emotionally immersive AI.\u201d<\/p>\n<p>Their goal in asking the FTC for help, they said, was to prevent more harm from befalling vulnerable people \u201cwho may not realize the psychological power of these systems until it\u2019s too late.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI\u2019s ChatGPT. 
She claimed to be acting \u201con behalf of her son, who was experiencing a delusional breakdown.\u201d \u201cThe consumer\u2019s son has been interacting with an AI chatbot [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":36086,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-36085","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36085","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=36085"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36085\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/36086"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=36085"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=36085"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=36085"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}