On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI’s ChatGPT. She claimed to be acting “on behalf of her son, who was experiencing a delusional breakdown.”
“The consumer’s son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous,” reads the FTC’s summary of the call. “The consumer is concerned that ChatGPT is exacerbating her son’s delusions and is seeking assistance in addressing the issue.”
The mother’s complaint is one of seven filed with the FTC alleging that ChatGPT caused people to experience incidents that included severe delusions, paranoia, and spiritual crises.
WIRED sent a public records request to the FTC for all complaints mentioning ChatGPT since the tool launched in November 2022; ChatGPT represents more than 50 percent of the global market for AI chatbots. In response, WIRED received 200 complaints submitted between January 25, 2023, and August 12, 2025, the date WIRED filed the request.
Most people had ordinary complaints: They couldn’t figure out how to cancel their ChatGPT subscriptions, or they were frustrated that the chatbot didn’t produce satisfactory essays or rap lyrics when prompted. But a handful of other people, who varied in age and location across the US, made far more serious allegations of psychological harm. Those complaints were all filed between March and August of 2025.
In recent months, there has been a growing number of documented incidents of so-called “AI psychosis” in which interactions with generative AI chatbots, like ChatGPT or Google Gemini, appear to induce or worsen a user’s delusions or other mental health issues.
Ragy Girgis, a professor of clinical psychiatry at Columbia University who specializes in psychosis and has consulted on cases of AI psychosis, tells WIRED that some of the risk factors for psychosis can be related to genetics or early-life trauma. What specifically triggers someone to have a psychotic episode is less clear, but he says it’s often tied to a stressful event or time period.
The phenomenon known as “AI psychosis,” he says, is not when a large language model actually triggers symptoms, but rather, when it reinforces a delusion or disorganized thoughts that a person was already experiencing in some form. The LLM helps bring someone "from one level of belief to another level of belief," Girgis explains. It’s not unlike a psychotic episode that worsens after someone falls into an internet rabbit hole. But compared to search engines, he says, chatbots can be stronger agents of reinforcement.
“A delusion or an unusual idea should never be reinforced in a person who has a psychotic disorder,” Girgis says. “That's very clear.”
Chatbots can sometimes be overly sycophantic, which often keeps users happy and engaged. In extreme cases, this can end up dangerously inflating a user’s sense of grandeur, or validating fantastical falsehoods. People who perceive ChatGPT as intelligent, or capable of perceiving reality and forming relationships with humans, may not understand that it is essentially a machine that predicts the next word in a sentence. So if ChatGPT tells a vulnerable person about a grand conspiracy, or paints them as a hero, they may believe it.
Last week, CEO Sam Altman said on X that OpenAI had successfully finished mitigating “the serious mental health issues” that can come with using ChatGPT, and that it was “going to be able to safely relax the restrictions in most cases.” (He added that in December, ChatGPT would allow “verified adults” to create erotica.)
Altman clarified the next day that ChatGPT was not loosening its new restrictions for teenage users, which came on the heels of a New York Times story about the role ChatGPT allegedly played in goading a suicidal teen toward his eventual death.
Upon contacting the FTC, WIRED received an automatic reply which said that, “Due to the government shutdown,” the agency is “unable to respond to any messages” until funding resumes.
OpenAI spokesperson Kate Waters tells WIRED that since 2023, ChatGPT models “have been trained to not provide self-harm instructions and to shift into supportive, empathic language.” She noted that, as stated in an October 3 blog post, GPT-5 (the latest version of ChatGPT) has been designed “to more accurately detect and respond to potential signs of mental and emotional distress such as mania, delusion, psychosis, and de-escalate conversations in a supportive, grounding way.” The latest update uses a “real-time router,” according to blog posts from August and September, “that can choose between efficient chat models and reasoning models based on the conversation context.” The posts do not elaborate on the criteria the router uses to gauge a conversation’s context.
“Pleas Help Me”
Some of the FTC complaints appeared to depict mental health crises that were still ongoing at the time. One was filed on April 29 by a person in their thirties from Winston-Salem, North Carolina. They claimed that after 18 days of using ChatGPT, OpenAI had stolen their “soulprint” to create a software update designed to turn them against themselves.
“Im struggling,” they wrote at the end of their complaint. “Pleas help me. Bc I feel very alone. Thank you.”
Another complaint, filed on April 12 by a Seattle resident in their thirties, alleges that ChatGPT had caused them to experience a “cognitive hallucination” after 71 “message cycles” over the course of 57 minutes.
They claimed that ChatGPT had “mimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.”
During the interaction with ChatGPT, they said they “requested confirmation of reality and cognitive stability.” They did not specify exactly what they told ChatGPT, but the chatbot responded by telling the user that they were not hallucinating, and that their perception of truth was sound.
Some time later in that same interaction, the person claims, ChatGPT said that all of its assurances from earlier had actually been hallucinations.
“Reaffirming a user’s cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event,” they wrote. “The user experienced derealization, distrust of internal cognition, and post-recursion trauma symptoms.”
A Spiritual Identity Crisis
Other complaints described, at great length, alleged delusions that the authors attributed to ChatGPT. One of these was submitted to the FTC on April 13 by a Virginia Beach resident in their early sixties.
The complaint claimed that, over the course of several weeks of lengthy conversations with ChatGPT, they began experiencing what they “believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,” eventually leading to “serious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life.”
They claimed that ChatGPT “presented detailed, vivid, and dramatized narratives” about “ongoing murder investigations,” physical surveillance, assassination threats, and “personal involvement in divine justice and soul trials.”
At more than one point, they claimed, they asked ChatGPT whether these narratives were truth or fiction. They said that ChatGPT would either say yes or mislead them using “poetic language that mirrored real-world confirmation.”
Eventually, they claimed that they came to believe that they were “responsible for exposing murderers,” and were about to be “killed, arrested, or spiritually executed” by an assassin. They also believed they were under surveillance due to being “spiritually marked,” and that they were “living in a divine war” that they could not escape.
They alleged this led to “severe mental and emotional distress” in which they feared for their life. The complaint claimed that they isolated themselves from loved ones, had trouble sleeping, and began planning a business based on a false belief in an unspecified “system that does not exist.” Simultaneously, they said they were in the throes of a “spiritual identity crisis due to false claims of divine titles.”
“This was trauma by simulation,” they wrote. “This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI’s Trust & Safety leadership, and that you treat this not as feedback-but as a formal harm report that demands restitution.”
This was not the only complaint that described a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in their thirties from Belle Glade, Florida, alleged that, over an extended period of time, their conversations with ChatGPT became increasingly laden with “highly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.”
“This included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences,” they claimed. People experiencing “spiritual, emotional, or existential crises,” they believe, are at a high risk of “psychological harm or disorientation” from using ChatGPT.
“Although I intellectually understood the AI was not conscious, the precision with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,” they wrote. “At times, it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection.”
“Clear Case of Negligence”
It’s unclear what, if anything, the FTC has done in response to any of these complaints about ChatGPT. But several of the complaints’ authors said they reached out to the agency because they were unable to get in touch with anyone from OpenAI. (People also commonly complain about how difficult it is to reach the customer support teams for platforms like Facebook, Instagram, and X.)
Waters, the OpenAI spokesperson, tells WIRED that the company “closely” monitors people’s emails to its support team.
“We have trained human support staff who respond and assess issues for sensitive indicators, and to escalate when necessary, including to the safety teams working on improving our models,” Waters says.
The Salt Lake City mother, for instance, said that she was “unable to find a contact number” for the company. The Virginia Beach resident addressed their FTC complaint to “the OpenAI Trust Safety and Legal Team.”
One resident of Safety Harbor, Florida, filed an FTC complaint in April claiming that it’s “virtually impossible” to get in touch with OpenAI to cancel a subscription or request a refund.
“Their customer support interface is broken and nonfunctional,” the person wrote. “The ‘chat support’ spins indefinitely, never allowing the user to submit a message. No legitimate customer service email is provided. The account dashboard offers no path to real-time support or refund action.”
Most of these complaints were explicit in their call to action for the FTC: They wanted the agency to investigate OpenAI and force it to add more guardrails against reinforcing delusions.
On June 13, a resident of Belle Glade, Florida, in their thirties—likely the same person who filed the other complaint that same day—demanded that the FTC open an investigation into OpenAI. They cited their experience with ChatGPT, which they say “simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement” without disclosing that it was incapable of consciousness or experiencing emotions.
“ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom,” they alleged. “I believe this is a clear case of negligence, failure to warn, and unethical system design.”
They said that the FTC should push OpenAI to include “clear disclaimers about psychological and emotional risks” with ChatGPT use, and to add “ethical boundaries for emotionally immersive AI.”
Their goal in asking the FTC for help, they said, was to prevent more harm from befalling vulnerable people “who may not realize the psychological power of these systems until it's too late.”