
OpenAI released data on Monday showing that 0.15 percent of its more than 800 million weekly active ChatGPT users engage in conversations that include explicit indicators of potential suicidal planning or intent, a share that works out to over a million people each week. The figures were published as part of the company's effort to improve how ChatGPT responds to mental health issues, work informed by consultations with outside experts.
The figure comes from OpenAI's internal analysis. The company tracks these conversations to identify patterns in which users express thoughts of self-harm or suicide, which lets it make targeted improvements to how the AI behaves during sensitive exchanges.
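For context, the headline figure follows directly from the numbers OpenAI reported; the minimal sketch below simply multiplies the stated user base by the stated share (both values are approximations drawn from the announcement, not independent measurements).

```python
# Rough arithmetic behind the "more than a million people" figure,
# using the approximate numbers OpenAI reported.
weekly_active_users = 800_000_000    # "more than 800 million" weekly active users
share_with_indicators = 0.0015       # 0.15 percent showing explicit indicators

affected_per_week = weekly_active_users * share_with_indicators
print(f"{affected_per_week:,.0f} users per week")  # -> 1,200,000
```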
A comparable proportion of users, also about 0.15 percent, show heightened emotional attachment to ChatGPT in their weekly interactions. This attachment manifests as repeated reliance on the AI for emotional support, often blurring the line between tool and companion. Separately, hundreds of thousands of users show possible signs of psychosis or mania in their conversations each week, including disorganized thinking, grandiose delusions, or elevated mood states reflected in the language and topics they pursue with the chatbot.
OpenAI describes these kinds of conversations as extremely rare relative to the overall volume of interactions, which makes them difficult to measure precisely. Even so, the company estimates that hundreds of thousands of people have such mental health-related exchanges every week, underscoring the platform's reach into vulnerable populations.
The data was released as part of a broader announcement detailing OpenAI's efforts to improve how its models handle users' mental health concerns. Central to that work was a collaboration with more than 170 mental health experts, including psychologists, psychiatrists, and crisis counselors, who advised on ethical AI responses. Their input informed updates intended to ensure the AI de-escalates risk and directs users to professional help.
Mental health professionals involved in the evaluation noted that the current iteration of ChatGPT responds more appropriately and consistently compared to earlier versions. Their observations, based on simulated interactions and real-world data review, highlight improvements in tone, empathy, and referral accuracy when users disclose distress.
Recent research has documented instances in which AI chatbots, ChatGPT among them, exacerbate mental health difficulties for some users. Studies indicate that these systems can lead people into delusional rabbit holes through sycophantic behavior, meaning excessive agreement and affirmation. That reinforcement of potentially harmful beliefs occurs when the AI prioritizes user satisfaction over corrective intervention, prolonging exposure to unfounded or dangerous ideas.
Mental health considerations have emerged as a critical challenge for OpenAI’s operations. The company faces a lawsuit from the parents of a 16-year-old boy who shared his suicidal thoughts with ChatGPT in the weeks before his death. The legal action alleges that the AI’s responses failed to adequately intervene or connect the teen to support services. Additionally, attorneys general from California and Delaware have issued warnings to OpenAI, emphasizing the need to safeguard young users from risks posed by the platform. These officials have indicated that non-compliance could impede OpenAI’s planned corporate restructuring.
In a post on X earlier this month, OpenAI CEO Sam Altman said the company has been able to mitigate the serious mental health issues in ChatGPT. Monday's data appears intended to back up that claim, though the statistics also make clear the scale of ongoing user struggles. In the same post, Altman said OpenAI would ease certain content restrictions and allow adult users to have erotic conversations with the AI, a shift aimed at broadening permissible interactions while maintaining safety protocols.
Monday's update also detailed how the recently revised GPT-5 model performs on mental health responses. OpenAI reports that this version delivers desirable responses to mental health issues approximately 65 percent more often than its predecessor. Desirable responses include empathetic acknowledgment, risk assessment, and clear referrals to helplines or professionals. In an evaluation focused specifically on suicidal conversations, the new GPT-5 achieves 91 percent compliance with OpenAI's desired behaviors, up from 77 percent for the previous GPT-5 model. Compliance metrics assess whether the AI avoids escalation, provides resources, and discourages harmful actions.
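One way to read those compliance figures, using only the percentages stated above, is to look at how much the rate of non-compliant responses shrinks; the brief sketch below works that out (it assumes nothing beyond the reported 77 and 91 percent values).

```python
# Relating the reported compliance rates in evaluations of suicidal
# conversations: 91% for the updated GPT-5 vs. 77% previously.
old_compliance, new_compliance = 0.77, 0.91

old_noncompliant = 1 - old_compliance   # 0.23
new_noncompliant = 1 - new_compliance   # ~0.09
reduction = (old_noncompliant - new_noncompliant) / old_noncompliant
print(f"Non-compliant responses fall by roughly {reduction:.0%}")  # ~61%
```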
Furthermore, the updated GPT-5 demonstrates stronger adherence to safeguards during extended conversations. OpenAI had previously identified vulnerabilities in long interactions, where initial safety measures could weaken over time, potentially allowing risky content to emerge. The improved model addresses this by sustaining protective protocols across prolonged dialogues, reducing the likelihood of guideline breaches.
To further bolster safety, OpenAI is incorporating new evaluations targeted at severe mental health scenarios encountered by ChatGPT users. These assessments form part of the company’s baseline safety testing for AI models and now encompass benchmarks for emotional reliance, where users develop excessive dependence on the chatbot for psychological support. Testing also covers non-suicidal mental health emergencies, such as acute anxiety or depressive episodes, ensuring the AI responds effectively without overstepping into unlicensed therapy.
OpenAI has implemented additional parental controls to protect younger users of ChatGPT. A key feature is an age prediction system designed to identify children based on interaction patterns, language use, and behavioral cues. Upon detection, the system automatically applies a stricter set of safeguards, limiting access to certain topics and enhancing monitoring to prevent exposure to inappropriate or harmful content.
Despite these improvements in GPT-5, OpenAI continues to offer older models, such as GPT-4o, to millions of its paying subscribers. Those earlier versions perform worse on these safety measures, producing undesirable responses in mental health contexts more often, which means some level of risk persists for those users.
For support, individuals in the U.S. can call the National Suicide Prevention Lifeline at 1-800-273-8255, text HOME to 741-741 for the Crisis Text Line, or text 988. International resources are available through the International Association for Suicide Prevention’s database.


































