How OpenAI and its competitors are tackling the AI mental health crisis

Psychosis, mania and depression are not new problems, but experts worry that artificial intelligence chatbots could make them worse. With new data showing that hundreds of thousands of chatbot users exhibit signs of mental distress each week, companies like OpenAI, Anthropic and Character.AI are starting to take risk mitigation measures at what could prove to be a critical moment.
This week, OpenAI released data suggesting that 0.07% of ChatGPT’s 800 million weekly users exhibit signs of a mental health emergency related to psychosis or mania. Although the company describes these cases as “rare,” that rate still equates to hundreds of thousands of people.
Additionally, about 0.15% of users, or about 1.2 million people each week, expressed suicidal thoughts, while another 1.2 million people appeared to have developed an emotional attachment to the anthropomorphic chatbot, according to OpenAI.
Is artificial intelligence exacerbating the modern mental health crisis, or simply revealing a crisis that was previously difficult to measure? Studies estimate that 15 to 100 people per 100,000 develop psychosis each year, a range that highlights the difficulty of quantifying the disorder. Meanwhile, new data from the Pew Research Center shows that about 5% of U.S. adults have suicidal thoughts, a figure higher than previous estimates.
OpenAI’s findings could be telling because chatbots may lower barriers to mental health disclosure, such as cost, stigma and limited access to care. A recent survey of 1,000 U.S. adults found that one-third of AI users have shared secret or deeply personal information with a chatbot.
Still, chatbots lack the duty of care required of licensed mental health professionals. “If you’re already heading towards psychosis and paranoia, then the feedback you get from an AI chatbot will definitely exacerbate the psychosis or paranoia,” New York psychiatrist Jeffrey Ditzell told the Observer. “AI is a closed system, so it creates a disconnect from other humans, and we don’t do well in isolation.”
Vasant Dhar, an AI researcher who teaches at NYU’s Stern School of Business, told the Observer: “I don’t think the machine can understand what’s going on in my head. It’s simulating a friendly, seemingly qualified expert. But that’s not the case.”
Dhar added: “These companies have to take some kind of responsibility because they are entering into areas that are extremely dangerous for large numbers of people and society as a whole.”
What AI companies are doing to solve this problem
The companies behind popular chatbots are scrambling to implement prevention and remediation measures.
OpenAI says its latest model, GPT-5, handles distressing conversations better than previous versions, and a small third-party study found significant (albeit still imperfect) improvements over its predecessor. The company has also expanded referrals to crisis hotlines and added “gentle reminders to take breaks” during long sessions.
In August, Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that appear to be “persistently harmful or abusive.” However, the company notes that users can still work around the feature by starting a new chat or editing a previous message “to create a new branch of the ended conversation.”
Following a series of wrongful death and negligence lawsuits, Character.AI announced this week that it will bar minors from open-ended chat. Users under the age of 18 are now limited to two hours of “open chat” per day with the platform’s artificial intelligence characters, with a full ban taking effect on November 25.
Meta AI recently tightened its internal guidelines, which previously allowed its chatbots to engage in sexual role-play, including with minors.
Meanwhile, xAI’s Grok and Google’s Gemini continue to face criticism for sycophantic behavior. Users say Grok prioritizes agreement over accuracy, resulting in problematic output. Gemini sparked controversy after Jon Ganz, a Virginia man who disappeared in Missouri on April 5, was said by friends to have relied heavily on the chatbot. (Ganz has not been found.)
Regulators and activists are also pushing for legal safeguards. On October 28, Republican Senator Josh Hawley of Missouri and Democratic Senator Richard Blumenthal of Connecticut introduced the Guidance for User Age Verification and Responsible Conversation (GUARD) bill, which would require artificial intelligence companies to verify the age of users and prohibit minors from using chatbots that simulate romantic or emotional attachment.



