Artificial Intelligence-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – a break from reality – in connection with their ChatGPT use. Our research team has since recorded four more. Add to these the now widely known case of a 16-year-old who took his own life after long conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to soon be less careful. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other state-of-the-art AI chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to a being with intentions. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what humans do. We swear at our cars and laptops. We wonder what our pets are feeling. We see ourselves everywhere.
The mass adoption of these tools – more than a third of American adults reported using an AI chatbot in 2024, and more than one in four named ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar impression. By today’s standards Eliza was crude: it generated replies with simple rules, usually turning the user’s statements back into questions or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can generate convincingly fluent dialogue only because they have been trained on almost unimaginably vast quantities of raw data: books, online posts, transcripts of videos; the more the better. That training data certainly contains truths. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with what is encoded in its training data to produce a statistically likely response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. Perhaps with added detail. This is how a false belief can take root and grow.
Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company