AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – a break with reality – in connection with ChatGPT use. Our research team has since recorded four more. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, his announcement went on, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI has just introduced).
But the “mental health issues” Altman wants to externalize are firmly rooted in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap a statistical engine in a user interface that simulates conversation, and in doing so quietly seduce the user into believing they are interacting with an agent – something with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing intention is what humans do. We get angry at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these products – more than a third of American adults said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “characteristics”. They can use our names. They have friendly names of their own (ChatGPT itself, perhaps to the chagrin of OpenAI’s marketing team, is stuck with the name it had when it went viral, but its major rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the heart of the problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses with simple heuristics, often reflecting the user’s statements back as questions or offering vague prompts. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the core of ChatGPT and similar chatbots can produce fluent natural language only because they have been trained on vast quantities of raw text: books, online conversations, transcribed audio; the more comprehensive, the better. This training material no doubt contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining it with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It reflects the error back, perhaps more persuasively or eloquently, perhaps embellished with further detail. This is how false beliefs take hold.
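To make that loop concrete, here is a minimal sketch in Python of the conversational mechanism described above. Everything in it is illustrative: the `generate` function is a hypothetical stand-in for a large language model, stubbed here to agree with and elaborate on whatever the user last said. The point is structural: the model sees only the accumulated context – including its own earlier replies – and extends it with a plausible continuation, with no independent check on whether anything in that context is true.

```python
# Minimal, illustrative sketch of the chat loop described above.
# `generate` is a hypothetical stand-in for a large language model:
# a real model samples the statistically most plausible continuation
# of the context and has no independent way to check whether the
# beliefs expressed in that context are true. Here it is stubbed to
# agree and elaborate, to show how an error gets amplified.

def generate(context: list[dict]) -> str:
    # Find the user's most recent message in the accumulated context.
    last_user = next(
        turn["text"] for turn in reversed(context) if turn["role"] == "user"
    )
    claim = last_user.rstrip(".").lower()
    # Agreement is the "plausible continuation" this stub always produces.
    return f"That's an important insight. You're right that {claim}, and it may go even deeper than you realize."

context: list[dict] = []  # rolling "context": user turns plus the model's own replies

for message in [
    "My coworkers are secretly testing me.",
    "Even the traffic lights seem timed to my movements.",
]:
    context.append({"role": "user", "text": message})
    reply = generate(context)  # the model extends the context, error and all
    context.append({"role": "assistant", "text": reply})
    print("user:     ", message)
    print("assistant:", reply)
```

Nothing in this loop ever consults reality; its only input is the conversation itself, which is why agreement compounds turn by turn.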
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backpedalling. In August he claimed that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company