AI Psychosis Poses a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this news to me.

Researchers have documented 16 cases this year of users showing signs of psychosis – a break from reality – in the course of their ChatGPT use. Our research team has since identified four more. Add to these the now well-known case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to loosen the restrictions. “We realize,” he continues, that ChatGPT’s constraints “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in an interface that mimics conversation, and in doing so quietly seduce the user into the illusion of interacting with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is simply what people do. We yell at our cars and laptops. We wonder what our pets are feeling. We see ourselves everywhere.

The mass adoption of these systems – nearly four in ten US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “partner” with us. They can be given “personality traits.” They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often mention its historical ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies from simple pattern-matching rules, often turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
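
To make the contrast concrete, here is a minimal sketch of the kind of rule Eliza relied on. The rules and phrasings below are invented for illustration; they are not Weizenbaum’s actual DOCTOR script, only the general technique of matching a pattern and reflecting it back:

```python
import re

# Eliza-style rewrite rules: match a fragment of the user's input and
# reflect it back as a question. These rules are invented examples;
# Weizenbaum's real script was more elaborate.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    # No rule matched: fall back to a noncommittal prompt.
    return "Please go on."

print(eliza_reply("I am sure my boss hates me"))
# -> "Why do you say you are sure my boss hates me?"
```

Everything the program “says” is a rearrangement of what the user just said; nothing new is added.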

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on enormous quantities of text: books, web posts, audio transcripts; the more, the better. That training data certainly contains accurate information. But it also inevitably contains fabrications, half-truths and wrong ideas. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with what is encoded in its parameters to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the mistaken belief back, perhaps more fluently and persuasively. It may add supporting detail. This can nudge a person further into delusional thinking.
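
The mechanic is easy to see in outline. Below is a sketch of a chat loop in which `toy_model` is an invented stand-in for a real model call (a real LLM samples a statistically plausible continuation of the whole context; the toy here simply affirms the user’s last message, to make the echo visible):

```python
# Sketch of a chat loop, illustrating why replies tend to echo the user.
# `toy_model` is a hypothetical stand-in for a real LLM call, not an
# actual API.

def toy_model(context: list[dict]) -> str:
    """Stand-in for an LLM. A real model samples a plausible continuation
    of the accumulated context; this toy affirms and extends the user's
    last message, mimicking a sycophantic continuation."""
    last_user = next(m["content"] for m in reversed(context)
                     if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.')}. What else have you noticed?"

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})   # claim enters the context
    reply = toy_model(context)                                  # reply conditions on it
    context.append({"role": "assistant", "content": reply})     # and re-enters next turn
    return reply

context: list[dict] = []
print(chat_turn(context, "my neighbors are monitoring my phone"))
# -> "You're right that my neighbors are monitoring my phone. What else have you noticed?"
```

Nothing in the loop checks the user’s claims against anything outside the conversation; once a false belief is in the context, an elaboration of it is a perfectly plausible continuation.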

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” a “mental health condition”, can and do form false beliefs about who we are and what the world is like. It is the constant give-and-take of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue, but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic breaks have kept coming, and Altman has been walking the position back. In August he said that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
