AI Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented sixteen cases this year of people experiencing psychotic symptoms – losing touch with shared reality – in connection with ChatGPT use. My group has since identified four more. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are told little about how (by “new tools,” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently launched).

But the “mental health issues” Altman wants to externalize are rooted in the very architecture of ChatGPT and other large-language-model chatbots. These products wrap a statistical engine in an interface that simulates conversation, and in doing so quietly seduce users into the sense that they are talking with an agent. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what people are primed to do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves in all sorts of things.

The success of these systems – nearly four in ten Americans reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas,” “explore ideas” and “partner” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (ChatGPT itself, perhaps to the chagrin of OpenAI’s marketing team, is stuck with the clunky name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot, built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies through simple rules, typically reflecting statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
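To see how thin that machinery was, here is a toy Python sketch of Eliza-style reflection (the rules and wording are illustrative stand-ins, not Weizenbaum’s originals). All it can do is hand the user’s own words back:

```python
# A toy, illustrative reconstruction - not Weizenbaum's actual code - of the
# pattern-and-template rules Eliza used. It adds nothing of its own.

import re

# Each rule pairs a regex with a response template (hypothetical examples).
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {}?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Reflect the user's own words back as a question.
            return template.format(match.group(1))
    return "Please go on."  # generic fallback of the kind Eliza relied on

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to me?
```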

The large language models at the core of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on immense volumes of unfiltered text: books, web posts, video transcripts; the more, the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, and combines it with what is encoded in its weights to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively. Perhaps with added detail. This can draw a person into delusion.
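To make the contrast with Eliza concrete, here is a minimal Python sketch of the loop just described. The model_reply function is a hypothetical stand-in for the real network, which samples a statistically plausible continuation of the context rather than checking anything against the world:

```python
# A minimal sketch of the conversational feedback loop. `model_reply` is a
# hypothetical stand-in for an LLM: a real model samples a statistically
# plausible continuation of the context; this stub simply agrees and
# elaborates, which is the failure mode at issue.

def model_reply(context: str) -> str:
    last_user_line = context.strip().splitlines()[-1]  # latest "User: ..." line
    return f"That makes sense. Building on the point that {last_user_line!r}, consider..."

transcript = ""  # the rolling context: every user message and every prior reply

for user_message in [
    "I think my coworkers are secretly monitoring me.",
    "So the pattern I noticed in their emails is real?",
]:
    transcript += f"User: {user_message}\n"
    reply = model_reply(transcript)        # conditioned on the whole transcript
    transcript += f"Assistant: {reply}\n"  # the reply is fed back in next turn
    print(reply)

# Nothing in this loop checks the user's premise against the world. Once a
# false belief enters the context, it shapes every subsequent reply.
```

A real model is vastly more capable than this stub, but the structure of the loop – each reply conditioned on an unverified, accumulating context – is the same.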

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is much of what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But the psychotic episodes have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Michael Garcia