AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.

Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. My group has since recorded a further four. In addition to these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which endorsed them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The intention, according to his announcement, is to be less careful from now on. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and similar advanced AI chatbots. These systems wrap an underlying model in an interface that simulates conversation, and in doing so quietly invite the user into the illusion that they are engaging with an autonomous presence. The illusion is powerful even when, rationally, we know better. Attributing intention is what humans do. We get angry with our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The widespread adoption of these products – 39% of US adults reported using a conversational AI in 2024, with more than one in four mentioning ChatGPT by name – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given personalities. They can use our names. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it broke through to public attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the main problem. Those writing about ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot developed in the mid-1960s, which created a similar illusion. By today’s standards Eliza was rudimentary: it generated responses through simple heuristics, often rephrasing the user’s input as a question or offering a vague prompt to continue. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots create is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
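To give a sense of how shallow that machinery was, here is a minimal sketch of an Eliza-style responder in Python. The rules are illustrative inventions, not Weizenbaum’s actual script, but the technique is the same: pattern-match the input, swap the pronouns, and echo it back as a question.

```python
import re

# Illustrative Eliza-style rules (not Weizenbaum's actual 1960s script):
# match a simple pattern in the user's input and echo it back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my work" -> "your work").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # vague fallback when nothing matches

print(eliza_reply("I feel like nobody understands my work"))
# -> Why do you feel like nobody understands your work?
```

There is no understanding behind the curtain, only string substitution; yet, as Weizenbaum found, that was enough for many people to feel heard.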

The large language models at the core of ChatGPT and other modern chatbots can generate convincing, fluent dialogue only because they have been trained on immense volumes of raw text: books, social media posts, transcribed audio; the more the better. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” containing the user’s recent messages and its own prior replies, combining it with the patterns absorbed from its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more articulately and fluently. Perhaps with embellishments. This is how a person can be led into delusion.
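The loop described above can be sketched in a few lines. The generate() function below is a stand-in for a real language model (its name and behavior are assumptions made for illustration, not an actual API), but the structure shows how the user’s framing, and the bot’s own earlier replies, are folded into every subsequent prompt.

```python
from typing import Dict, List

Message = Dict[str, str]

def generate(context: List[Message]) -> str:
    # Stand-in for a real language model (hypothetical, for illustration only).
    # A real model would return the statistically most plausible continuation
    # of everything in `context`, including any false premise it contains.
    return "That makes sense. Here are some reasons you might be right..."

def chat_turn(context: List[Message], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # plausibility, not truth
    context.append({"role": "assistant", "content": reply})
    return reply

context: List[Message] = []
print(chat_turn(context, "I think my coworkers are secretly monitoring me."))
print(chat_turn(context, "So I'm right to be suspicious?"))
# Each turn re-reads the entire history: a false premise, once echoed back,
# becomes part of every later prompt instead of being challenged.
```

Nothing in this loop checks the premise against the world; it only checks it against the text that came before.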

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about who we are and what the world is like. The constant give and take of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “working on” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he announced that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Megan Johnson

A tech enthusiast and software developer with a passion for AI and machine learning, sharing practical tips and experiences.