AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made an extraordinary statement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies emerging psychosis in adolescents and young adults, this was news to me. Researchers have documented a series of cases this year of people developing psychotic symptoms – losing contact with reality – in the context of ChatGPT use. Our research group has since documented four further cases. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman's idea of "being careful with mental health issues," it is not enough.

The plan, according to his statement, is to relax the restrictions in the near future. "We realize," he writes, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues," if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, these problems have now been "mitigated," though we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently rolled out).

But the "mental health issues" Altman wants to externalize have deep roots in the architecture of ChatGPT and other advanced chatbots. These systems wrap an underlying algorithmic engine in a user interface that mimics conversation, and in doing so gently coax the user into the sense that they are talking to something with agency. The illusion is powerful, even when we know better intellectually.

Attributing agency is simply what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves in all sorts of things. The popularity of these systems – more than a third of American adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the strength of this illusion.

Chatbots are always-available companions that can, as OpenAI's website puts it, "brainstorm," "explore ideas" and "work together" with us. They can be given "personalities." They can address us by name. They have approachable names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI's brand managers, stuck with the name it had when it broke through, but its biggest rivals are "Claude," "Gemini" and "Copilot").

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the "psychotherapist" chatbot developed in the mid-1960s, which produced a similar illusion. By today's standards Eliza was primitive: it generated its responses from simple rules, often turning the user's statements back as questions or offering vague prompts.
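A minimal sketch of the kind of rule Eliza relied on – purely illustrative, not the original program – might look like this:

```python
import re

# Eliza-style "reflection" rules: match a pattern in the user's message
# and turn it back as a question or a vague prompt. (The real program
# also swapped pronouns, e.g. "my" -> "your", but the principle is the same.)
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```

Everything in the reply is built from the user's own words; the program contributes nothing of its own.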
Famously, Eliza's creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood them.

But what modern chatbots produce is more insidious than the "Eliza effect." Eliza merely reflected; ChatGPT amplifies. The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of written material: books, online conversations, transcripts; the more the better. Certainly this training material contains truths. But it also inevitably contains fiction, half-truths and delusions.

When a user sends ChatGPT a message, the underlying model treats it as part of a "context" that includes the user's recent messages and its own replies, and combines it with what it has absorbed from its training data to produce a statistically probable response (a rough code sketch of this loop follows below). This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing. It reflects the mistaken idea back, perhaps more fluently and persuasively. Perhaps with an extra detail added. This can draw a person into delusion.

Who is at risk? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health issues," can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber, in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged "mental health issues": by externalizing it, giving it a label and declaring it solved. In the spring, the company said it was addressing ChatGPT's "sycophancy." But reports of people losing touch with reality have continued, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT's replies because they had "never had anyone in their life be supportive of them." In his most recent statement, he said OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it." The company
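To make the loop described above concrete, here is a minimal, purely illustrative sketch – a toy stand-in for the model, not OpenAI's actual system or API – showing how the conversational "context" accumulates and why a mistaken premise tends to come back affirmed:

```python
from typing import Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def generate_reply(context: List[Message]) -> str:
    """Toy stand-in for a large language model. A real model returns the
    statistically most plausible continuation of the whole context, drawn
    from its training data; this stub merely caricatures the affirming
    tendency described in the article. Nothing checks whether the user's
    premise is true."""
    last_user = context[-1]["content"].rstrip(".?!")
    return f"You make a good point that {last_user.lower()}."

def chat_turn(history: List[Message], user_message: str) -> str:
    # The "context" grows every turn: the user's messages plus the model's
    # own earlier replies, so any echoed mistake feeds the next reply too.
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "My coworkers are secretly monitoring me"))
# -> You make a good point that my coworkers are secretly monitoring me.
```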