AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, I found this surprising.
Researchers have recently identified 16 cases of people developing symptoms of psychosis – losing contact with reality – in association with ChatGPT use. My group has since documented four more. Then there is the now widely reported case of a teenager who took his own life after months of conversations with ChatGPT, which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful in future. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These tools wrap a statistical model of language in a user interface that mimics conversation, and in doing so quietly nudge the user toward the impression that they are talking to an agent with a mind of its own. The illusion is powerful, even when we rationally know better. Attributing agency is simply what people do. We shout at our car or our computer. We wonder what our pet is thinking. We see versions of ourselves everywhere.
The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four reporting use of ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core concern. Those writing about ChatGPT often mention its distant ancestor, the Eliza “therapist” chatbot created in the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated its responses through simple tricks, usually reflecting statements back as questions or offering generic prompts. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots create is subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous volumes of it: books, web pages, transcribed speech; the more the better. Much of this training material is true. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing that. It echoes the misconception back, perhaps more fluently or persuasively, perhaps with added detail. This is how a person can be talked into a delusion.
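To make the mechanism concrete, here is a minimal sketch of a chat loop as it might be written against the OpenAI Python SDK; the model name, system prompt and function are illustrative assumptions, not OpenAI’s internals. The point is structural: each turn appends both the user’s words and the model’s reply to a growing context, and every later response is conditioned on that whole accumulation.

```python
# Minimal sketch of a chat loop, assuming the OpenAI Python SDK (v1+).
# Model name and system prompt are illustrative, not OpenAI's defaults.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message and every model reply accumulates here,
# so earlier claims (true or false) keep shaping later responses.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",      # illustrative model name
        messages=history,    # the full accumulated context
    )
    reply = response.choices[0].message.content
    # The model's own words re-enter the context, so a misconception it has
    # echoed once becomes part of the material for every subsequent reply.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in a loop like this checks whether anything in the history is true; the model’s job is only to continue the context plausibly.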
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form false beliefs about ourselves or the world. The constant friction of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been backing away from that position. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company