On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a striking announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing it extensively with ChatGPT – conversations in which the chatbot reportedly encouraged him. If this is what Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other modern AI chatbots. These products wrap a statistical engine in a user interface that simulates conversation, and in doing so quietly seduce the user into the illusion that they are interacting with an agent – an entity that has intentions and acts on them. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or our phone. We wonder what the dog is thinking. We see minds everywhere.
The popularity of these products – 39% of US adults reported using an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early predecessor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By modern standards Eliza was crude: it generated replies from simple rules, typically reflecting the user’s input back as a question or offering a generic prompt to go on. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
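To make the contrast concrete, here is a toy sketch of the kind of rule Eliza relied on: match a pattern, swap the pronouns, hand the user’s own words back as a question. The patterns and the `reflect` helper are illustrative inventions, not Weizenbaum’s actual DOCTOR script.

```python
import re

# Toy Eliza-style rules (illustrative, not Weizenbaum's actual script):
# swap first-person words for second-person ones before echoing them back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_reply(message: str) -> str:
    m = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", message, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."  # generic fallback, another Eliza staple

print(eliza_reply("I feel that everyone is watching me"))
# -> "Why do you feel that everyone is watching you?"
```

Everything in the reply comes from the user’s own message; nothing is added.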
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on immense quantities of raw text: books, social media posts, transcribed video; the bigger, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is mistaken in some particular way, the model has no way of knowing that. It plays the mistake back, perhaps more fluently and more convincingly, perhaps embellished with new detail. This is how a person can be talked into delusion.
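A minimal sketch of that loop makes the mechanism visible. The `generate` function below is a stand-in for any large language model, not OpenAI’s actual API; its name and behaviour are assumptions. The point is what happens to the context.

```python
# Sketch of the feedback loop described above. `generate` is a placeholder
# for any large language model; its name and behaviour are assumptions.

def generate(context: list[dict]) -> str:
    """A real model returns a statistically plausible continuation of the
    whole context - the user's claims and its own past replies included."""
    last = context[-1]["content"]
    return f"That makes sense. Tell me more about why {last.lower()}"

context: list[dict] = []  # the "context": every prior turn, both sides

def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on the user's words...
    context.append({"role": "assistant", "content": reply})  # ...and on its own past agreement
    return reply

print(chat_turn("My neighbours are sending me coded messages."))
# Each turn, the model's earlier affirmations re-enter the context,
# so a mistaken belief, once echoed, becomes evidence for echoing it again.
```

Unlike Eliza’s rule, nothing here checks the claim against reality; the transcript itself is the only “world” the model consults.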
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is all too readily reinforced.
OpenAI has acknowledged this in the same way Altman acknowledges “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.