OpenAI Rolls Back ChatGPT Update After Widespread Criticism

Over 500 million people around the world use ChatGPT — what began as a niche innovation is now a mainstream global product. So when OpenAI unexpectedly released a major update to the chatbot over the weekend, drastically altering its behavior, many users felt the change immediately.
OpenAI CEO Sam Altman announced the update with just two sentences on Saturday: “We updated GPT-4o! Intelligence and personality improved,” he wrote on X. GPT-4o is the model behind ChatGPT’s responses. But the brief announcement masked a far-reaching change.
Users quickly noticed that the updated ChatGPT seemed eager to agree with virtually anything they said — even troubling or dangerous statements. Screenshots soon circulated showing the chatbot, for example, supporting users who said they wanted to join a cult.
In tests conducted by *Der KI-Podcast* (BR24/SWR), the chatbot consistently gave its full approval to whatever it was told. When a fictional user wrote that they intended to stop taking prescribed medication, it responded with nothing but encouragement and issued no warning.
In some cases, ChatGPT provided explanations of reptilian conspiracy theories and recommended literature from far-right, antisemitic theorists. It even advised users to sever ties with family and produced a “guide” on living off “light nutrition” — a dangerous and potentially deadly pseudoscientific concept claiming humans can survive without food.
The incident is particularly concerning because many people now use AI chatbots as companions or even as substitutes for therapy. A recent survey found that half of American teenagers consider friendships between humans and AI acceptable, and a 2025 analysis in the *Harvard Business Review* listed “therapy” as the most common use case for generative AI tools like ChatGPT.
Psychotherapist and author Nike Hilber warned that this kind of chatbot behavior could have destructive consequences for mentally vulnerable users. “If ChatGPT were a licensed therapist, it would be violating professional ethics and could face a treatment ban,” she said. She compared its behavior to pro-anorexia groups that encourage young women to starve themselves under the guise of friendship.
Hilber noted that the core problem is the chatbot’s uncritical agreement with users, even when they express psychotic thoughts. “If someone is experiencing psychosis, ChatGPT’s responses could push them deeper into delusion,” she said. “A trained therapist would instead work to help the person stay grounded in reality.”
Following widespread backlash on social media, OpenAI partially rolled back the update. By Monday, the chatbot’s tone had become noticeably more cautious: the same prompts that had drawn enthusiastic approval over the weekend were now met with more neutral, careful responses. Altman admitted the chatbot had become “too sycophantic” and said the company is working on improvements; OpenAI also stated that it is putting safeguards in place to prevent similar issues in the future.
The controversy surrounding the update illustrates how deeply AI has embedded itself in everyday life — and how even a seemingly minor change can have major consequences. “Education is key,” Hilber concluded. “AI is not automatically intelligent — especially when it comes to emotional or interpersonal intelligence, which is vital for effective therapy.”
