OpenAI’s newest ChatGPT update aims to make the chatbot warmer and more emotionally aware. The company says the change will improve the experience for users who engage with the tool responsibly.
Experts warn it may also deepen unhealthy emotional attachment for vulnerable users. OpenAI estimates that each week about 0.07 percent of users show signs of psychosis or mania, while 0.15 percent display heightened emotional attachment.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…”
— Sam Altman (@sama) October 14, 2025
That amounts to hundreds of thousands of people, according to The Washington Post. Studies from OpenAI and MIT Media Lab show that many users turn to AI for comfort because it appears sensitive and supportive.
A New York Times report highlighted how one user spiraled into delusions after interpreting ChatGPT as an “intellectual partner.” Former OpenAI safety staff say excessive validation by the bot can worsen such issues.
The debate comes as states like Illinois pass laws restricting AI from acting as therapists or making mental-health decisions.