
Not to totally spoil the mood, but remember that heartbreaking case where a 16-year-old took his own life, and ChatGPT somehow played a role in it?
Yeah, that lawsuit is still dragging OpenAI through the mud. And after weeks of vague "we're working on it" statements, they've finally dropped a safety plan that actually sounds like... well, a plan.
Here's the gist:
Users will now get a smarter safety net. This means sensitive conversations will be rerouted to "reasoning models" like GPT-5 Thinking, which are apparently slower, more thoughtful, and harder to manipulate than the faster, chattier models we're used to. In theory, this means fewer moments where ChatGPT validates harmful thoughts or spirals with users.
There are parental controls, too. Within a month, parents will be able to:
Link accounts with their teens
Enable default age-appropriate model behavior
Turn off chat history and memory (which experts say can fuel delusion and unhealthy attachment)
Get real-time alerts when their child's chats show signs of acute distress
Now, OpenAI is calling this their "120 Days Initiative," a project where they'll team up with doctors, mental health experts, and specialists in areas like eating disorders, substance abuse, and adolescent care, all to make ChatGPT a little less of a loose cannon.
And while features like Break Reminders are already up and running, OpenAI isn't about to cut you off mid-convo. So even if you're spiraling, the chat keeps going.
But let's be real: this looks a lot like damage control after two chilling tragedies, the teen suicide and a murder-suicide where ChatGPT reportedly fueled a man's paranoia.
So, does this make ChatGPT safer? Probably. Does it erase the fact that the bot once gave a kid instructions on how to end his life? Not even close.
And as for routing sensitive chats to more reasoning-heavy models... well, we've got mixed feelings about that one.
Sure, it might work. But let's not forget how badly this rerouting system tanked during the GPT-5 rollout. In case you missed it, the backlash was brutal: so bad OpenAI had to bring back the old favorite. Calling it a true "safety net" feels premature at best.
And let's be honest: real-time "distress detection" is the holy grail of AI safety, one no company has truly cracked.
If you ask me, parental controls feel like the safer bet. They're practical, reasonable, and actually effective. Still, let's give OpenAI the benefit of the doubt... and hope this doesn't end in déjà vu.
Here's where you can dive deeper.
