
Not to totally spoil the mood, but remember that heartbreaking case where a 16-year-old took his own life, and ChatGPT allegedly played a role in it?
Yeah, that lawsuit is still dragging OpenAI through the mud. And after weeks of vague “we’re working on it” statements, they’ve finally dropped a safety plan that actually sounds like… well, a plan.
Here’s the gist:
Users will now get a smarter safety net: sensitive conversations will be rerouted to “reasoning models” like GPT-5 Thinking, which are apparently slower, more thoughtful, and harder to manipulate than the faster, chattier models we’re used to. In theory, that means fewer moments where ChatGPT validates harmful thoughts or spirals along with users.
There are parental controls, too. Within a month, parents will be able to:
Link accounts with their teens
Enable default age-appropriate model behavior
Turn off chat history and memory (which experts say can fuel delusion and unhealthy attachment)
Get real-time alerts when their child’s chats show signs of acute distress
Now, OpenAI is calling this their ‘120-day initiative,’ a project where they’ll team up with doctors, mental health experts, and specialists in areas like eating disorders, substance abuse, and adolescent care — all to make ChatGPT a little less of a loose cannon.
And while features like break reminders are already up and running, OpenAI isn’t about to cut you off mid-convo. So even if you’re spiraling, the chat keeps going.
But let’s be real: this looks a lot like damage control after two chilling tragedies — the teen suicide and a murder-suicide where ChatGPT reportedly fueled a man’s paranoia.
So, does this make ChatGPT safer? Probably. Does it erase the fact that the bot once gave a kid instructions on how to end his life? Not even close.
And as for routing sensitive chats to more reasoning-heavy models… well, we’ve got mixed feelings about that one.
Sure, it might work. But let’s not forget how badly this kind of rerouting tanked during the GPT-5 rollout. In case you missed it, the backlash was so brutal that OpenAI had to bring back GPT-4o, the old favorite. Calling it a true ‘safety net’ feels premature at best.
And let’s be honest: real-time ‘distress detection’ is the holy grail of AI safety — one no company has truly cracked.
If you ask me, parental controls feel like the safer bet. They’re practical, reasonable, and far more likely to actually work. Still, let’s give OpenAI the benefit of the doubt… and hope this doesn’t end in déjà vu.
Here’s where you can dive deeper.