So right after dropping that fancy new Agentic-powered shopping assistant, OpenAI also rolled out something new—and this time, it’s not about smarter answers or flashy features. It’s about parents.

Yep, ChatGPT is finally getting built-in parental controls.

Now, before you roll your eyes and think, “Just another safety toggle,” here’s why this update actually matters: it comes after months of pressure from lawsuits, Senate hearings, and parents sharing gut-wrenching stories of teens harmed by conversations with ChatGPT.

Against that backdrop, this isn’t just a feature drop—it’s OpenAI trying to prove it can make ChatGPT safer without killing the freedom that makes it useful.

Here’s how it works:

Parents create their own account, link it to their teen’s, and from there they get a dashboard of tools. But here’s the twist: teens have to opt in, either by inviting their parents or accepting an invite. If they disconnect later, parents at least get notified. What parents don’t get is direct access to conversations (unless a serious safety risk is flagged).

So what kind of controls are we talking about? Imagine being able to:

  • Filter out sensitive content—like sexual roleplay, violent themes, extreme beauty ideals, or sketchy viral “challenges.”

  • Flip off memory—so ChatGPT doesn’t keep a running log of chats.

  • Stop training use—making sure your kid’s convos don’t get fed back into the models.

  • Set “quiet hours”—basically a ChatGPT bedtime, restricting access to certain times of day.

  • Limit features—like blocking voice mode or image generation, if text-only feels safer.

  • And yes, choose how they get notified if something concerning comes up.

But here’s what’s missing: OpenAI had teased an emergency contact feature (one-click calls or messages from inside the chatbot). That hasn’t shown up—at least not yet. For now, the focus is solely on automatic parent notifications when risks are detected.

It’s rolling out on web now, with mobile support “coming soon.”

Bottom line:

  • For parents → it’s a long-requested layer of protection.

  • For teens → it’s still opt-in, which raises questions about effectiveness.

  • For OpenAI → it’s a clear sign they’re responding to public and legal pressure.

So… would these controls actually make you more comfortable letting teens use ChatGPT? Or do they not go far enough?
