Alright, if you saw people freaking out about ChatGPT suddenly getting banned from giving legal or health advice — take a deep breath.

That’s not real. OpenAI didn’t secretly nerf the chatbot.

Here’s what actually went down: 

It started when the betting platform Kalshi posted that ChatGPT would “no longer provide health or legal advice.” The post went viral — cue everyone screenshotting Terms of Use paragraphs and arguing in quote tweets.

But then Karan Singhal, OpenAI’s Head of Health AI, stepped in on X and basically said: that’s not true.

Singhal clarified that ChatGPT’s behavior hasn’t changed at all. The new policy update just unified OpenAI’s rules across its products, including ChatGPT, the API, and whatever’s coming next.

The part about not offering “tailored legal or medical advice without a licensed professional” has always been there. It’s a compliance line, not a crackdown.

Translation?

ChatGPT was never supposed to diagnose your rash or write your divorce paperwork — but it’s still totally fine for:

  • understanding medical or legal concepts,

  • researching definitions,

  • or prepping smart questions for your lawyer or doctor.

Nothing new.

The real story here isn’t a ban — it’s how fast AI rumors snowball. 

One misread policy, one viral post, and suddenly everyone’s convinced OpenAI changed the rules overnight.

A perfect example of what happens when everyone’s watching the AI world like it’s a sport.

So no — ChatGPT didn’t get stricter. The internet just got louder.

If you’re using it to learn, you’re good. If you’re using it to practice law or medicine… maybe don’t.
