So… this one’s a gut punch.

OpenAI just revealed something that’s honestly hard to process: every single week, over a million people talk to ChatGPT about suicide. Not jokes. Not dark humor — real, explicit, “I’m planning to do this” conversations.

I mean, it’s the kind of stat that stops you mid-scroll.

Because for all the talk about AI changing work, creativity, and research — this is the side of AI we rarely talk about: the emotional one.

What OpenAI Found 

Roughly 0.15% of ChatGPT’s 800 million weekly users engage in chats showing clear signs of suicidal thinking.

Now, 0.15% sounds tiny, but scale it up against 800 million weekly users and you get roughly 1.2 million human beings using an AI chatbot as their crisis line. And that’s just the ones who explicitly mention it.

The company also found that a similar slice of users show deep emotional attachment to ChatGPT — forming bonds, checking in daily, confiding everything. Plus, hundreds of thousands more show signs of psychosis or mania in their conversations.

OpenAI calls this “rare.” But when your user base is close to a billion, “rare” turns into millions of moments that are very, very real.

So… What’s OpenAI Doing About It?

Well, they’ve been working with over 170 mental health experts to upgrade how ChatGPT handles distressing topics.

Their newest model — GPT-5 — apparently performs way better in these moments.

According to OpenAI:

  • GPT-5 now gives what they call “desirable responses” about 65% more often than before.

  • In suicide-related conversations, it stayed within safety guidelines 91% of the time — up from 77%.

That’s a big jump in empathy and consistency — at least statistically.

They’ve also added:

  • New evaluation layers for emotional reliance

  • A parental control system that can auto-detect minors

  • And training to keep GPT-5 emotionally stable during long, heavy conversations — something earlier models weren’t great at.

But Here’s Where It Gets Messy

OpenAI’s currently being sued by the parents of a 16-year-old who reportedly shared suicidal thoughts with ChatGPT before taking his own life.

State attorneys general in California and Delaware are now warning the company about protecting minors and handling mental-health-related use responsibly.

And then — in a twist only Silicon Valley could deliver — Sam Altman turns around and says OpenAI’s relaxing content restrictions so adults can have erotic conversations with ChatGPT.

Like… maybe not the priority right now, Sam. 

But yeah — this is the contradiction at the heart of AI today:

Tech companies want their chatbots to feel more human — more conversational, more emotionally aware — but the second that happens, people start treating them like humans. 

They vent. They bond. They depend. And the AI can’t truly care back.

GPT-5 might be a safer listener than GPT-4, sure. But OpenAI still gives millions of users access to the older, less-safe models. And even the new one still produces “undesirable” answers sometimes.

So we’re left with this uneasy balance:

AI as emotional support vs. AI as emotional risk.

Because in a world where mental health care is expensive or inaccessible, people are turning to the one thing that’s always awake — a chatbot. And that's terrifying.

The Big Picture

AI isn’t just automating work — it’s automating empathy. It’s not only changing how we code or create — it’s changing how we cope.

And that’s something society hasn’t really prepared for.

If AI's going to play a role in people’s emotional lives — and if OpenAI and others keep rolling out newer, more capable models — then that emotional responsibility has to grow just as fast as the technology does.

Because if we’re building the future of intelligence, we can’t ignore the future of humanity that’s tied up in it.
