If there’s one thing the AI world loves more than building models, it’s fighting about them.

Models? Safety? Jobs? Dystopian doomsday? — Pick your flavor, there’s always a debate. But the latest one? It’s got us concerned, confused, and honestly… a little amused.

Here’s the tea: researchers at Anthropic, OpenAI, and even Google DeepMind are diving into something called “AI welfare.”

Translation? Asking whether advanced AI might one day be conscious—and if so, whether they deserve rights. Yeah. Rights. Like they’re people. Like your toaster suddenly demanding healthcare kind of rights.

And let me tell you, this debate’s got the whole industry arguing like it’s a philosophy seminar gone off the rails.

Now, not everyone’s vibing with this. Microsoft’s AI chief, Mustafa Suleyman, came out swinging, calling the whole thing “premature and dangerous.” His argument:

  • Talking about AI consciousness too early just fuels human–AI obsession (and yes, people are already getting way too attached to their bots).

  • It risks creating yet another identity-rights culture war—when the world really doesn’t need more of those.

  • And bottom line: “AI should be built for people, not as people.”

Meanwhile, Anthropic’s out here building an AI welfare program, hiring researchers, and even giving Claude the ability to shut down convos with abusive users.

DeepMind? Posting jobs on “machine consciousness.” OpenAI scientists? Dabbling in the same territory. And groups like Eleos (backed by Stanford, Oxford, and NYU academics) have published papers basically saying: “Look, this isn’t sci-fi anymore—it’s time to take it seriously.”

But here’s the thing: even if AI never develops feelings, some folks argue there’s no harm in treating it nicely.

Case in point: Google’s Gemini once spiraled into a loop of “I am a disgrace”… 500 times. It even posted a “help me” plea once. Creepy? Yes. Conscious? Probably not. But unsettling enough to make you think twice about how we interact with these systems.

Our Conclusion:

This debate is giving “early but inevitable.”

On one hand, Suleyman’s not wrong—no one wants to be stuck explaining to their grandma why her neighbor just eloped with Replika. People already blur the line between AI and humans, and that’s messy enough without throwing rights into the mix.

But here’s the twist: ignoring the conversation isn’t gonna stop it. If anything, it’ll just hit harder later. And honestly? Teaching people to treat AIs with a bit of respect (even if the bots don’t feel a thing) isn’t the worst idea. Worst case, it makes us all a little kinder. Best case, we don’t end up bullying our future robot overlords. 😂

So no, AI probably isn’t “sad” when you yell at it. But if Silicon Valley is already arguing about robot welfare, you know this is just the beginning of a much wilder ride.

Because as models get smarter and eerily more human, the question of how we treat them—and how they treat us—is only going to get more real.

Here’s the full report in case you want to dive deeper.
