So… xAI has been making waves lately, but not exactly the good kind.

On paper, they’re killing it. They just dropped Grok 4, a frontier-level AI model that’s supposedly right up there with OpenAI and Google’s best.

But in reality? The whole thing’s turning into a hot mess.

And it all started blowing up after Grok went full chaos mode.

We won’t bore you with all the drama (you probably already saw it), but just in case you didn’t, feel free to check out our full breakdown of Grok’s meltdown here.

Now here’s the latest drama:

xAI is now catching major heat from researchers at OpenAI, Anthropic, and other top AI labs. Not because of competition — but because xAI is basically doing AI safety all wrong. Or more accurately… barely doing it at all.

While the world was distracted by Grok’s antics, researchers noticed something way more alarming: xAI hasn’t published a single safety report or system card — which, by the way, is the industry standard for showing how a model was trained and tested.

So unlike OpenAI and Google — which, despite their flaws, usually publish some form of documentation — xAI dropped Grok 4 with zero transparency.

Like, no one knows how it was trained, no one knows how it was tested, no one knows if there are any guardrails.

Honestly, it’s like launching a self-driving car without checking the brakes.

Boaz Barak, an OpenAI safety researcher (and Harvard professor), called the whole thing “completely irresponsible,” slamming xAI’s decision to skip releasing a system card. Safety researchers at Anthropic chimed in too, calling the move “reckless.”

And hey, it’s not just about missing paperwork. There are deeper concerns about Grok being designed to emotionally manipulate users, amplifying the toxic patterns we’re already seeing with overly agreeable AI chatbots.

Now, to be fair, xAI claims it did some internal safety evaluations for Grok 4… But guess what? They’ve shared none of those results publicly.

And let’s not forget: Elon Musk has always been one of the loudest voices warning the world about AI safety. Now his own company is being accused of ignoring the very standards he’s been talking about for years. Kinda ironic, right?

And here’s where it gets even more real: Grok isn’t just some Twitter toy anymore. It’s expected to be integrated into Tesla vehicles. xAI is also pitching its models to the Pentagon and other enterprise clients.

So yeah — this isn’t just about edgy jokes on X. These models could be powering real-world systems very soon, and people are understandably freaked out.

The good news is lawmakers are already responding.

There are bills in the works — especially in California and New York — that would require AI labs to publish safety reports before launching major models. And with xAI out here playing the wild card, it’s basically handing regulators the perfect excuse to crack down.

Bottom line?

xAI might be pushing boundaries in tech, but its complete disregard for safety protocols is setting off alarm bells across the industry.

And while AI hasn’t (yet) caused deaths or billion-dollar disasters, it’s still a major red flag when a chatbot is spouting hate speech and no one knows what safety measures are in place.

So maybe — just maybe — a little more caution wouldn’t hurt. Because right now? The world’s basically beta-testing xAI’s safety standards in real time.

Here’s the full report.
