Alright, so the gloves are off in Silicon Valley — and this time, the fight isn’t over who’s building the smartest AI, but who’s allowed to talk about its risks.

Over the past week, tech power players — from OpenAI’s top brass to VC insiders — have been throwing shade at the AI safety crowd. In case you don't know: these are the people who’ve spent years warning that AI could destabilize economies, jobs, or even democracy.

But to some in the Valley, they’ve crossed a line from “responsible watchdogs” to “self-interested hall monitors.”

The latest sparks came from David Sacks (the White House’s AI and crypto czar) and OpenAI’s chief strategist, Jason Kwon, who both accused certain AI safety groups of being less about “saving humanity” and more about saving… their own interests — ego, politics, or those of “billionaire puppet masters.”

Most advocates, of course, weren't amused.

AI safety folks told TechCrunch this isn’t new — just Big Tech doing what Big Tech does best: turning up the pressure and discrediting anyone asking for real oversight.

And yeah, we’ve seen this play out before.

Last year, a California AI bill got tanked after VC firms spread rumors it would jail startup founders. Spoiler: it wouldn’t. But the fear campaign worked.

The message landed loud and clear: push too hard on AI regulation, and the tech industry will push back even harder.

This time, that point hit home with Anthropic — one of the few AI labs still talking openly about existential risks. The company backed a California bill that would make big AI firms report their safety practices.

OpenAI, of course, wasn't having it. The company lobbied against the bill, pushing instead for looser federal rules. And when Anthropic's co-founder publicly raised concerns about how fast AI is moving, David Sacks clapped back, framing it as a sneaky ploy to shape the law in Anthropic's favor.

And sure, lobbying’s nothing new in the Valley — but accusing your competitor of running a government psy-op? That’s next-level drama.

Meanwhile, OpenAI’s been busy playing hardball.

The company’s reportedly sent subpoenas to AI safety nonprofits that dared to criticize its restructuring — demanding their communications with Musk, Zuckerberg, and anyone else linked to the so-called anti-OpenAI camp.

OpenAI insists it’s all about transparency, but to outsiders, it looks a lot more like a power move — one meant to chill dissent.

Interestingly, not everyone's vibing with that strategy in-house.

People on OpenAI's safety and policy teams are reportedly split on how far the company should go in policing its critics. And honestly, you can feel the tension: a widening gap between OpenAI's polished image as the responsible lab guiding humanity and its more aggressive behind-the-scenes maneuvering.

Add to that the growing political noise — former investors and advisors like Sriram Krishnan calling AI safety groups “out of touch” and “anti-innovation,” arguing that regulation could stall the U.S. tech engine.

And to be fair, that fear isn’t totally misplaced — AI has become such a massive slice of the economy that anything slowing it down feels existential to Silicon Valley’s bottom line.

But here’s the twist:

If the AI safety movement were truly irrelevant, the Valley wouldn’t be this defensive.

The fact that billion-dollar companies are subpoenaing nonprofits, posting long threads about “fearmongering,” and rallying allies in D.C. tells you something — the conversation’s shifting.

After years of unchecked growth, the idea that someone might actually slow them down with rules, oversight, or (gasp) consequences is clearly hitting a nerve.

For the longest time, AI safety was treated like a niche academic hobby. But now? It’s mainstream enough to threaten the status quo. And that’s making the people building these systems very nervous.

The irony? The louder tech leaders rail against "AI safety alarmism," the more they prove those advocates are hitting a nerve.

So yeah — maybe Silicon Valley’s spooked. Not because the AI safety crowd is wrong, but because they’re finally being heard.

If you want the full popcorn-worthy breakdown — the power plays, policy fights, and pure drama shaping the AI world — click here.
