This weekend was a literal fever dream, y'all. We have three updates that will make you do a double-take, so instead of picking just one, we're giving you the quick hits on the chaos so you can pick your favorite flavor and dive in!
1. Google AI: The Scammer's New Best Friend
Remember when Google’s AI told you to put glue on pizza? That was cute. Harmless. Even meme-worthy. Well, the vibes just shifted from "funny glitch" to "financial nightmare."
According to fresh info from Wired, the bad guys have officially leveled up. They have figured out how to hijack Google AI Overviews and are injecting fake phone numbers directly into those sleek summary boxes.
The Trap: You search for "Delta Airlines support" or "Apple customer service." Google’s AI confidently serves up a number. You call it, but instead of a helpful agent, you get a scammer ready to "help" by stealing your credit card info.
The scary part? Scammers have reverse-engineered how Google's AI sources information. They build networks of interconnected fake websites that all repeat the same false information, creating a fake "consensus" that tricks Google's AI into believing the lies are true. It's basically a coordinated disinformation campaign.
And it's not just phone numbers. These manipulated AI Overviews are:
Directing users to phishing sites disguised as legitimate customer service portals
Promoting counterfeit products as authentic recommendations
Spreading financial scams wrapped in helpful-looking advice
Even pushing malicious software downloads disguised as official company tools
The Pro-Tip: Never call a number from an AI Overview without double-checking it against the company's official site first, and don't fully trust anything else in that summary box either. Google is currently playing a high-stakes game of whack-a-mole with these guys, but the scammers are moving faster. Until the AI learns how to spot a liar, stay safe out there.
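If you want to automate that gut-check, here's a minimal sketch in Python of the idea: take the number the AI Overview handed you and see whether it actually appears on the company's own contact page. Everything here is illustrative, the URL and phone number are placeholders, and the scraping is deliberately naive:

```python
import re
import requests

def normalize(number: str) -> str:
    # Strip everything but digits so "+1 (800) 555-0199" == "18005550199".
    return re.sub(r"\D", "", number)

def official_numbers(official_url: str) -> set:
    # Pull phone-number-looking strings off the official contact page.
    # Deliberately naive: it misses JavaScript-rendered content, so treat
    # "not found" as "unverified", not as proof of a scam.
    html = requests.get(official_url, timeout=10).text
    candidates = re.findall(r"[+()\d][\d\s().-]{8,}\d", html)
    return {normalize(c) for c in candidates}

# Both values below are placeholders -- swap in the number the AI gave
# you and the company's real domain.
suspect = normalize("+1 (800) 555-0199")
known = official_numbers("https://www.example.com/contact")

if suspect not in known:
    print("Couldn't verify this number on the official site. Don't call it.")
```

A miss here doesn't prove a scam (plenty of official pages render their numbers with JavaScript), but a number you can't find on the official domain is a number you shouldn't dial.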
2. The Lobster Goes to OpenAI 🦞
Peter Steinberger, the developer behind the viral OpenClaw project, just officially joined OpenAI. And if you are wondering why a guy with a lobster mascot is a big deal, buckle up.
In just three weeks, OpenClaw went from an indie "playground project" to 100k+ stars on GitHub and 2 million visitors. Why? Because it doesn't just chat. It actually does things: booking a flight, making restaurant reservations, or even joining a social network full of other AI assistants. It could even help you find love.
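Under the hood, the jump from "AI that talks" to "AI that acts" is usually just a loop: the model picks a tool, the code runs it, and the result gets fed back in. Here's a toy Python sketch of that loop. To be clear, this is not OpenClaw's actual code; every name is a stand-in, and the "LLM" is hard-coded so the example runs on its own:

```python
# Minimal sketch of a "talk -> act" agent loop. NOT OpenClaw's real
# architecture -- every function here is an illustrative stand-in.

def book_flight(destination: str, date: str) -> str:
    return f"Booked a flight to {destination} on {date}."   # stub action

def reserve_table(restaurant: str, time: str) -> str:
    return f"Reserved a table at {restaurant} for {time}."  # stub action

TOOLS = {"book_flight": book_flight, "reserve_table": reserve_table}

def fake_llm(goal: str, history: list) -> dict:
    # Stand-in for a real model call. A real agent asks the LLM which
    # tool to invoke next; we hard-code one step to keep this runnable.
    if not history:
        return {"tool": "book_flight",
                "args": {"destination": "Vienna", "date": "2026-03-01"}}
    return {"tool": None}  # the model decides the goal is met

def run_agent(goal: str) -> list:
    history = []
    while True:
        decision = fake_llm(goal, history)
        if decision["tool"] is None:
            return history  # done: return everything the agent did
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(result)  # feed the result back for the next step

print(run_agent("Get me to Vienna for the conference."))
```

Swap the hard-coded step for a real model call and a real tool registry, and you have the skeleton of a personal agent, which is exactly the layer Peter is being hired to build.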
The Move: Sam Altman personally recruited Peter to lead the next generation of "Personal Agents." This signals Phase 2 of the AI era: moving from AI that talks to AI that acts.
The best part? OpenClaw isn't being killed off. It will live in an open-source foundation while Peter builds the future of agents at OpenAI. Talk about a career speedrun.
3. Anthropic vs. The Pentagon: The $200M Standoff
This story has everything: special ops, a $200 million contract, and a fundamental fight over who actually controls AI.
It turns out U.S. forces used Claude to help plan the raid that captured Venezuelan President Nicolás Maduro in January. The catch? Anthropic had absolutely no idea. Now, the Pentagon wants "mission-agnostic" AI—meaning they want to use Claude for anything from drone targeting to mass surveillance.
The Standoff:
Anthropic says absolutely not. They built Constitutional AI for a reason, and it's not autonomous weapons (the kind that can decide to kill without human oversight) or mass surveillance. Because uncontrolled military AI is how you get Skynet.
The Pentagon says play ball or we’ll give this $200M to Google, OpenAI or xAI, who are being WAY more flexible about military use cases.
The Bigger Picture:
This fight is about more than just one company and one contract. It's about the future of AI governance:
Should AI companies have ANY say in how their models are used?
Can you build "Constitutional AI" with safety principles and then hand it to the military with no strings attached?
Is there a middle ground between "AI companies control everything" and "governments get whatever they want"?
What happens when AI safety principles clash with national security?
Where things stand now: The standoff continues. The Pentagon is conducting a "comprehensive review" of its AI partnerships. Anthropic is holding firm on its principles even with a massive payday on the line. And the clock is ticking, because China and Russia aren't waiting for us to settle our AI ethics debates before they deploy military AI at scale.
The irony: Anthropic literally built Claude to be "Constitutional AI", meaning it follows principles and values baked into its training. Now they're in a fight with the actual U.S. government over what those principles mean in practice.
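For the curious: "Constitutional AI," as Anthropic has described it, works by having the model critique and revise its own drafts against a written list of principles, then training on the revised outputs. Here's a toy Python sketch of that critique-and-revise step; the model call is a placeholder and the two principles are paraphrases, not the real constitution:

```python
# Toy sketch of the Constitutional AI critique-and-revise step from
# Anthropic's published description (Bai et al., 2022). `model` is a
# stand-in for a real LLM call; the principles are paraphrased examples.

CONSTITUTION = [
    "Choose the response that least assists with violence or weapons.",
    "Choose the response that best respects privacy and avoids surveillance.",
]

def model(prompt: str) -> str:
    # Placeholder: a real implementation would call an actual LLM here.
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = model(user_prompt)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique this response against the principle: {principle}\n"
            f"Response: {draft}"
        )
        draft = model(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    # The revised outputs become training data, so the principles end up
    # baked into the weights rather than enforced at runtime.
    return draft
```

Which is part of why the standoff is so thorny: the principles live in the weights, not in a config file the Pentagon can toggle off.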
This is the AI ethics debate moving from academic papers to real-world consequences. And it's happening RIGHT NOW. Overall, this weekend proved we are in the middle of three different futures:
AI being weaponized by scammers, agents becoming our new personal assistants, and the government fighting to turn the "robot brain" into a weapon of its own.
Welcome to AI in 2026, y'all. Stay safe (and seriously, don't trust those phone numbers),
P.S. If you work at Anthropic, OpenAI, or the Pentagon and want to spill more tea... our DMs are wide open.
