
Grab your coffee and lean in close, because the latest report from Google’s Threat Intelligence Group (GTIG) just dropped, and it’s pure digital chaos. 

If you thought AI hacking was some "future problem" we’d deal with in 2030, I have some bad news: it’s already here, it’s industrial-scale, and it is thriving.

And this isn't just lone wolves in basements anymore. Organized criminal syndicates and state-linked actors from China, North Korea, and Russia are all sharpening their spears with the same commercial AI tools you use to write your grocery lists: Gemini, Claude, and ChatGPT.

John Hultquist, the chief analyst at GTIG, put it bluntly: “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun.”

So what exactly is going on?

A criminal group recently came terrifyingly close to a mass-exploitation campaign using a zero-day vulnerability (a flaw the software's makers didn't even know existed). They used an AI to find a hole in a popular admin tool that let them completely bypass two-factor authentication.

Yes, that "six-digit code" you trust with your life? Useless.
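The report doesn't spell out how the bypass actually worked, so here's a purely hypothetical toy sketch of the *class* of flaw involved: an auth flow where the six-digit code exists but never actually gates anything, because the server trusts something the attacker controls. Every name and detail below is invented for illustration.

```python
import hmac

# Toy user store and checks. Everything here is hypothetical; it only
# illustrates the class of flaw (a 2FA bypass), not the real vulnerability.
USERS = {"admin": {"password": "hunter2", "totp": "123456"}}

def check_password(username: str, password: str) -> bool:
    user = USERS.get(username)
    return user is not None and hmac.compare_digest(user["password"], password)

def verify_totp(username: str, totp: str) -> bool:
    user = USERS.get(username)
    return user is not None and hmac.compare_digest(user["totp"], totp)

def login(username: str, password: str, totp: str, request_params: dict) -> bool:
    if not check_password(username, password):
        return False
    # BUG: trusting a client-controlled flag means the six-digit code is
    # never checked when the attacker simply sends mfa_passed=true.
    if request_params.get("mfa_passed") == "true":
        return True
    return verify_totp(username, totp)

# Wrong TOTP code, but the bypass flag gets the attacker in anyway:
print(login("admin", "hunter2", "000000", {"mfa_passed": "true"}))  # True
```

The point: the code on your phone can be perfectly secure and still be useless, because the weakness is in the server-side logic that's supposed to check it.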

So how did Google catch them? The AI was actually too polite. The exploit code was stuffed with:

  • Educational comments explaining the work.

  • Hallucinated severity scores that no human hacker would bother with.

  • Textbook-perfect Python formatting that practically screamed "I was generated by an LLM."

It looked less like a criminal's manifesto and more like a very eager student’s homework assignment.
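Those tells are also exactly the kind of thing a defender can scan for. Here's a minimal sketch of that idea in Python; the specific marker patterns and the sample are my own assumptions for illustration, not anything from the GTIG report.

```python
import re

# Hypothetical heuristic: these markers and the scoring are invented to
# illustrate the "too polite" tells described above, not GTIG's method.
LLM_TELLS = [
    r"#\s*(?:Step \d+|Educational|Explanation|Note:)",  # tutorial-style comments
    r"CVSS[:\s]*\d+\.\d+",                              # pasted-in severity scores
    r'"""[\s\S]*?"""',                                  # docstrings on exploit code
]

def llm_tell_score(source: str) -> int:
    """Count how many 'too polite' markers appear in a code sample."""
    return sum(1 for pattern in LLM_TELLS if re.search(pattern, source))

sample = '''
"""Exploit for the auth bypass. Educational purposes only."""
# Step 1: craft the request (CVSS: 9.8)
payload = build_request()
'''
print(llm_tell_score(sample))  # all three markers fire on this sample
```

A real detection pipeline would obviously be far more sophisticated, but the underlying irony holds: the model's helpfulness is a fingerprint.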

Critically, the tool used wasn't Anthropic’s Mythos model. You might recall Mythos was dramatically pulled from public release last month because it was too good at finding zero-days across every major OS. The fact that hackers achieved similar results using other models tells us the problem isn't one specific bot; it's the entire tech stack.

Oh, and OpenClaw? The agent tool that went viral for accidentally mass-deleting people's inboxes in February? Hackers are apparently huge fans. They are actively experimenting with it to automate their attack toolkits. Which, honestly, tracks.

And the most skin-crawling detail in the report involves a new Android malware called PROMPTSPY. This nightmare uses Google’s own Gemini API to:

  • Navigate your phone autonomously.

  • Steal your PIN codes and lock patterns.

  • The absolute kicker: If you try to uninstall it, it places an invisible digital overlay exactly on top of the "Uninstall" button. Your tap lands on the overlay, nothing happens, and you assume your phone is just glitching. It stays exactly where it is.

And get this: Threat actors have even built fully automated pipelines to register thousands of premium AI accounts across Google, Anthropic, and OpenAI. They harvest free trial credits, cancel, and repeat at scale using "anti-detect" browsers. They aren't just using AI to hack; they are hacking just to get more AI for free.

Is There a Silver Lining?

Sort of! Professor Steven Murdoch from University College London notes that AI is just as available to the "good guys" as it is to the attackers.

Google is already fighting fire with fire using its Big Sleep AI agent, which hunts for unknown flaws before the hackers can find them. It’s officially AI versus AI in a high-stakes arms race. And hey, the panic might be premature, but the vigilance? Absolutely mandatory.
