Remember when we thought AI tools were just for making our jobs easier? Well, turns out the bad guys got the memo too. Their jobs just happen to involve breaking into your stuff.

Google’s Threat Intelligence Group just dropped a report showing that government-backed hackers are weaponizing Gemini AI to turbocharge everything from target research to actual cyberattacks. We aren't talking about script kiddies in basements anymore. These are nation-state actors from North Korea, China, and Iran using AI like a digital Swiss Army knife for cybercrime.

Here is where it gets wild:

North Korea’s Lazarus Group (tracked as UNC2970) used Gemini to map out specific technical job roles and salary info at major defense companies. Why? Because they run fake recruiting scams where they pretend to be HR managers offering "dream gigs" at aerospace, defense, and energy firms.

They are using AI to craft perfectly tailored phishing personas, turning reconnaissance that used to take weeks into a quick afternoon research session. It is basically "Phishing for Dummies: AI Edition."

But North Korea isn't alone in this arms race. Chinese groups like APT31 and APT41, plus crews from Iran, are also using Gemini for some "creative" tasks:

  • APT31: Automates vulnerability analysis and generates targeted testing plans by pretending to be a legit security researcher.

  • APT41: Uses the AI to troubleshoot and debug exploit code.

  • UNC795: Troubleshoots code, conducts research, and develops web shells and scanners for PHP web servers.

  • APT42 (Iran): Creates fake personas for targeted social engineering and develops custom tools, including a maps scraper, a SIM card management system in Rust, and a WinRAR exploit PoC.

Now, here is where your jaw should actually drop. Google detected a new malware called HONESTCUE. This thing actually sends prompts to Gemini’s API during an active attack.

Read that again: The malware calls the AI mid-heist to receive fresh C# source code as a response. It then executes that code directly in your computer's memory, leaving zero footprints on your hard drive. It is nearly invisible to traditional security tools because it literally "invents" its next move on the fly.
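The report doesn't publish HONESTCUE's internals, but the description suggests one defensive angle: malware that asks an LLM for its next move has to phone an AI API mid-attack, and that outbound call is itself a signal. Here's a minimal sketch of that idea — the hostnames, allowlist, and process names are illustrative assumptions, not details from Google's report:

```python
# Hypothetical heuristic: flag processes contacting generative-AI API
# endpoints when they have no legitimate reason to. Endpoint and
# allowlist entries below are illustrative examples, not a real policy.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

# Processes we'd expect to talk to LLM APIs (assumed for this sketch).
ALLOWED_PROCESSES = {"chrome.exe", "code.exe", "python.exe"}

def flag_llm_callers(connections):
    """Return (process, host) pairs where an unexpected process
    contacts a known LLM API endpoint.

    `connections` is an iterable of (process_name, remote_host) tuples,
    e.g. pulled from EDR telemetry or firewall logs.
    """
    return [
        (proc, host)
        for proc, host in connections
        if host in LLM_API_HOSTS and proc.lower() not in ALLOWED_PROCESSES
    ]
```

It's a blunt instrument — attackers can proxy their API traffic — but it illustrates why "the payload never touches disk" doesn't mean "the attack leaves no trace."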

Even worse, the barrier to entry for these attacks has officially hit the floor. Google also found:

  • COINBAIT: An AI-generated phishing kit that masquerades as a crypto exchange to steal your credentials.

  • Model Extraction: Hackers are even trying to "steal" Gemini’s brain by sending 100,000+ prompts to map out its logic and replicate it for their own evil versions.

Your email spam filter was trained to spot bad grammar and typos? Cool story. But AI-generated phishing doesn't have typos anymore.

The good news: Google Cloud is already deploying AI-powered countermeasures to fight back, but make no mistake, this is an evolutionary arms race.

The companies that invest in AI-aware security now are the ones who will still be standing. Those that don’t? They are bringing a butter knife to a fight where the other side has smart weapons.

Here’s where you can find out more.
