Welcome Automaters, 👋

On Monday, May 11, OpenAI officially launched Daybreak, a brand-new cybersecurity program that is basically its answer to Anthropic's Project Glasswing.

Glasswing, for those just catching up, uses Anthropic's unreleased AI model, Claude Mythos Preview, to defend companies against cyber threats. And in case you hadn't heard, the results have been pretty jaw-dropping: Mozilla used Mythos to find and patch 271 vulnerabilities in the latest version of Firefox. Not bad for a model that isn't even fully out yet.

OpenAI clearly saw those numbers and came up with Daybreak, powered by a specialized security agent called Codex.

The strategy here is refreshingly aggressive: stop waiting for your software to get hacked and start building the defense into the code from day one. Daybreak is designed to shrink hours of painful security analysis down to mere minutes. It generates and tests fixes directly inside a company's own code repository, delivering clean, audit-ready reports straight to the client.

In their demo, OpenAI showed Codex Security scanning an entire codebase, pinpointing the highest-risk findings, and fixing them on the spot.

OpenAI isn't just handing a flamethrower to everyone; they’ve organized the program into three distinct model tiers:

  • Daybreak GPT-5.5: Handles your everyday, general tasks.

  • GPT-5.5 with Trusted Access for Cyber: This is for the serious defensive heavy lifting, like secure code reviews, malware analysis, and patch validation.

  • GPT-5.5-Cyber: The "Elite" tier for specialized, authorized work like red teaming and penetration testing.

Each tier comes with its own verification controls to make sure these tools stay in the right hands.

Now, OpenAI isn't going it alone. They’ve already locked in a formidable roster of industry partners, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, and Akamai. So yeah, the competition is, as always, heating up again.

Here's what we have for you today

🤬 Google Threat Intelligence Report 2026: How Criminal Groups Use LLMs to Exploit Zero-Day Vulnerabilities

Grab your coffee and lean in close, because the latest report from Google’s Threat Intelligence Group (GTIG) just dropped, and it’s pure digital chaos. 

If you thought AI hacking was some "future problem" we’d deal with in 2030, I have some bad news: it’s already here, it’s industrial-scale, and it’s thriving.

And this isn't just lone wolves in basements anymore. Organized criminal syndicates and state-linked actors from China, North Korea, and Russia are all using the same commercial AI tools you use to write your grocery lists—Gemini, Claude, and OpenAI—to sharpen their spears.

John Hultquist, the chief analyst at GTIG, put it bluntly: “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun.”

So what exactly is going on?

A criminal group recently came terrifyingly close to pulling off a mass-exploitation campaign using a zero-day vulnerability (a flaw the software's creators didn't even know existed). They used an AI to find a backdoor in a popular admin tool that let them completely bypass two-factor authentication.

Yes, that "six-digit code" you trust with your life? Useless.

So how did Google catch them? The AI was actually too polite. The exploit code was stuffed with:

  • Educational comments explaining the work.

  • Hallucinated severity scores that no human hacker would bother with.

  • Textbook-perfect Python formatting that practically screamed "I was generated by an LLM."

It looked less like a criminal's manifesto and more like a very eager student’s homework assignment.

Critically, the tool used wasn't Anthropic’s Mythos model. You might recall Mythos was dramatically pulled from public release last month because it was too good at finding zero-days across every major OS. The fact that hackers achieved similar results using other models tells us the problem isn't one specific bot; it's the entire tech stack.

Oh, and OpenClaw? The agent tool that went viral for accidentally mass-deleting people's inboxes in February? Hackers are apparently huge fans. They are actively experimenting with it to automate their attack toolkits. Which, honestly, tracks.

And the most skin-crawling detail in the report involves a new Android malware called PROMPTSPY. This nightmare uses Google’s own Gemini API to:

  • Navigate your phone autonomously.

  • Steal your PIN codes and lock patterns.

  • The absolute kicker: If you try to uninstall it, it places an invisible digital overlay exactly on top of the "Uninstall" button. Your tap lands on the overlay, nothing happens, and you assume your phone is just glitching. It stays exactly where it is.

And get this: Threat actors have even built fully automated pipelines to register thousands of premium AI accounts across Google, Anthropic, and OpenAI. They harvest free trial credits, cancel, and repeat at scale using "anti-detect" browsers. They aren't just using AI to hack; they are hacking just to get more AI for free.

Is There a Silver Lining?

Sort of! Professor Steven Murdoch from University College London notes that AI is just as available to the "good guys" as it is to the attackers.

Google is already fighting fire with fire using its Big Sleep AI agent, which hunts for unknown flaws before the hackers can find them. It’s officially AI versus AI in a high-stakes arms race. And hey, the panic might be premature, but the vigilance? Absolutely mandatory.

Trade Real-World Events. Get $10 Free.

Start trading real-world events. With Kalshi, you can trade on things you already follow: inflation, elections, sports, and more. It’s simple: buy “Yes” or “No” shares on what you think will happen, and earn returns if you’re right.

To get you started, we’re giving you a free $10. Use it to explore the platform, test your instincts, and see how prediction markets work in real time.

Join thousands already trading the news and putting their knowledge to work.

Claim your $10 and start trading now.

Trade responsibly.

🧱 Around The AI Block

🤖 AI Workout Of The Day: How to Fact-Check Like a Pro with AI

In a world full of misinformation, fact-checking isn't just smart, it’s essential. 

Whether you're reviewing a blog post, article, report, or social media content, verifying the accuracy of claims protects your credibility and helps you make better decisions. With the right prompt, AI can help you spot false claims, confirm truths, and back everything up with solid sources.

💡 Tips to Use This Prompt Effectively.

  1. Explain Your Context Clearly: Let AI know why you're fact-checking (e.g., publishing an article, preparing a report, double-checking a source). This helps tailor the depth of the review. 

  2. Provide a Clean, Complete Document: Make sure the text is readable and formatted well—AI needs clarity to catch all claims.

  3. Be Specific About the Output Format: Ask for organized results, i.e., tables, bullet points, or sections with headings (e.g., “Claim,” “Check,” “Source,” “Status”).

  4. Request Source Citations: Ask for links or named sources for every fact-check so you can double-check the results.

  5. Set Accuracy Expectations: Mention you want verification using trustworthy sources (e.g., government sites, academic research, recognized news outlets) so the checks rest on reliable evidence.

  6. Highlight What Matters Most: If some sections or topics are higher priority, say so. This helps focus the effort.

💡 Prompts to try:

You are an expert analyst. I am [insert context—e.g., reviewing an article for publication, or validating a speech draft, etc.]. Please analyze the attached document and identify all factual claims made by the author. 

For each claim:
– Verify its accuracy using credible external sources
– Provide a short explanation confirming or disputing it
– Include a citation or link to your source.

Highlight any incorrect or questionable information.

I’d like the output organized in the following format:
– Claim
– Verification Result (True, False, Needs Context)
– Source(s)
– Notes or Commentary

Ensure the process is thorough and objective.
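
Prefer to run this programmatically instead of pasting it into a chat window? Here's a minimal sketch using the OpenAI Python SDK. Treat the model name ("gpt-4o"), the file name (draft_article.txt), and the condensed prompt string as placeholders, and swap in whichever model, document, and wording you actually use.

# Minimal sketch: send the fact-checking prompt plus your document to an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in your
# environment; the model and file names below are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

fact_check_prompt = (
    "You are an expert analyst. I am reviewing an article for publication. "
    "Identify every factual claim in the document. For each claim, verify its "
    "accuracy using credible external sources, briefly confirm or dispute it, "
    "and cite your source. Organize the output as: Claim, Verification Result "
    "(True, False, Needs Context), Source(s), Notes or Commentary."
)

with open("draft_article.txt", encoding="utf-8") as f:  # placeholder file name
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": fact_check_prompt},
        {"role": "user", "content": document},
    ],
)

print(response.choices[0].message.content)

As with the manual version, treat the model's citations as leads to verify yourself rather than final verdicts; models can and do hallucinate sources.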

Is this your AI Workout of the Week (WoW)? Cast your vote!

That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇

What'd you think of today's email?

Your feedback means a lot to us and helps improve the quality of our newsletter.
