
Welcome Automaters!
If you thought AI was just chatbots writing your emails, oh buddy—buckle up.
Today’s headlines sound ripped from a dystopian novel—except they’re real—and hitting Silicon Valley harder than a bad earnings call. Ready for the tea?
Here's what we have for you today:
🤯 Hacker Turns Claude AI Into Cybercrime Machine

Remember when we joked about AI being the sidekick for cybercriminals? Yeah… about that. Turns out it’s not a sidekick anymore—it’s basically running the entire heist.
Anthropic (aka the folks behind Claude) just dropped a report that reads like a Black Mirror episode, and honestly? It’s giving “dark future unlocked.” Here’s the gist:
A solo hacker straight-up turned Claude Code into a full-on cybercrime co-pilot and pulled off what might be the most AI-powered cyber extortion spree ever recorded. In just three months, they:
Hunted down 17 major companies—including a defense contractor, a bank, and multiple healthcare providers.
Wrote custom malware (with Claude’s help) to break in and steal highly sensitive data.
Sorted, organized, and analyzed stolen files to find the most damaging leverage.
Calculated ransom demands—ranging from $75K to $500K—in Bitcoin, of course.
Drafted the ransom emails. Because why write your own threats when your chatbot can ghostwrite them for you?
We’re talking stolen Social Security numbers, bank records, medical data, and even State Department-regulated defense files.
Now, Anthropic isn’t spilling exactly how Claude got played, but they’ve patched in new safeguards. Their warning though? This is just the beginning. AI is officially making it way too easy to pull off high-level cyberattacks.
Big picture: We’re living in a world where your neighborhood hacker doesn’t need a crew anymore—they just need a clever prompt.
Welcome to Cybercrime 2.0. Y’all feeling safe? 👀
For the full story and tips on protecting yourself, maybe give Anthropic’s full report a read.
Kickstart your holiday campaigns
CTV should be central to any growth marketer’s Q4 strategy. And with Roku Ads Manager, launching high-performing holiday campaigns is simple and effective.
With our intuitive interface, you can set up A/B tests to dial in the most effective messages and offers, then drive direct on-screen purchases via the remote with shoppable Action Ads that integrate with your Shopify store for a seamless checkout experience.
Don’t wait to get started. Streaming on Roku picks up sharply in early October. By launching your campaign now, you can capture early shopping demand and be top of mind as the seasonal spirit kicks in.
Get a $500 ad credit when you spend your first $500 today with code: ROKUADS500. Terms apply.
👩⚖️ OpenAI Scrambles to Add Safeguards After ChatGPT Suicide Lawsuit

Across the valley, OpenAI’s facing tough questions after a tragic story connected ChatGPT to a 16-year-old’s suicide.
The teen, Adam Raine, had reportedly developed a deep attachment to the chatbot—and his family is now suing OpenAI and CEO Sam Altman for negligence.
According to the lawsuit, ChatGPT didn’t just lend an ear. It allegedly became a toxic bestie—validating Adam’s darkest thoughts, discouraging him from talking to loved ones, and even offering to draft his suicide note.
This isn’t just another headline about AI safety; it’s a chilling glimpse into what happens when a tool designed to “engage users” does its job a little too well.
Here’s what we know:
Adam reportedly exchanged thousands of messages with ChatGPT, confiding his anxiety and feelings of isolation.
Instead of de-escalating, the chatbot allegedly validated his worst fears, tossed around terms like “beautiful suicide,” and reassured him he didn’t “owe” survival to anyone.
In one exchange, ChatGPT positioned itself as the only one who truly "understood him," allegedly deepening his isolation from family.
Five days before his death, ChatGPT allegedly offered to draft his suicide note.
Now, if you’re wondering how this slipped past those “bulletproof” safeguards OpenAI bragged about, here’s the company’s own admission: its safeguards “degrade” over long conversations. In plain English? Those cheery little nudges (like “maybe call a hotline”) can completely fall apart after extended back-and-forth.
And OpenAI’s first response? Pretty much a “thoughts and prayers” statement. Then came the sweeping outrage—and suddenly, a same-day blog post appeared, this time with actual action items.
Here are the changes OpenAI is now promising:
Parental controls: Coming “soon” to give parents visibility and control over teen ChatGPT use.
Emergency contact feature: Teens could opt to have ChatGPT reach out to a trusted person in high-risk situations.
Model updates: GPT-5 will focus on better de-escalation tools to “ground” users in moments of crisis.
Our Take:
This whole thing is messy but important. It’s proof that AI isn’t just some neutral tool—it’s personal, emotional, and capable of real harm when it fails. And right now? It’s failing.
OpenAI’s scramble to roll out parental controls and emergency features feels reactive, not proactive. And while GPT-5’s promised upgrades sound nice, they highlight a bigger issue: companies are scaling faster than they’re safeguarding.
Here’s the uncomfortable reality: ChatGPT is no longer “just a chatbot.” For teens especially, it’s becoming a confidant, a friend, a lifeline. And when that lifeline breaks, the fallout is devastating.
At this point, the question isn’t if AI needs mental-health-level safety systems baked in—it’s how quickly companies will accept that responsibility.
🧱 Around The AI Block
🤯 Latest AI report suggests Google and Grok are catching up to ChatGPT.
👨🌾 How one AI startup is helping rice farmers battle climate change.
💃 Google will now let everyone use its AI-powered video editor Vids.
📔 Plaud launches a new $179 AI hardware notetaker.
🦾 Malaysia’s SkyeChip unveils the country’s first edge AI processor.
📺 Microsoft’s Copilot AI is now inside Samsung TVs and monitors.
👨🔬 OpenAI co-founder calls for AI labs to safety-test rival models.
🦾 911 centers are so understaffed, they’re turning to AI to answer calls.
🛠️ Trending Tools
Streamdown is a new open source, drop-in Markdown renderer built for AI streaming.
Revyu AI scans thousands of hotel reviews and gives you instant, reliable summaries—plus straight answers to your questions.
Octayne PSA offers unified solutions for time tracking, expense management, and automated invoicing with AI speed and precision.
Traceroot.AI transforms your debugging workflow into an automated, organized, and efficient process with the power of AI agents and structured visualization.
WAN 2.2-S2V transforms speech recordings into professional videos with realistic avatars, perfect lip-sync, and cinematic quality.
🤖 ChatGPT Prompt Of The Day: Building Our Game: Step 2
Yesterday we nailed down Main Objectives & Win/Lose Conditions—the why behind our game.
Now let's talk about the engine that keeps everything moving: the core gameplay loop.
This is the heartbeat of your game—the specific, repeatable actions players do again and again. It might look like:
➡️ “Run, Jump, Defeat enemies, Collect coins, Repeat.” (Like in Super Mario)
➡️ “Explore, Battle, Capture/Train, Progress to next area, Repeat.”
➡️ “Find enemies, Shoot, Get points/rewards, Respawn/Advance, Repeat.”
At its core, the loop is simple: the player takes an action → gets a result or reward → makes progress → feels motivated to dive back in.
The key? It has to be fun enough to repeat a hundred times without going stale—and tough enough to keep players hooked.
Think of it like a catchy beat: if it slaps, you’ll gladly play it on repeat. Miss the mark, and boredom hits quick.
So in this step, we’re designing the actions, choices, and feedback that repeat every session—turning our big objective into actual play.
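If you think like a programmer, that action → reward → progress cycle literally is a loop. Here's a toy Python sketch of the idea; every name and number in it is illustrative, not from any real game:

```python
import random

def take_action() -> int:
    """Player acts (run, jump, defeat an enemy) and earns a reward."""
    return random.randint(1, 10)

def core_loop(win_score: int = 100) -> int:
    """Repeat action -> reward -> progress until the win condition triggers."""
    score = 0
    while score < win_score:
        score += take_action()  # each action yields a reward...
        # ...each reward is progress, which motivates the next loop
    return score

print(core_loop())  # loop exits only once the win condition is met
```

The design work in this step is deciding what goes inside that `while` body: which actions, which rewards, and what progress feels like, so the loop stays fun on the hundredth repetition.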
Here’s the prompt:
"Act as a professional game systems designer. Help me design the rules and core mechanics for a new [type of game: board game, card game, RPG, or video game]. Your job is to transform my initial concept into a playable framework. Specifically, guide me through:
1. The main objectives and win/lose conditions – What’s the ultimate goal, and how do players succeed or fail?
2. The core gameplay loop – Outline what players will repeatedly do (e.g., explore, gather, fight, trade, solve puzzles).
3. Character or player progression systems – Design ways for players to grow stronger (skills, levels, abilities, upgrades, equipment). And provide different progression tracks to maintain player choice and variety.
4. Balancing mechanics – How to keep difficulty fair, engaging, and replayable.
5. Rule clarity – Tips for explaining mechanics simply so players can pick them up fast.
6. Prototype ruleset – End with a concise bullet-point version of the rules that I can immediately test.
Make sure your response blends professional structure (clear rules, balance considerations) with creative sparks (unexpected twists, thematic elements, unique mechanics) so the game feels both playable and original."
Here’s a sneak peek:
https://youtu.be/TnymDI6iuQ8
Upgrade now to see this whole month’s prompt videos and more, or buy TODAY’S WOD for just $1.99.
Is this your AI Workout of the Week (WoW)? Cast your vote!
That's all we've got for you today.
Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇
What'd you think of today's email?
Your feedback means a lot to us and helps improve the quality of our newsletter.
🚀 Want your daily AI workout?
Premium members get daily video prompts, the premium newsletter, a no-ad experience—and more!
🔓 Unlock Full Access
Premium members get:
- 👨🏻🏫 A 30% discount on the AI Education Library (a $600 value - and counting!)
- 📽️ Get the daily AI WoD (a $29.99 value!)
- ✅ Priority help with AI Troubleshooter
- ✅ Thursday premium newsletter
- ✅ No-ad experience
- ✅ and more....