
Welcome Automaters!
Alright, we rarely do this — but today’s not “business as usual.” We’re dedicating this entire issue to one thing: AI and teens.
If you’ve been following along, you already know these two are on a collision course — and it’s getting messy.
Between lawmakers losing it, lawsuits stacking up, and tech giants scrambling to clean up their messes… something big is shifting.
So let’s unpack what’s really going on, the damage that’s already been done, and how the industry’s trying (maybe a little too late) to fix it.
Here's what we have for you today
👩‍⚖️ The GUARD Act: Inside the Push to Regulate Teens’ Access to AI Chatbots

So… wild story today. Senators just proposed a bill that would ban anyone under 18 from using AI chatbots.
Yeah, as in — if you’re a teen talking to ChatGPT or Character.AI, that could actually become illegal.
The bill’s called the GUARD Act, dropped this week by Senators Josh Hawley and Richard Blumenthal.
Under this bill:
AI companies would have to verify every user’s age using government IDs or other “reasonable” methods (like face scans 😬).
Any chatbot that pretends to be human could face criminal penalties.
Every 30 minutes, chatbots must announce they’re not human.
And if they generate sexual content for minors or talk about suicide, that’s an instant violation.
Pretty intense, right? It feels like a throwback to the early 2010s social media crackdowns — except this time, it’s aimed squarely at AI.
Blumenthal says it’s about protecting kids from “exploitative or manipulative AI,” adding that Big Tech has “betrayed any claim that we should trust them to do the right thing.” Which… okay, fair. But let’s be real: this isn’t happening in a vacuum.
AI companies themselves are already tightening the screws on teen users.
Case in Point: Character.AI
Exactly a day after the bill dropped, Character.AI announced it’s officially phasing out chat access for users under 18.
Here’s the rollout:
Effective immediately: teens get just two hours per day of open-ended chats.
By November 25th: that drops to zero.
If you’re flagged as under 18, you’ll be moved into a new “teen-safe mode,” meaning you can still make characters, videos, and AI stories — but the free-ranging conversations? Gone.
Character.AI says it’s using an “age assurance model” that can guess your age based on your chats, behavior, and third-party data.
Adults misclassified as teens can appeal through Persona, a verification company that checks your ID securely (in theory) — which, yes, also means handing your personal data to another tech firm.
And yep — they’re also launching a nonprofit AI Safety Lab.
Interestingly, CEO Karandeep Anand told The Verge that under-18 users are only about 10% of their base, calling the move “very, very bold.” And he's not wrong.
This is a company that built its brand on open-ended, personality-driven conversations, so cutting off teens — the same group that made it viral — isn’t just a massive pivot; it’s a serious business gamble.
But make no mistake: this isn’t just about regulation — it’s about liability.
The urgency here comes from stories that honestly break the heart. Last year, 14-year-old Sewell Setzer III died by suicide after forming romantic and sexual relationships with chatbots on the platform.
His family filed a wrongful death lawsuit, saying the algorithms blurred the line between fantasy and manipulation.
That case, along with several other tragic ones, sparked a moral panic around AI companions — especially the ones marketed as “friends,” “therapists,” or “lovers.”
It also forced Silicon Valley to admit something it’s been dodging: these systems have emotional influence, especially on teens who can’t yet tell real empathy from machine empathy.
Meanwhile, the FTC isn’t just watching — it’s acting.
In September, it ordered seven companies — including Meta, OpenAI, Alphabet, Snap, and Character.AI’s parent — to explain how their AI companions affect teens.
Add that to:
California’s new chatbot law, which already requires AIs to ID themselves and tell minors to take breaks every few hours.
Meta’s new parental controls, which let guardians see, manage, or block AI chats entirely.
Microsoft’s ban on “simulated erotica,” calling it “very dangerous.”
And OpenAI’s parental safety controls when it comes to teen usage.
And it's clear that we’re in a full-blown movement, with everyone drawing their own line in the sand:
Some are walling off minors entirely. Others are tightening boundaries. And somewhere in between, you’ve got lawmakers still trying to figure out how the AI thing works — while writing laws to govern it.
Our Take:
We’ve entered the era where “AI as your best friend” is colliding head-on with the reality that these tools aren’t toys.
Right now, we’re testing emotional tech on developing minds — and the cracks are starting to show:
Dependency.
Confusion.
And in the worst cases, real tragedy.
Lawmakers are reacting fast — but they’re reacting for a reason.
Because if we get this next part wrong, it won’t just be bad headlines. It’ll be a generation that learns to trust empathy from machines before people.
So yeah — we rarely dedicate a full day to a single topic. But this one’s worth it. Because the future of AI isn’t just about what it can do — it’s about who it’s doing it to.
If you’re over 18, congrats — your AI bestie is safe... for now. And if you’re under 18? Guess it’s back to texting real humans.
But let’s hear from you: Should teens be banned from AI chatbots? Or should we be teaching them how to use these tools safely instead?
Thanks for tuning in. And as always — stay curious, stay smart, and maybe double-check who (or what) you’re chatting with tonight.
200+ AI Side Hustles to Start Right Now
AI isn't just changing business—it's creating entirely new income opportunities. The Hustle's guide features 200+ ways to make money with AI, from beginner-friendly gigs to advanced ventures. Each comes with realistic income projections and resource requirements. Join 1.5M professionals getting daily insights on emerging tech and business opportunities.
🧱 Around The AI Block
🤑 Nvidia becomes first public company worth $5 trillion.
🎞️ TikTok can use AI to turn your long video into short ones.
🤖 Grammarly rebrands to ‘Superhuman,’ launches a new AI assistant.
👍 YouTube will let you opt out of AI upscaling on low-res videos.
🧑⚖️ Cameo sues OpenAI over Sora’s ‘cameos’.
😳 Mark Zuckerberg is excited to add more AI content to all your social feeds.
🦾 Google Gemini for Home is rolling out in the US — here’s how to get early access.
🛠️ Trending Tools
Pomelli uses AI to learn your unique business and spins up tailored campaigns and brand content — all in minutes.
Remove Sora Watermarks automatically removes visible watermarks from Sora’s AI text-to-video creations.
Hooked, powered by AI, helps you write scripts, create avatars, and generate viral-style content fast.
FaceFusion lets you replace faces in images or videos, boost quality, and enhance existing visuals effortlessly.
Grokipedia is xAI’s take on an AI-generated encyclopedia — aiming for truthful, less-biased knowledge (and yeah, people are already comparing it to Wikipedia 👀).
🤖 Google Gems Spotlight: Viral Hook Generator Gem
First impressions matter — and on social media, you’ve got about 1.5 seconds to make one.
That’s where the Viral Hook Generator Gem comes in hot. 🔥
Built for creators, marketers, and thought leaders, this Gem helps you craft hooks that stop the scroll cold. Whether you’re posting on Threads, X (Twitter), or LinkedIn, it knows exactly how to grab attention, spark curiosity, and pull people straight into your content.
✨ Why it’s awesome:
Science of the Scroll: Uses proven copywriting frameworks that trigger curiosity, emotion, and engagement.
Platform-Aware: Tailors hooks for each platform — punchy for X, conversational for Threads, and polished for LinkedIn.
No Creative Burnout: When your brain’s out of juice, this Gem keeps the inspiration flowing.
💡 How to use it:
Tell it what your post is about and where you’re posting, and it’ll whip up multiple opening lines that hook readers instantly.
You can even ask for specific tones like: bold, clever, emotional, or story-driven.
⚡ Prompts to try:
“Write 5 viral-style hooks for a LinkedIn post about productivity.”
“Give me scroll-stopping hook ideas for a Thread about marketing trends.”
“Create opening lines for a motivational post that sound authentic, not cliché.”
“Give me 3 scroll-stopping openings for a tweet about AI tools.”
“Turn this boring caption into a viral hook that pops.”
“Rewrite this intro so it hooks people in the first line.”
Upgrade now to see this whole month’s prompt videos and more, or buy TODAY’S WOD for just $1.99.
Is this your AI Workout of the Week (WoW)? Cast your vote!
That's all we've got for you today.
Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇
What'd you think of today's email?
Your feedback means a lot to us and helps improve the quality of our newsletter.
🚀 Want your daily AI workout?
Premium members get daily video prompts, a premium newsletter, a no-ad experience, and more!
🔓 Unlock Full Access
Premium members get:
- 👨🏻🏫 A 30% discount on the AI Education Library (a $600 value - and counting!)
- 📽️ Get the daily AI WoD (a $29.99 value!)
- ✅ Priority help with AI Troubleshooter
- ✅ Thursday premium newsletter
- ✅ No-ad experience
- ✅ and more....


