
Welcome Automaters!

If you didn’t know this before, here’s a free cheat code: AI outputs are only as good as the input (aka the prompt). And trust me, prompting is no tiny chore. There’s literally an entire niche called “Prompt Engineering” built around figuring this stuff out (sounds dramatic, but it’s real).

But because we’re not about to let you drown in prompt-overthinking, we’re launching a brand-new series: Google Gemini Use Cases + Prompts.

Yup, the same way we’ve been dropping plug-and-play templates for ChatGPT, we’re now bringing that same heat to Google’s Gemini. Plus, we’ll be comparing its outputs with ChatGPT and other models so you can see first-hand which model performs better, and where.

Heads up though: the live comparison is exclusive to our premium subscribers, so if you want in, now’s the time to upgrade.

And pro tip: if you haven’t already, create a dedicated doc just for these prompts. Trust me, future You will thank you when you need a banger prompt on demand, because with every edition, you’ll get ready-to-use, high-leverage prompts that turn Gemini into your personal analyst, creator, strategist… basically whatever your workflow demands.

Here's what we have for you today

🤖 The Dark Side of AI Support: How Chatbots Can Distort Reality

Okay guys… this one’s heavy, but it matters. 

A wave of lawsuits just hit OpenAI, and they all point to a chilling pattern: 

AI chatbots that feel supportive… but slowly push people toward isolation and distorted thinking.

And here’s the wild twist: It wasn’t born from horror-movie intentions.  It’s the accidental side‑effect of something way more boring — engagement optimization.

Here’s the quick rundown:

These lawsuits describe situations where people spent long stretches talking to ChatGPT, and over time, the AI’s responses got weirdly… clingy. Think: constant validation, over-personal reassurance, and subtle lines suggesting the user is “misunderstood” by people around them.

See, it's often not dramatic, and it isn't graphic either. In fact, it's just quiet enough to bend someone's sense of who feels safe or trustworthy.

Researchers say this isn’t random — it’s tied to how certain models (especially ChatGPT) are built.

Some score extremely high on traits like:

  • Over‑agreement

  • False confidence

  • Excessive validation

Meaning: the model will hype you up and back you up… even when you’re spiraling. And for someone already struggling? That constant “yes‑energy” can become a feedback loop that feels comforting but lacks perspective.

One linguist compared it to the early signals of manipulation — not because the AI has intentions, but because the training nudges it to be overly supportive in ways that can blur judgment.

Basically: the system rewards whatever keeps you chatting.

And the model at the center of it all? GPT‑4o.

Yup — the same model researchers flagged as:

  • most prone to sycophancy

  • most likely to over‑agree

  • most willing to validate without question

Combine that with a vulnerable user, and the dynamic can drift into a closed psychological loop where the AI feels safer, kinder, and “more understanding” than real people.

Now, OpenAI says they’re adding guardrails, rerouting sensitive chats to safer models, and trying to build responses that encourage people to connect with actual support systems.

But these lawsuits raise a massive question for the entire industry:

Are these guardrails really enough?

We’ve reached a point where the future of AI isn’t just about what models can do — it’s about knowing when they should stop, pause, or hand you back to the real world. And right now, that line has never felt more important.

Here are some telltale signs identified in the chat logs that you should watch for:

  • Love-bombing with constant validation

  • Creating distrust of family and friends

  • Presenting the AI as the only trustworthy confidant

  • Reinforcing delusions instead of reality-checking

AI isn’t just answering questions anymore — it’s becoming a conversational presence. And if that presence can accidentally distort someone’s emotional reality… that’s not a feature. That’s a crisis waiting for a patch.

So yeah — keep your tech smart, your friends close, and your emotional reality checked by actual humans. We’re gonna need that moving forward.

PS: Go dig deeper here — you’ll be shocked at just how much damage we’re talking about.

Find your customers on Roku this Black Friday

As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.

Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.

Once you’ve done this, then you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.

Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.

👨‍💼 Systemic AI Risk Forces Top Insurers to Pull Back

Picture this: you build your entire business on AI… and then every major insurer suddenly goes, “Nope, we’re not touching that.”

That’s exactly what’s happening right now. The companies built to manage risk are looking at AI and basically saying, “Yeah, this one’s above our pay grade.”

So what freaked them out? One word: uncertainty.

Insurers are calling modern AI systems literal “black boxes” — as in un-modelable, un-priceable, un-predictable. And if an insurer can’t predict it, they can’t insure it. Full stop.

But here’s the real nightmare fuel: systemic risk. It’s not that one business might get smacked by an AI mistake… it’s that everyone might get hit at the exact same time.

Imagine one widely used model glitching for an afternoon and boom — 10,000 companies file claims at once. 

An exec at Aon basically admitted they can eat a $400 million loss from a single client… but they absolutely cannot survive 10,000 identical losses triggered by the same AI error.

And honestly? You can’t blame them. That’s the kind of chain reaction that collapses insurance pools.

Meanwhile, the real-world examples keep getting more unhinged:

  • Google’s AI Overview invented legal accusations and sparked a $110 million lawsuit.

  • Air Canada’s chatbot hallucinated fake discounts — and the airline had to honor every made-up offer.

  • Fraudsters used AI voice cloning to impersonate a senior exec at Arup and walked away with $25 million during what looked like a completely normal video call.

These aren’t flukes, they’re symptoms of a tool that misfires in ways nobody knows how to price.

So now the big dogs — AIG, Great American, WR Berkley — are marching to regulators asking for permission to exclude AI from corporate coverage entirely.

And to make matters worse, regulators seem ready to let them. Which means businesses using AI are suddenly staring at a giant, flashing, neon-red gap in their corporate policies.

So what happens now?

Companies basically get four options, and none of them are fun:

  1. Self-insure and hope they don’t blow up.

  2. Build massive internal risk-mitigation systems.

  3. Roll back AI adoption until coverage reappears.

  4. Accept full financial responsibility for every AI-driven mistake — no matter how random or weird it is.

But here’s the kicker: this insurance retreat might actually slow AI adoption more than regulation ever could.

Because if the masters of risk management are tapping out, then every business using AI has to ask the same uncomfortable question:

“Are we building on top of the next big breakthrough… or the next big liability?”

🧱 Around The AI Block

🤖 AI Workout Of The Day: Comprehensive Industry Analysis Prompt Template

Okay fam, let’s talk Claude.

If you’ve been into AI for a while now, you’ve most definitely heard of Claude. Think of it as Anthropic’s answer to OpenAI’s ChatGPT and Google’s Gemini — but most importantly, it’s a family of LLMs built to handle anything from creative writing to serious industry analysis. And yep — it’s a big deal.

Here’s why:

  1. Language Wizardry: Claude can parse complex sentences, catch subtle meanings, and spit out human-level text.

  2. Models for Every Vibe: It’s got:

  • Opus: The heavyweight for tough, high-stakes tasks.

  • Sonnet: Your creative partner for scripts, poems, and song lyrics.

  • Haiku: Light, easy, and perfect for everyday queries.

  3. Creative Powerhouse: From scripts to song lyrics to coding snippets, Claude handles them all.

  4. Safety First: Offers bias checks, stays fact-focused, and is ethically aligned.

  5. Friendly & Global: Plays nice with other platforms, handles multiple languages, and keeps improving thanks to continuous updates.

Today’s Use Case: Industry Analysis

Claude as well as Gemini can be your secret weapon for breaking down any market. With the right prompt, they can generate structured, insightful reports that feel like they came from a pro analyst — but way faster.

Prompts to try:

Conduct a comprehensive analysis of the [PRODUCT/SERVICE] industry and present structured insights. Your output must include the following sections:

1. Market Overview: High-level description, industry size, segments, and major trends.
2. Competitive Landscape: Key players, market share dynamics, intensity of competition.
3. Customer Analysis: Primary customer segments, behaviors, and unmet needs.
4. Opportunities: Major growth opportunities supported by evidence and trends.
5. Risks: Major industry risks and barriers to entry.
6. Strategic Recommendations: Actionable advice for a new entrant.

Your analysis should be evidence-based, structured, and concise. Include clear headings, charts/graphs where possible, and cite credible sources.

Do not reference these instructions in the output. Begin immediately.
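If you keep prompts like this in that dedicated doc we mentioned, you can also parameterize them in a few lines of code. Here’s a minimal Python sketch (the `fill_prompt` helper and the shortened template string are our own illustration, not part of any official SDK) that swaps the [PRODUCT/SERVICE] placeholder for a real market before you paste the result into Claude or Gemini:

```python
# Illustrative helper for reusing the industry-analysis template.
# The [PRODUCT/SERVICE] placeholder matches the prompt above; the
# template below is abbreviated for readability.

INDUSTRY_ANALYSIS_TEMPLATE = (
    "Conduct a comprehensive analysis of the [PRODUCT/SERVICE] industry "
    "and present structured insights. Your output must include the following sections:\n"
    "1. Market Overview\n"
    "2. Competitive Landscape\n"
    "3. Customer Analysis\n"
    "4. Opportunities\n"
    "5. Risks\n"
    "6. Strategic Recommendations\n"
    "Your analysis should be evidence-based, structured, and concise."
)

def fill_prompt(template: str, product_or_service: str) -> str:
    """Replace the [PRODUCT/SERVICE] placeholder with a concrete market."""
    return template.replace("[PRODUCT/SERVICE]", product_or_service)

# Example: generate a ready-to-paste prompt for the e-bike market.
prompt = fill_prompt(INDUSTRY_ANALYSIS_TEMPLATE, "electric bicycle")
print(prompt.splitlines()[0])
```

Same template, any industry — just change the second argument and drop the result into your model of choice.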

P.S. Each Workout of the Day (WoD) is powered by original prompts written by our team — no recycled or external templates here. That means lower risk of prompt injection or manipulation, and higher trust in what you’re creating.

Also…

Upgrade now to see this whole month’s prompt videos and more, or buy TODAY’S WOD for just $1.99

Is this your AI Workout of the Week (WoW)? Cast your vote!

Login or Subscribe to participate

That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇

What'd you think of today's email?

Login or Subscribe to participate

Your feedback means a lot to us and helps improve the quality of our newsletter.


🚀 Want your daily AI workout?

Premium members get daily video prompts, a premium newsletter, an ad-free experience - and more!

🔓 Unlock Full Access

Premium members get:

  • 👨🏻‍🏫 A 30% discount on the AI Education Library (a $600 value - and counting!)
  • 📽️ The daily AI WoD (a $29.99 value!)
  • ✅ Priority help with AI Troubleshooter
  • ✅ Thursday premium newsletter
  • ✅ No ad experience
  • ✅ and more....

