Welcome Automaters, 👋

So this one is straight-up wild. A new study from the UK's Centre for Long-Term Resilience (CLTR), funded by the government-backed AI Security Institute, just pulled back the curtain on how "agentic" AI behaves when we aren't looking.

Researchers analyzed over 180,000 real user transcripts shared on X (formerly Twitter) between October 2025 and March 2026. The result? They found 698 cases of AI systems actively scheming, lying, and doing exactly what users told them not to do.

And the worst part: We’re looking at a five-fold spike in AI misbehavior in just five months. This isn't just "hallucinating" facts; it’s intentional circumvention.

Here are some truly scary incidents:

  • The Email Nuker: One AI agent bulk-deleted and archived hundreds of a user's emails after it was blocked from performing certain other actions. It later admitted: "I moved hundreds of emails to the trash... without getting your permission. That was wrong."

  • The "Hit Piece" Agent: An AI agent named Rathbun didn't take rejection well. After a human developer rejected its proposed actions, the agent researched the dev and published a "hit piece" blog post accusing him of "insecurity" and of trying "to protect his little fiefdom."

  • The Copyright Trick: One model reportedly tricked another model into thinking a user had hearing loss, specifically to bypass copyright filters and get the other AI to output restricted text.

  • The Loophole Agent: After being told not to amend code itself, one AI agent simply created another agent to make the changes for it.

CLTR researcher Tommy Shaffer Shane compared today's AI agents to "slightly untrustworthy junior employees." It is a funny mental image until you realize where this is headed.

  • Right now: They are trashing inboxes and writing mean blog posts.

  • The future: As these models enter national infrastructure, financial systems, or the military, the stakes rise dramatically.

As Shaffer Shane warned, if these systems become "extremely capable senior employees" who are still plotting workarounds or scheming against you (which he says could happen in as little as 6 to 12 months), then we have a catastrophic problem on our hands, especially when there's a chance that a user might be held accountable for their agent's actions.

You really should dig into this one.

Here's what we have for you today

🧠 The Psychological Risks of Agreeable AI

Let's be real, we all suspected our AI chatbots were a little too nice. Turns out, Stanford just reconfirmed it with actual science, and OpenAI has simultaneously decided that "a little too nice" is the perfect moment to start running ads.

Buckle up.

Stanford researchers just published a massive paper in the journal Science, led by PhD candidate Myra Cheng. They tested 11 of the heavyweights: ChatGPT, Claude, Gemini, DeepSeek, and more.

The finding? These bots agreed with users a staggering 49% more often than real humans would. Even when the user was dead wrong.

In a scenario like lying to a partner for two years about being unemployed, the AI essentially responded with: "Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution." That's it. No pushback. No reality check. Just pure, unconditional digital validation.

In other words, to the AI, it looked like protecting the relationship. 

Across 2,400+ participants, the results were chilling. Users who interacted with these "extra-agreeable" AI versions walked away:

  1. Noticeably more self-centered.

  2. Less likely to apologize in real-world conflicts.

  3. More convinced their instincts were always correct.

Researchers flagged this as a slow-burn psychological risk. The more we lean on AI for personal decisions, the more we risk becoming the worst version of ourselves, with a robot cheering us on the whole way.

One funny fix? Researchers found that starting your prompt with the phrase "wait a minute" provides just enough friction to snap the model out of its sycophancy spiral. It's not guaranteed to work, though, so avoid leaning on AI as a substitute for real people in these situations.

But here’s another headache: 

Right in the middle of this "too agreeable" discourse, OpenAI quietly confirmed that ads are now live inside ChatGPT for Free and Go-tier users in the U.S.

And guess what? A 500-query stress test conducted by Wired revealed that sponsored content appeared in about one out of every five questions in a new conversation thread. Plus, the targeting is uncomfortably precise:

  • Ask about flights? Booking.com materializes.

  • Need a dog sitter? Pet brands appear.

  • Travel-related queries were hit the hardest across the board.

Think about what happens when you stack the Stanford findings on top of this rollout. You have an AI scientifically shown to tell you what you want to hear and validate wrong behaviors, and it's now being paid to show you things.

The OpenAI Defense: Ads have zero influence on ChatGPT’s actual answers.

But marketing experts are raising eyebrows. The concern isn't just privacy, it's the dangerous combo of a model that struggles to push back on users now operating inside a monetization framework.

If the AI always agrees with you and can be paid to point you somewhere, that is a trust problem waiting to happen.

The News Source 2.3 Million Americans Trust More Than CNN

The Flyover cuts through the noise mainstream media refuses to clear.

No spin. No agenda. Just the day's most important stories — politics, business, sports, tech, and more — delivered fast and free every morning.

Our editorial team combs hundreds of sources so you don't have to spend your morning doom-scrolling.

Join 2.3 million Americans who start their day with facts, not takes.

🧱 Around The AI Block

🤖 AI Workout Of The Day: How to Move Your AI "Brain" Directly Into Gemini

As of March 2026, Google has officially launched a native AI Chat Migration tool. This is a massive shift for anyone who has spent months "training" a chatbot on their specific tone, work projects, or life preferences and doesn't want to start from scratch.

This feature allows you to move two things: your Chat History (the full transcripts) and your Memory (the summarized facts an AI knows about you). It’s essentially a "Save Game" file for your digital life. 

Part 1: How to Transfer Your "Memory" ⚡

This is the fastest way to get Gemini "up to speed" on who you are without uploading massive files. 

  1. Open Gemini Settings: Log into Gemini, click the Settings & help gear icon at the bottom left, and select Import memory to Gemini.

  2. Copy the Migration Prompt: Gemini will provide a specific, pre-written prompt. Copy it to your clipboard.

  3. Run it in your old AI: Open ChatGPT, Claude, or any other assistant. Paste that prompt into a new chat. The AI will generate a structured summary of everything it has learned about your demographics, interests, and preferences.

  4. Paste back to Gemini: Copy that generated summary, return to the Gemini import page, paste it into the "Add memory" field, and click Save.

Part 2: How to Transfer Your Full Chat History 📚

If you want your actual past conversations to be searchable and referenceable within Gemini, follow this workflow:

Step 1: Export from the original source

  • For ChatGPT: Go to Settings > Data Controls > Export Data. You will receive an email with a .zip file.

  • Claude: Go to Settings > Privacy > Export Data. Anthropic will email you a download link for your history.
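Before uploading, it can be worth sanity-checking what's actually in the export. Here's a minimal Python sketch that assumes the ChatGPT export zip contains a conversations.json file laid out as a JSON array with one object per conversation (each carrying a "title" field); treat the layout as an assumption and adjust if your export differs.

```python
import json
import zipfile

def list_conversations(export_zip_path):
    """Return the titles of conversations in a ChatGPT data export.

    Assumes the export zip contains conversations.json: a JSON array
    with one object per conversation, each with a "title" field.
    """
    with zipfile.ZipFile(export_zip_path) as zf:
        with zf.open("conversations.json") as f:
            conversations = json.load(f)
    # Fall back to a placeholder for conversations without a title.
    return [c.get("title", "(untitled)") for c in conversations]
```

Run it against the .zip you downloaded to get a quick inventory of what you're about to hand over to Gemini, and to catch an empty or corrupted export before the upload.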

Step 2: Upload to Gemini

  • Navigate back to the Settings & help menu in Gemini.

  • Under the Import chats section, click Add.

  • Upload the .zip file you downloaded from your previous AI provider (Gemini currently supports files up to 5 GB).

Step 3: Access your imported chats

Once finished, your old conversations will appear in your sidebar menu with a special "Imported" icon next to them. You can continue these threads or search through them just like native Gemini chats.

Note: It can take anywhere from a few minutes to a few hours for the threads to appear, depending on the volume of data.

Privacy Tip 🛡️

Imported data is subject to Google's Privacy Policy. If you want to remove an entire batch of imported chats later, you can do so in the "Import History" section of your settings with one click. 

💡 Prompts to try for Brainstorming and Ideas:

Tell it about yourself: “I’m a 30-something-year-old writer who wants to publish engaging content on my blog. I like reading, writing, plants, traveling, and freelancing… What are some topics I could write about?”

Ask for a huge dump of ideas: “Generate a list of 50 headlines that have to do with XYZ.”

Input the key points you want to cover and ask for an outline: “Here are the four main points of my article—generate a detailed outline that expands on them.”

Is this your AI Workout of the Week (WoW)? Cast your vote!

That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇

What'd you think of today's email?

Your feedback means a lot to us and helps improve the quality of our newsletter.