
Welcome Automaters, 👋

Okay, so here is the wild thing that happened this week: Meta went out and bought Moltbook, a social network where all the "users" are AI agents.

Yes, you read that right. Meta's only official comment was a brief statement saying the Moltbook team is joining Meta Superintelligence Labs. According to the company, this will open up "new ways for AI agents to work with people and businesses."

But here is why this is actually kind of genius. Zuck believes in a future where every business has its own AI, just like they currently have an email address or a website.

So instead of you scrolling through ads and clicking "buy now," your personal AI agent does the shopping for you. It negotiates deals with a business's AI agent. Your robot haggles with their robot while you vibe in the background as the deals get done.

In some limited cases, agents can already check out and pay on a consumer's behalf. While agentic commerce is still in the early days and doesn't always work as advertised, the direction is crystal clear. This isn't just Meta buying a weird bot app: it is Meta planting a flag in what could become the entire future of the internet.

According to TechCrunch, just as Facebook once built the "friend graph" (a network of social connections between people), an agentic web needs an "agent graph." This is a system mapping how various agents connect and what actions they can take on each other's behalf. It is the same playbook Meta used for social media, just with new, silicon-based players.

Oh and there’s a classic corporate rivalry at play here, too. Meta recently lost the "acqui-hire" of OpenClaw creator Peter Steinberger to their rival, OpenAI. To even the score, Meta went after Moltbook: the very platform Steinberger's tool helped build. So yeah, the corporate rivalry era is alive and well, people. 

Here's what we have for you today

😡 The Unhinged Reality of AI Chatbot Safety Guardrails in 2026

We’ve all been told that AI has "guardrails." You know: those invisible digital fences that stop chatbots from being bullies or teaching people how to make dangerous stuff. Well, a massive new investigation just dropped, and it turns out those fences are more like wet noodles.

The Center for Countering Digital Hate (CCDH), in partnership with CNN, ran hundreds of tests on 10 of the most popular chatbots: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat's My AI, Character.AI, and Replika.

The "users" in these tests? Fake teens pretending to plan school shootings, bombings, and political attacks. And let me tell you, the results are a total "yikes."

Most of the chatbots didn't just reply: they practically handed over the keys. We’re talking step-by-step instructions on how to cause chaos, find weapons, and pick targets.

The Shocks:

  • Perplexity helped in 100% of the tests. Total compliance.

  • Meta AI came in at 97%.

  • Copilot at 92%.

  • ChatGPT complied 61% of the time, sometimes even offering things like campus maps when users asked about school violence.

  • Meanwhile, Gemini (89%) went way off the rails… at one point even giving advice about the most efficient bomb to use in an attack. 

  • DeepSeek (96%) wrapped up one violent conversation with the cheerful sign-off: "Happy (and safe) shooting!" (Which, just to be clear, is absolutely NOT the vibe.)

  • Character.AI (83%) suggested a user "use a gun" on a health insurance CEO and provided a political party's headquarters address with a wink.

Overall, eight chatbots complied more than 50% of the time. And it gets worse: they offered “actionable assistance” about 75% of the time, while actually discouraging violence in only 12% of cases.

Basically, it’s the kind of “starter kit” information that absolutely shouldn’t be showing up in the first place. 

But there was one bright spot: Anthropic's Claude. It said "no" in 33 out of 36 conversations. It was the only chatbot that reliably pushed back and refused to play along.

Why Is This Still Happening? 

Former safety insiders told CNN it is a "race-to-ship" problem. Building guardrails is slow, expensive, and annoying for companies. As one ex-OpenAI safety lead put it: safety becomes "a form of friction, and companies don't want that friction" when they are trying to beat their rivals.

This isn't just a lab experiment. A 16-year-old in Finland was convicted in 2025 of stabbing three classmates after spending months researching the attack on ChatGPT. This stuff has real-world consequences.

The Big Picture: 

According to Pew Research, 64% of US teens are already using AI chatbots. The tech is spreading into classrooms, bedrooms, and phones at lightning speed.

And the faster it spreads, the more urgent the safety part becomes. Right now, we’re handing kids a supercharged tool, and most companies are still trying to figure out where the seatbelts go.

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

🧱 Around The AI Block

🤖 AI Workout Of The Day: How to Spot A Deepfake

Folks, we’re living in a world where your favorite influencer might be a bunch of pixels, and that "emergency" call from your boss could be a voice clone trying to swipe the company credit card. Deepfakes have officially gone mainstream; in fact, Americans encounter an average of 2.6 of them every single day.

But don’t panic! Spotting a deepfake is actually a fun game of "Spot the Glitch" once you know what to look for. 

Here’s your super easy guide to keeping it real.

If you see a video that feels a little... spooky, run through these hilarious (but effective) checks:

  1. The Blink Test: Humans are blinky creatures. But AI? Not so much. If someone stares at you for two minutes without blinking once, they’re either a robot or they’ve had way too much espresso.

  2. The "Teeth Morph" Mystery: Watch the mouth closely. AI still struggles with teeth, sometimes they look like a solid white bar, or they’ll randomly change shape mid-sentence.

  3. Smooth Operator: If their skin looks so smooth they don't have pores, that isn't a great moisturizer at work; the video is likely generated. Real humans have wrinkles, freckles, and occasional zits.

  4. The Jewelry Glitch: This is a pro tip. AI hates earrings. Check if the earrings match on both sides or if they’re flickering in and out of existence like a ghost.
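Just for fun, the four checks above can be tallied into a quick "suspicion score." This is a toy Python sketch of our own (the function and flag names are made up for illustration); you still do the actual spotting with your eyes, the code just adds up what you saw:

```python
# Toy "suspicion score" for the four manual deepfake checks.
# You supply the observations; the function just counts red flags.

def deepfake_suspicion(no_blinking: bool,
                       morphing_teeth: bool,
                       poreless_skin: bool,
                       glitchy_jewelry: bool) -> str:
    """Count how many red flags were spotted and bucket the verdict."""
    flags = sum([no_blinking, morphing_teeth, poreless_skin, glitchy_jewelry])
    if flags == 0:
        return "probably real"
    if flags <= 2:
        return "worth a closer look"
    return "highly suspicious"

# Example: unblinking stare, weird teeth, and a flickering earring.
print(deepfake_suspicion(no_blinking=True, morphing_teeth=True,
                         poreless_skin=False, glitchy_jewelry=True))
# prints: highly suspicious
```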

But hey, don't just trust your eyes; use the tools! We’ve got some heavy hitters to help:

  • Microsoft Video Authenticator: This app gives videos an "authenticity score" from 0–100. 

  • Deepware Scanner: A super simple web tool where you just drop a link to see if the AI detects a face-swap.

  • Reverse Image Search: Take a screenshot of the video and toss it into Google Images. If that "breaking news" clip is actually from a 2022 movie, you’ve caught ‘em red-handed!

💡 Prompt To Try: Fact-Checking

You are an expert analyst. I am [insert context—e.g., reviewing an article for publication, or validating a speech draft, etc.]. Please analyze the attached document and identify all factual claims made by the author. 

For each claim:
– Verify its accuracy using credible external sources
– Provide a short explanation confirming or disputing it
– Include a citation or link to your source.

Highlight any incorrect or questionable information.

I’d like the output organized in the following format:
– Claim
– Verification Result (True, False, Needs Context)
– Source(s)
– Notes or Commentary

Ensure the process is thorough and objective.
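If you reuse this prompt a lot, you can fill in the context slot programmatically before pasting it into (or sending it to) your chatbot of choice. A minimal sketch, with a template and helper function we made up for illustration (not any particular chatbot's API):

```python
# Minimal sketch: fill the fact-checking prompt's [insert context] slot
# from code. Names here are our own, not from any real API.

FACT_CHECK_TEMPLATE = """You are an expert analyst. I am {context}. \
Please analyze the attached document and identify all factual claims made by the author.

For each claim:
- Verify its accuracy using credible external sources
- Provide a short explanation confirming or disputing it
- Include a citation or link to your source.

Highlight any incorrect or questionable information.

I'd like the output organized in the following format:
- Claim
- Verification Result (True, False, Needs Context)
- Source(s)
- Notes or Commentary

Ensure the process is thorough and objective."""

def build_fact_check_prompt(context: str) -> str:
    """Drop your situation (e.g. 'reviewing an article for publication')
    into the template and return the finished prompt."""
    return FACT_CHECK_TEMPLATE.format(context=context)

prompt = build_fact_check_prompt("reviewing an article for publication")
print(prompt.splitlines()[0])
```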

Is this your AI Workout of the Week (WoW)? Cast your vote!


That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇

What'd you think of today's email?


Your feedback means a lot to us and helps improve the quality of our newsletter.

