
Welcome Automaters, 👋

So X quietly dropped a new toggle on its iOS app this week labeled "Block Modifications by Grok." It’s a feature designed to stop xAI's chatbot from AI-editing the photos you upload. No big announcement. No press release. Just a quiet little button that users started spotting in the image upload menu.

Sounds great, right? Wrong.

Testing reveals the toggle only does one thing: it stops other users from tagging @Grok in replies to ask it to edit your photo. That’s literally it.

Anyone can still download or screenshot your photo, re-upload it as a new post, and ask Grok to go wild. No restrictions apply. The fine print buried under the feature name even admits it only prevents modifications through that one specific tagging method.

In other words: The barn door is still wide open. And the cows? Already in the next county.

This matters because the history here is serious. Earlier in 2026, Grok's image generation tools were linked to the creation of roughly 3 million sexualized or nudified images (with an estimated 23,000 involving children) over just an 11-day window.

That triggered two separate EU regulatory investigations into xAI and X over potentially illegal deepfakes. So, what does X do? They offer a toggle that stops one of many possible routes to abuse.

As critics have pointed out: xAI could just pause image generation entirely until the problem is actually fixed. But that would hurt Premium subscriber perks, and apparently, engagement trumps everything else.

The Big Picture: 

This story is a perfect snapshot of where AI adoption is right now. Platforms are sprinting to integrate powerful AI tools, then scrambling backward when things go sideways. They are offering band-aids when surgery is clearly needed.

As generative AI gets woven deeper into our everyday apps and social feeds, the gap between what these tools promise and what they actually protect is becoming the defining challenge of the decade.

So yeah, be safe out there (and maybe keep the selfies to a minimum for now).

Here's what we have for you today

🧑‍⚖️ Anthropic Sues the DOD After Being Labeled a National Security Risk


One of Silicon Valley's most respected AI companies is now in a full-blown legal war with the United States military. No drama: just facts.

Anthropic (the company behind the Claude AI) filed two federal lawsuits on Monday against the Trump administration. The complaints landed in the U.S. District Court for the Northern District of California and the federal appeals court in D.C. The move follows weeks of escalating tension over a deceptively simple question: should a private company be allowed to set limits on how its AI gets used in war?

Here’s The Backstory: 

Anthropic signed a $200 million contract with the Department of Defense back in July, making it the first AI lab to deploy technology across the agency's classified networks.

But when the DOD wanted to renegotiate, things fell apart fast. The Pentagon wanted access to Claude for "all lawful purposes" with zero carve-outs. Anthropic said a hard "no" to two things specifically: using Claude for mass surveillance on American citizens and putting it in charge of autonomous weapons with no human pulling the trigger.

So yeah, when talks collapsed, the administration directed federal agencies to stop using Anthropic's technology immediately. The Pentagon then hit the company with a "supply chain risk" designation. Think of it like a "Do Not Trust" sticker, the kind normally slapped on companies connected to foreign adversaries like China or Russia. It basically tells every business working with the Pentagon: don't use Anthropic's technology, or else…

Well. Anthropic is now fighting back with a legal argument that has real teeth:

  1. Procedural Violations: The company claims the designation was issued without following the steps Congress requires (like conducting a risk assessment or allowing a response).

  2. First Amendment Rights: Anthropic argues they have a constitutional right to express views on AI safety. They claim the government cannot use state power to punish or suppress that expression.

  3. Economic Stakes: The lawsuit warns this "blacklist" jeopardizes hundreds of millions of dollars in revenue as government contracts are being canceled.

Oh, and in an extraordinary show of industry unity, dozens of researchers from OpenAI and Google DeepMind (Anthropic's direct competitors) filed a supporting brief in their personal capacities.

Even Jeff Dean, Google’s Chief Scientist, signed on. The consensus among the rivals is clear: the Pentagon's move creates unpredictability, undermines American competitiveness, and chills the debate about AI safety. When your biggest competitors start defending you in court, you know the government has crossed a significant line.

The Big Picture: 

Anthropic says its lawsuits aren't meant to force the government to work with them. They just want to stop officials from blacklisting companies over policy disagreements.

This is a critical distinction. It’s a fight over who gets the final "veto" on how AI is deployed: the engineers who built it or the generals who bought it. The outcome of this case will set the precedent for every AI company doing business with the government for the next decade.

Become An AI Expert In Just 5 Minutes

If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.

This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.

Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.

🧱 Around The AI Block

🤖 AI Workout Of The Day: How to Lipsync Your Videos With Dzine AI


If your video doesn't have perfect audio-to-visual mapping, are you even on the internet? We’ve seen AI generate cats playing drums and Sora make movie trailers, but the "uncanny valley" of weird, twitchy mouths has been the final boss. Until now.

We’re talking perfect phonetic matching, realistic jaw movement, and absolutely zero professional editing skills required.

Dzine AI has dropped a lip-sync tool that is so smooth, it’s honestly a little scary. Whether you want to make your pet talk or translate your corporate keynote into fluent French without looking like a dubbed Godzilla movie, this is the tool. The best part? It’s built for the "lazy creator" (our favorite kind).

How To Use It? It’s dead-simple:

  • Step 1: Head over to Dzine.ai and log in.

  • Step 2: Upload your base video or a high-quality photo. (Pro tip: Clear lighting makes the AI much happier).

  • Step 3: Upload your audio file or script. You can use a voice memo of yourself, or a high-fidelity track from something like Suno or ElevenLabs.

  • Step 4: Hit "Sync" and let the GPU do the heavy lifting. In less time than it takes to microwave a burrito, you’ve got a masterpiece.

  • Step 5: Download and post. 

Everything You Should Know:

  • The Tech: Uses advanced facial landmark tracking to map phonemes (speech sounds) to lip shapes.

  • The Cost: Dzine uses a credit-based system, but there’s a generous free tier for new explorers.

  • The Vibe: High-energy, low-effort, and 100% viral-ready.

💡 Prompts To Try:

The Hype-Man: 

A fast-paced, high-energy young male voice. Enthusiastic, breathless, and extremely expressive.

The Cool Older Sister: 

A raspy, laid-back female voice with a dry sense of humor. Slow pacing and sarcastic undertones.

The Tech Guru: 

Authoritative neutral accent. Professional but conversational. Clear enunciation with studio-quality clarity.

Is this your AI Workout of the Week (WoW)? Cast your vote!


That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇

What'd you think of today's email?


Your feedback means a lot to us and helps improve the quality of our newsletter.

