
Welcome Automaters, 👋

So, remember that drama where the Pentagon labeled Anthropic a "supply chain risk" and moved to block military contractors from working with the company altogether?

Well, here’s the reason: Anthropic refused to let the military use Claude for mass surveillance or fully autonomous weapons. Anthropic sued, calling the move unconstitutional.

Then came Tuesday's hearing in San Francisco, and Judge Rita Lin did not hold back. She called the Pentagon's actions "troubling" and noted they did not appear "tailored to any real national security concern."

Her vibe was basically: If you don’t trust them, just stop using them… why try to tank their whole business? For her, it looked a lot like targeted punishment.

Here Are the Highlights from the Bench:

  • The "Cripple" Comment: "It looks like defendants went further than that because they were trying to punish Anthropic," Lin noted.

  • The Legal Hyperbole: One legal brief even used the phrase "attempted corporate murder." Lin's response? "I don't know if it's murder, but it looks like an attempt to cripple Anthropic."

  • The "Stubborn" Defense: She pushed back on the government's claim that Anthropic was a sabotage risk, questioning whether being "stubborn" and "asking annoying questions" was now enough to brand a company a national security threat.

It gets even better.

The government’s own lawyer admitted something huge. The rule does not actually stop companies from using Anthropic for non-military work. Also, the Pentagon cannot legally punish contractors just for working with Anthropic outside defense projects.

So… what exactly was the point?

Anthropic says the damage is real anyway. A viral post from Defense Secretary Pete Hegseth made it seem like companies could get in trouble just for working with them, which in turn brought confusion, panic, and a hit to their business.

And when the Pentagon leaned on its national security defense, Anthropic's argument was simple: a real saboteur doesn't pick a public fight. They just quietly sign the contract and cause trouble later. By being vocal about its safety standards, Anthropic argues, it is doing the opposite of sabotage.

The judge called the whole situation a “fascinating public policy debate”… but made it clear that is not what she is ruling on. She is focused on one thing: was the government’s move legal, or did it overstep?

And from the way this hearing went… let’s just say the Pentagon might be sweating a little.

Here's what we have for you today

👉 OpenAI’s Chaotic Tuesday: A Funeral, a Stumble, and a Major Safety Win

OpenAI is currently one of the most powerful AI companies on the planet, and on March 24th, they proved it by doing three very human things at once: admitting a big idea didn't work, quietly killing a product that got too weird, and doing something genuinely good for kids online.

If that sounds like your typical Tuesday at work, congratulations: you might be running an AI empire.

First, the Funeral 🪦

Buckle up, because OpenAI just pulled the plug on Sora, its AI video app that was basically a deepfake factory in your pocket. Sora was supposed to be an AI-first TikTok, letting you clone your own face and star in AI-generated videos.

The Cringe Factor: People immediately started making deepfakes of Martin Luther King Jr. and Robin Williams, prompting their daughters to go on Instagram and beg users to stop. Yikes.

The Numbers: The app peaked at roughly 3.3 million downloads in November, then slid to about 1.1 million by February. Translation: the hype bubble popped faster than a viral TikTok trend. OpenAI says it's shutting Sora down to focus on other priorities, specifically redirecting its computing power toward robotics and world simulation research.

Oh, and remember that mega Disney deal? Disney had agreed to license Mickey Mouse and Cinderella for a planned $1 billion investment. That’s all dead now; Disney’s response was basically a very polite "good luck with that."

Then, the Stumble 🛒

While Sora was getting its eulogy, the "Instant Checkout" experiment was quietly limping home. Last September, OpenAI looked at Amazon and thought, "we could do that." They let users browse products from Walmart, Etsy, and Shopify without ever leaving ChatGPT.

The Reality Check: Users loved browsing, but when it came time to pay, they bolted back to websites they already trusted. Turns out, trusting a chatbot with your credit card is a bridge too far for most people. And honestly, fair!

OpenAI admitted the feature "did not offer the level of flexibility" they hoped for, so they are now letting merchants handle their own checkout experiences instead.

So here's where we are: one product dead, one product limping. You'd be forgiven for thinking this was a bad day at OpenAI HQ.

But here's the thing about OpenAI's chaotic Tuesday: it didn't end in a dumpster fire.

There was, in fact, a win.

Underneath the shutdowns and the shopping flop was something actually worth celebrating. OpenAI released a set of open-source safety prompts that developers can drop directly into their apps to make them safer for teenagers.

The "Safety Starter Pack" covers:

  • Graphic violence and sexual content.

  • Harmful body ideals and behaviors.

  • Dangerous activities and challenges.

  • Romantic or violent role play.

  • Age-restricted goods and services.

OpenAI built these tools in partnership with Common Sense Media and everyone.ai. They called it a "meaningful safety floor": not a perfect solution, but a real step in the right direction.

So yeah, this is the full story of OpenAI’s Tuesday. A company big enough to fail publicly, pivot loudly, and still sneak in a genuine win all in the same news cycle. Love it or hate it, nobody is doing it quite like them right now.

The Free Newsletter Fintech and Finance Execs Actually Read

If you work in fintech or finance, you already have too many tabs open and not enough time.

Fintech Takes is the free newsletter senior leaders actually read. Each week, I break down the trends, deals, and regulatory moves shaping the industry — and explain why they matter — in plain English.

No filler, no PR spin, and no “insights” you already saw on LinkedIn eight times this week. Just clear analysis and the occasional bad joke to make it go down easier.

Get context you can actually use. Subscribe free and see what’s coming before everyone else.

🧱 Around The AI Block

🤖 AI Workout Of The Day: How To Find Research Methodologies Using AI

Whether you're exploring qualitative or quantitative methods, or you're simply trying to understand the best approach for your topic, this prompt will help you find a method that aligns with your research objectives.

💡 The Prompt:

Act as an academic research expert. Your task is to suggest appropriate methodologies for researching [topic]. Provide a comprehensive list of both qualitative and quantitative research methods that are best suited for the subject matter. 

Justify each methodology's relevance and potential advantages, ensuring they align with the research objectives.

Additionally, address any potential limitations or challenges of each method, and offer potential solutions or alternative approaches. Your suggestions should be rooted in academic literature, ensuring their validity and appropriateness for academic research.
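If you want to reuse this prompt programmatically, here is a minimal sketch of filling in the [topic] placeholder before sending the text to whichever chat-style AI tool you use. The helper function name and the example topic are ours, not part of the prompt.

```python
# Fill the [topic] placeholder in the research-methodology prompt.
# The template text below is condensed from the prompt above; the
# function name and example topic are illustrative, not prescribed.

PROMPT_TEMPLATE = (
    "Act as an academic research expert. Your task is to suggest appropriate "
    "methodologies for researching [topic]. Provide a comprehensive list of "
    "both qualitative and quantitative research methods that are best suited "
    "for the subject matter. Justify each methodology's relevance and "
    "potential advantages, ensuring they align with the research objectives."
)

def build_prompt(topic: str) -> str:
    """Replace the [topic] placeholder with a concrete research topic."""
    return PROMPT_TEMPLATE.replace("[topic]", topic)

# Example: ready to paste into ChatGPT, Claude, or any chat API.
print(build_prompt("remote work and employee productivity"))
```

Swap in your own topic string, and the returned text is ready to paste into your AI assistant of choice.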

Is this your AI Workout of the Week (WoW)? Cast your vote!


That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below 👇

What'd you think of today's email?


Your feedback means a lot to us and helps improve the quality of our newsletter.
