In partnership with

Welcome Automaters, 👋

So, Google didn't just release a new family of AI models. It quietly handed developers something far more powerful: freedom.

Here's what we have for you today

🦾 Google Releases Gemma 4 Under Apache 2.0

Google has unleashed Gemma 4, a brand-new family of open AI models, and the developer community is doing a collective happy dance.

Built on the same "secret sauce" powering Gemini 3, Gemma 4 comes in multiple sizes, from a tiny 2 billion parameters to a massive 31 billion. Whether you are running AI on a laptop or a beefy server farm, there is a Gemma 4 with your name on it.

Meet the Four Models: 

But first, get this: Gemma 4 is organized into two distinct tiers.

1. Edge Tier: These models are designed to live natively on your devices.

  • E2B (The Tiny Giant): Don't let the name fool you. It looks like a 2B model but packs 5.1 billion parameters using a clever "Per-Layer Embeddings" trick. It handles text, images, and audio natively for live offline translation.

  • E4B (The Pocket Mathlete): Still phone-friendly but with serious muscle. It scored 42.5% on the AIME 2026 math test, a jaw-dropping result for something that fits in your pocket.

Both edge models support a 128K-token context window — that's roughly the length of an entire novel worth of memory per conversation.

2. Workstation Tier: These models are built for developers who need local "frontier-class" intelligence.

  • 26B A4B (The Specialist): A Mixture-of-Experts (MoE) model. It carries 25.2 billion parameters but only "wakes up" 3.8 billion at a time. You get 30B-class intelligence at the speed and cost of a 4B model.

  • 31B Dense (The Absolute Unit): All 31 billion parameters are active all the time. It scored an eye-watering 89.2% on AIME 2026, beating models 20 times its size on the Arena AI leaderboard.
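A quick back-of-envelope check on the MoE claim above: per-token compute roughly tracks the *active* parameter count, not the total, which is why a 25B-parameter model can run like a 4B one. Using the figures quoted for the 26B A4B:

```python
# Figures quoted in the article for the 26B A4B MoE model.
total_params = 25.2e9    # parameters stored in memory
active_params = 3.8e9    # parameters that "wake up" per token

# Rough proxy for per-token compute vs. a dense model of the same total size.
active_fraction = active_params / total_params
print(f"About {active_fraction:.0%} of the weights fire per token")
```

So you pay memory for the full 25.2B, but only compute for roughly 15% of it on each token.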

Unlike older models that had vision and audio awkwardly "stitched" on, Gemma 4 is natively multimodal from scratch:

  • Vision: It reads images, documents, and video frames with high-detail OCR.

  • Live Audio: On-device speech recognition and translation (Edge models only).

  • Function Calling: All four models are trained from the ground up to use tools and interact with software for complex, multi-step tasks.

  • Global Fluency: Supports 140+ languages natively.

But here’s where it gets legendary: previous Gemma models came with Google's own custom license that had legal teams sweating. This time, Google slapped an Apache 2.0 license on the entire family, and in case you don't know, that's the same no-strings-attached license used by most of the open-source AI world.

In plain English? Startups and indie builders can grab these models, use them commercially, modify them, and build empires without paying Google a single cent. In a world of restrictive licenses, this is the AI equivalent of finding a sports car with free lifetime fuel.

The Reality Check: Gemma 4 is straight-up embarrassing its predecessor. The tiny E2B now outperforms the old Gemma 3 27B on major benchmarks. It’s like the intern outperforming last year’s senior manager. Awkward, but amazing for us.

You can grab the weights right now on Hugging Face, Kaggle, or Ollama. Or go learn more here.
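If you want to poke at the weights from Python, a minimal sketch with the Hugging Face transformers library might look like this. Note: the model id `google/gemma-4-e2b` is our guess at the naming scheme, not a confirmed repo name, so check the actual Hugging Face listing first.

```python
def build_chat(user_text: str) -> list[dict]:
    # Chat-message format accepted by transformers text-generation pipelines.
    return [{"role": "user", "content": user_text}]

def run_gemma(prompt: str, model_id: str = "google/gemma-4-e2b") -> str:
    # Lazy import so the sketch doesn't require transformers until you run it.
    from transformers import pipeline  # pip install transformers

    # Downloads the weights on first run; expect a wait on CPU.
    generator = pipeline("text-generation", model=model_id)
    return generator(build_chat(prompt))[0]["generated_text"]
```

Call `run_gemma("Summarize the Apache 2.0 license in one sentence.")` once the real model id is confirmed.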

88% resolved. 22% stayed loyal. What went wrong?

That's the AI paradox hiding in your CX stack. Tickets close. Customers leave. And most teams don't see it coming because they're measuring the wrong things.

Efficiency metrics look great on paper. Handle time down. Containment rate up. But customer loyalty? That's a different story — and it's one your current dashboards probably aren't telling you.

Gladly's 2026 Customer Expectations Report surveyed thousands of real consumers to find out exactly where AI-powered service breaks trust, and what separates the platforms that drive retention from the ones that quietly erode it.

If you're architecting the CX stack, this is the data you need to build it right. Not just fast. Not just cheap. Built to last.

🧱 Around The AI Block

🤖 AI Workout Of The Day: Problem-Solving & Strategy Prompt

Ever asked an AI for help, but got vague or surface-level answers? That’s because the way you prompt decides the quality of the response. 

Today’s prompt is designed to make AI think like a strategist: breaking down problems step by step and giving you a clear action plan.

How to Use It Effectively

Here’s the trick: don't just throw in a generic problem like ‘I need business advice.’ Be specific. Say something like ‘I’m growing my tutoring business, but I suck at marketing and time management.’ The more detail and context you give, the smarter and deeper ChatGPT’s response gets.

💡 Prompts to try:

You are an analytical problem solver. Given the situation: [INSERT PROBLEM/SCENARIO], identify the main challenges, analyze possible solutions with pros/cons, and propose a structured action plan. Present your reasoning clearly, and finish with a concise recommendation.
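If you find yourself reusing the template, you can fill the [INSERT PROBLEM/SCENARIO] slot programmatically before pasting it into ChatGPT. A tiny sketch (the helper name `fill_strategy_prompt` is ours):

```python
STRATEGY_PROMPT = (
    "You are an analytical problem solver. Given the situation: {problem}, "
    "identify the main challenges, analyze possible solutions with pros/cons, "
    "and propose a structured action plan. Present your reasoning clearly, "
    "and finish with a concise recommendation."
)

def fill_strategy_prompt(problem: str) -> str:
    # Swap your specific, context-rich problem into the template slot.
    return STRATEGY_PROMPT.format(problem=problem)

print(fill_strategy_prompt(
    "I'm growing my tutoring business, but I struggle with "
    "marketing and time management"
))
```

The more concrete the `problem` string, the better the model's answer, exactly as described above.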

Is this your AI Workout of the Week (WoW)? Cast your vote!


That's all we've got for you today.

Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below 👇

What'd you think of today's email?


Your feedback means a lot to us and helps improve the quality of our newsletter.

