Alright, so Google is moving FAST right now. 

Like… “don’t blink or you’ll miss three product launches” fast.

Because yesterday, literally right after OpenAI rolled out its new image generator inside ChatGPT, Google responded by:

  • Dropping Gemini 3 Flash

  • Making it the default model in the Gemini app and Search

  • And quietly sliding its vibe-coding tool, Opal, straight into Gemini

Yeah. That’s a lot. So let’s break it down.

First up: Gemini 3 Flash

This is Google’s fast-and-cheap workhorse model, built on Gemini 3 from last month. And let’s be honest—it’s very clearly designed to steal some thunder from OpenAI.

And… it kinda does.

On benchmarks, Gemini 3 Flash absolutely clears its predecessor.

  • On Humanity’s Last Exam — a benchmark basically designed to embarrass models — it scored 33.7% without tools

  • For context: Gemini 2.5 Flash was at 11%, Gemini 3 Pro at 37.5% and GPT-5.2 at 34.5%

  • On MMMU-Pro, which tests multimodal reasoning, it scored an insane 81.2%—the best score in the group.

  • For Coding: Google says it's faster and more capable than 3 Pro, and it excels at rapid, iterative development (it also scored 78% on SWE-bench Verified, the agentic coding benchmark)

  • For Gaming: It boasts superior video analysis + near-real-time reasoning, enabling smarter characters & more realistic worlds

  • For Deepfake Detection: It powers Resemble AI for near-real-time analysis, translating complex forensic data into simple insights.

So yeah—Flash, which used to be the fast, cheap, kinda-meh option, is now playing in frontier territory.

Now, the consumer side

Gemini 3 Flash is now the default model globally in the Gemini app. You can still switch to Pro for hardcore math or deep coding — but for most people? Flash is it.

And Google’s play is obvious: speed + multimodality wins.

Because here’s the thing — Gemini 3 Flash isn’t just spitting out answers. It actually understands: videos, sketches, audio, messy prompts… even vague intentions. Then it responds instantly with visuals, tables, and structured answers — basically understanding what you want without you having to over-explain.

You can:

  • Upload a pickleball clip and get technique tips

  • Sketch something terribly and have Gemini guess what it is

  • Drop in audio and turn it into analysis or a quiz

  • Even generate app prototypes straight from a prompt
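
If you'd rather hit that multimodal stuff from code, here's a minimal sketch of the upload-then-ask pattern using Google's google-genai Python SDK, applied to the pickleball example. One caveat: "gemini-3-flash" is my guess at the model id, so check Google's model list for the published string.

```python
# pip install google-genai
import time
from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set in your environment

# Upload the clip; video files need a moment of server-side processing
# before the model can read them.
clip = client.files.upload(file="pickleball_rally.mp4")
while clip.state and clip.state.name == "PROCESSING":
    time.sleep(2)
    clip = client.files.get(name=clip.name)

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed model id; swap in the real one
    contents=[clip, "Watch my form and give me three concrete technique tips."],
)
print(response.text)
```

Same pattern for the sketch and audio examples: upload the file, then just ask.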

Which… leads perfectly into the second big update.

Opal, now inside Gemini

If you’ve heard the term “vibe-coding,” this is that, but Google-ified.

Opal lets you build AI-powered mini apps just by describing what you want in plain English. Inside Gemini, these show up as “Gems”: y’know, custom versions of Gemini built for specific jobs.

Now, here’s how it works:

  1. You describe the app

  2. Gemini turns it into steps

  3. You tweak the flow

  4. And boom: you’ve got a reusable mini-app
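
Opal itself is point-and-click, so you never see code, but conceptually a mini-app is just a reusable chain of prompt steps. Here's a purely illustrative Python sketch of that idea (my own toy format, not Opal's actual one, and again with a guessed model id):

```python
from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set

# Illustrative only: a "mini-app" as an ordered chain of prompt steps,
# where each step's output feeds the next. Not Opal's real format.
QUIZ_MAKER = [
    "Summarize the following notes into 5 key points:\n\n{input}",
    "Turn these key points into a 5-question multiple-choice quiz:\n\n{input}",
]

def run_mini_app(steps, user_input, model="gemini-3-flash"):  # assumed id
    text = user_input
    for prompt in steps:
        resp = client.models.generate_content(
            model=model, contents=prompt.format(input=text)
        )
        text = resp.text
    return text

print(run_mini_app(QUIZ_MAKER, open("lecture_notes.txt").read()))
```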

If you want more control, there's an Advanced Editor for that. But the real point is this:

Google just made app-building feel casual.

Now, let’s talk pricing.

Gemini 3 Flash comes in at about $0.50 per million input tokens and $3 per million output tokens. It's slightly pricier than 2.5 Flash. But Google says it’s 3× faster and uses 30% fewer tokens on reasoning-heavy tasks.

Which means… for a lot of real-world workflows, it’s actually cheaper in practice.
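
You can sanity-check that with napkin math: cost per call is tokens ÷ 1M × rate, summed across input and output. A quick Python sketch, assuming 2.5 Flash ran about $0.30 in / $2.50 out per million (verify against Google's current pricing page) and reading "30% fewer tokens" as fewer output tokens:

```python
def cost(tokens_in, tokens_out, in_rate, out_rate):
    """Dollar cost of one call: tokens / 1M * rate, per direction."""
    return tokens_in / 1e6 * in_rate + tokens_out / 1e6 * out_rate

# Rates in $ per 1M tokens; the 2.5 Flash figures are my assumption.
old = cost(5_000, 8_000, in_rate=0.30, out_rate=2.50)         # 2.5 Flash
new = cost(5_000, 8_000 * 0.70, in_rate=0.50, out_rate=3.00)  # 3 Flash
print(f"2.5 Flash: ${old:.4f} | 3 Flash: ${new:.4f}")
# -> 2.5 Flash: $0.0215 | 3 Flash: $0.0193
```

The savings show up when output (read: thinking) tokens dominate the bill, which is exactly the reasoning-heavy case Google is pointing at. Input-heavy jobs can actually come out a bit pricier.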

And honestly, this is the bar now. 

To stay ahead in this fierce, real-deal AI arms race, you need models that are not just smart, but faster, leaner, and more efficient.

So go try it out. I’m sure Google wouldn’t say all this to hype itself up… right?

But also — run it yourself. Benchmarks are cute, but experience is way louder.
