
After years of building behind closed doors, OpenAI finally let a little sunlight in.
They just released two open-weight AI models—gpt-oss-120b and gpt-oss-20b—and yeah, the whole thing screams: “Don’t forget we’re still down with the devs!”
Oh and, they’re already up for grabs on Hugging Face, totally free.
Now let’s unpack it, shall we?
First up: what dropped?
gpt-oss-120b – the big one. Roughly 117 billion total parameters, and it runs on a single 80GB Nvidia GPU.
gpt-oss-20b – the lighter sibling. Packs 21 billion parameters and can run on a regular laptop with 16GB of RAM.
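Want to kick the tires? Here's a minimal sketch of loading the smaller model with Hugging Face's transformers library. It assumes `openai/gpt-oss-20b` as the published model ID, a machine with enough memory, and that you've run `pip install transformers torch accelerate` first:

```python
# Minimal sketch, not official docs: running gpt-oss-20b locally with
# Hugging Face transformers. Assumes ~16GB of RAM/VRAM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # model ID as listed on Hugging Face
    torch_dtype="auto",
    device_map="auto",           # GPU if you have one, CPU otherwise
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
out = generator(messages, max_new_tokens=200)

# With chat-style input, generated_text is the whole conversation;
# the last entry is the model's reply.
print(out[0]["generated_text"][-1])
```

That's it. No API key, no usage meter, just weights on your own hardware.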
Both are free. Both are usable. But don’t get it twisted—they’re not the best OpenAI has to offer. They trail behind the company’s private models like o3 and o4-mini, but still outpace popular open alternatives from places like DeepSeek and Qwen.
So… why the sudden generosity?
Let’s be real—this isn’t just a good-will drop. This is OpenAI responding to pressure—on all sides.
Chinese labs (like DeepSeek, Moonshot, and Alibaba’s Qwen) are dominating the open-source space.
The U.S. government is nudging American AI firms to open up for the sake of “values and leadership.”
And Sam Altman literally admitted OpenAI’s been “on the wrong side of history” when it comes to open-sourcing its technologies.
So this drop? It’s a bit of a reputation reset. A strategic flex. A peace offering. Maybe all three.
But are the models any good?
Depends what you mean by “good.”
They handle coding and reasoning pretty well.
Can use tools like a Python interpreter and web search, thanks to some fancy reinforcement learning (there's a quick sketch of tool calling right after this list).
Can hand complex queries off to other AI models in the cloud.
But they’re still text-only—so forget images or audio.
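If you want to poke at that tool-use side, here's a hedged sketch using the standard OpenAI Python client pointed at a local, OpenAI-compatible server (say, vLLM or Ollama serving the 20B model). The base_url, model name, and the get_weather tool are illustrative assumptions on my part, not anything from OpenAI's release notes:

```python
# Hedged sketch: exercising tool calling against a locally served gpt-oss
# model behind an OpenAI-compatible endpoint. Endpoint, model name, and
# the get_weather tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather in Lagos?"}],
    tools=tools,
)

# If the model decides to call the tool, the call (name + JSON arguments)
# shows up in tool_calls instead of plain text.
print(resp.choices[0].message.tool_calls or resp.choices[0].message.content)
```

One note: if the model does request the tool, actually running it and feeding the result back in a follow-up message is on you. The model only decides when to call and with what arguments.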
And here's the kicker: they hallucinate. A lot. Like, 49% (gpt-oss-120b) and 53% (gpt-oss-20b) of the time on PersonQA, OpenAI's own benchmark. That, my friends, is roughly 3x the rate of o1 and still higher than o4-mini.
Basically: use with caution.
Let’s talk licensing.
They're releasing these models under Apache 2.0. In plain English, that means:
You can build with them.
You can monetize them.
No fees. No permission needed.
But you don’t get the training data.
Why? Because lawsuits are looming, so OpenAI’s playing it safe and keeping that part under wraps.
So what’s the bigger picture here?
This isn’t OpenAI going fully open-source. It’s a balancing act—a way to stay relevant in the open-source conversation without giving up the crown jewels.
They’re trying to:
Win back trust from devs
Keep the government off their backs
And still protect their most powerful (and profitable) models
It’s not transparency. It’s tactical generosity. Just enough to stay in the conversation.
And with DeepSeek R2 and Meta’s next-gen open model both looming? Yeah… it’s about to get spicy.