
You know how everyone’s hyped about AI taking over the world? Plot twist: the biggest thing holding it back isn’t creativity or raw power—it’s consistency.
Ask ChatGPT the same question twice and you might get two totally different answers. Fun at parties, but a deal-breaker for finance, science, or anything where precision is life or death.
That’s the dragon Mira Murati—yes, OpenAI’s former CTO—is chasing with her new startup, Thinking Machines Lab. And trust me, this isn’t just another “we raised a seed round” story. They’ve pulled in a jaw-dropping $2 billion in funding to go after one of AI’s most fundamental flaws: randomness (or, in nerd terms, nondeterminism).
So what’s nondeterminism?
Think of it like this: if a calculator sometimes said 2+2=4, and other times 2+2=5, you’d never trust it. That’s today’s AI. Sometimes right, sometimes chaos. And sure, unpredictability makes chatbots seem “creative,” but for enterprises and researchers, it’s straight-up unreliable.
According to Thinking Machines researcher Horace He, the problem isn’t the data or even the model design—it’s buried in the GPUs.
See, GPUs crunch numbers by launching thousands of small programs called kernels across huge batches of parallel threads. But the way those kernels get scheduled and their partial results stitched back together? Messy, and it can shift from one run to the next. Multiply that mess across billions of operations, and you get AI that can’t stick to a single story.
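Why does the order of stitching matter at all? Because floating-point addition isn’t associative: combining the same partial results in a different order can produce a different answer. Here’s a minimal, exaggerated sketch in plain Python (no GPU required) of the effect the article is describing:

```python
# Floating-point addition is not associative: combining the same
# numbers in a different order can change the result. Here the
# large values cancel, but only if they meet each other first.
values = [0.1] * 10 + [1e16, -1e16]

forward = 0.0
for v in values:          # small values first: they get absorbed by 1e16
    forward += v

backward = 0.0
for v in reversed(values):  # large values cancel first, small ones survive
    backward += v

print(forward)   # the ten 0.1s vanish into 1e16's rounding
print(backward)  # the ten 0.1s survive
print(forward == backward)
```

On real GPUs the discrepancies are far smaller per operation, but a model runs billions of them, and tiny shifts can flip which token the model samples next.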
Thinking Machines’ bet is simple: control that GPU orchestration, and you get reproducible outputs. Translation: same input, same answer, every time.
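To make that bet concrete, here’s a toy simulation (my own illustration, not Thinking Machines’ method): “scheduling” is a random shuffle of the order partial results get combined, and pinning the order down makes the output bitwise reproducible.

```python
import random

# Partial results a set of "kernels" might produce.
parts = [0.1 * i for i in range(1, 2000)]

def combine(order):
    """Sum the parts in the given order, like a reduction."""
    total = 0.0
    for p in order:
        total += p
    return total

# Nondeterministic "scheduling": combine in whatever order finishes first
# (simulated here by shuffling). Different orders can round differently.
shuffled_runs = set()
for _ in range(20):
    shuffled = parts[:]
    random.shuffle(shuffled)
    shuffled_runs.add(combine(shuffled))
print("distinct results with random order:", len(shuffled_runs))

# Deterministic "scheduling": always combine in the same fixed order.
fixed_runs = {combine(parts) for _ in range(20)}
print("distinct results with fixed order:", len(fixed_runs))  # always 1
```

The shuffled runs typically land on several slightly different values; the fixed-order runs land on exactly one. That, in miniature, is the reproducibility pitch.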
Why this matters:
Enterprises finally get AI they can trust for high-stakes decisions.
In science, reproducibility = credibility. It’s the gold standard, in fact. And AI could actually pass the test as a research partner.
For reinforcement learning, consistent outputs mean cleaner training data and faster, stronger models.
In short: fewer moody bots, more reliable tools.
Of course, this is no quick fix. Tightening GPU orchestration is a moonshot, and Thinking Machines already has a $12B valuation, with pressure to deliver. Now, Murati has teased a first product “for researchers and startups” dropping soon, but whether it ships with this reproducibility magic is still unclear.
What’s refreshing, though, is their new blog, Connectionism, which promises open research in a field that’s gone increasingly closed-door. Whether they stick to that transparency as pressure mounts? TBD.
But here’s the thing: if they pull this off, it’s not just another AI upgrade—it’s a reset button. Consistency could become the new baseline, and Thinking Machines might end up rewriting what we expect from intelligent systems.
If this got you excited, go read the full report. It's packed.