
OpenAI is out here talking about building a trillion-dollar empire of AI data centers. Meanwhile, Satya Nadella quietly dropped a tweet that basically said:
“Cute idea, Sam… but we’ve already got them.” 😏
This week, Nadella shared a video showing off Microsoft's first massive AI system, or, as Nvidia calls it, an AI "factory."
Picture this: more than 4,600 Nvidia GB300 rack computers, each loaded with Blackwell Ultra GPUs, the shiny new chips everyone wants but nobody can get. All wired together by InfiniBand, Nvidia's lightning-fast networking that's basically the equivalent of strapping rocket engines to a server rack.
And here’s the kicker — Microsoft says this is just the first of many.
They’re planning to roll out hundreds of thousands of these GPUs across Azure’s 300+ data centers in 34 countries.
Translation? While OpenAI is still sketching blueprints, Microsoft is already shipping hardware.
And the timing is almost poetic. Just as OpenAI locks in fresh partnerships with Nvidia and AMD for its own AI campuses, Microsoft swoops in to remind the world (politely, of course) that it's been the one hosting OpenAI's models all along.
Microsoft’s message is subtle but loud:
They're not chasing the AI frontier; they built the road to it. And they're not just ready for frontier AI, they're already running it.
We’ll probably hear more soon. CTO Kevin Scott is set to speak at TechCrunch Disrupt later this month, and something tells me he’s bringing receipts.
In the meantime, go see how this move might just reshape the AI race, right here.