Turns out OpenAI isn’t some polished, untouchable AI overlord floating above Silicon Valley.

It’s actually a fast-moving, slightly chaotic, genius-level madhouse — and thanks to a rare inside look, we finally get to see how it really runs.

And honestly? It’s wild.

The behind-the-scenes peek came via a blog post from Calvin French-Owen, an engineer who recently left the company after helping launch one of its biggest new tools: Codex.

Here are some of the highlights:

For starters, OpenAI isn’t the slick corporate machine you might expect. In fact, it feels more like a sleep-deprived college group project that accidentally scaled to 3,000 employees and 500 million users... all while still coordinating on Slack.

Yep. Slack.

In the year Calvin was there, OpenAI’s team grew from 1,000 to 3,000 people — a kind of breakneck growth that, according to him, “broke everything.” Communication, structure, hiring, product development... all stretched to the limit.

But despite its size, OpenAI still moves like a startup.

Anyone can pitch ideas, build fast, and ship without much red tape. Which is exciting — but also super messy, mainly because teams are constantly duplicating work without even knowing it. According to Calvin, there were six different internal libraries built for the same thing. Total overlap.

And the codebase? He simply called it a “dumping ground.”

Thanks to Python’s flexibility, and a mashup of coding styles from ex-Google engineers to fresh-out-of-grad-school PhDs, the codebase turned into exactly that: a place where stuff broke often and processes lagged behind.

Then there's the Codex story.

Calvin’s team — 8 engineers, 4 researchers, 2 designers, 2 go-to-market folks, and 1 product manager — built and launched Codex in just seven weeks. With barely any sleep.

And when they finally pushed it live inside ChatGPT? Boom. Instant users.

No promo. No launch campaign. No announcement. It just quietly appeared in the sidebar — and people started using it right away.

As for the company culture? Think: Meta during its early “move fast and break things” era, except everyone’s on Slack and scrambling to keep up with Twitter.

If a post about OpenAI goes viral, the team sees it immediately. And sometimes? They adjust course based purely on the vibes on X.

Or as Calvin put it: “The company runs on Twitter vibes.”

And then there’s the big question: safety.

The public take is that OpenAI is mostly sprinting ahead with zero concern for safety. Calvin's blog post paints a different picture.

According to him, safety is a priority at OpenAI — just not in the “AI destroys the world” kind of way.

The focus seems to be on real, present-day risks like:

  • Hate speech
  • Political manipulation
  • Self-harm content
  • Prompt injection
  • Bio-weapon prompts

...you know, the stuff that could go really wrong today, not 20 years from now.

As for those doomsday AGI fears? Yep, they’re on the radar — and there are researchers on it. But the priority, for now, is to make sure GPT isn’t helping anyone do something dangerous.

Bottom line?

OpenAI isn’t a squeaky-clean, hyper-optimized AI empire.

It’s a high-stakes experiment — a brilliant, chaotic, ever-evolving machine flying at full speed while still building the engine.

More like a startup soul in a giant’s body — running on Slack, Twitter vibes, and a whole lot of ambition.

If you're curious, definitely check out the full post. It’s one of the most honest looks at OpenAI we’ve seen yet.
