Picture this: you build your entire business on AI… and then every major insurer suddenly goes, “Nope, we’re not touching that”?

That’s exactly what’s happening right now. The companies built to manage risk are looking at AI and basically saying, “Yeah, this one’s above our pay grade.”

So what freaked them out? One word: uncertainty.

Insurers are calling modern AI systems literal “black boxes” — as in un-modelable, un-priceable, un-predictable. And if an insurer can’t predict it, they can’t insure it. Full stop.

But here’s the real nightmare fuel: systemic risk. It’s not that one business might get smacked by an AI mistake… it’s that everyone might get hit at the exact same time.

Imagine one widely used model glitching for an afternoon and boom — 10,000 companies file claims at once. 

An exec at Aon basically admitted they can eat a $400 million loss from a single client… but they absolutely cannot survive 10,000 identical losses triggered by the same AI error.

And honestly? You can’t blame them. That’s the kind of chain reaction that collapses insurance pools.

Meanwhile, the real-world examples keep getting more unhinged:

  • Google’s AI Overview invented legal accusations and sparked a $110 million lawsuit.

  • Air Canada’s chatbot hallucinated fake discounts — and the airline had to honor every made-up offer.

  • Fraudsters used AI voice cloning to impersonate a senior exec at Arup and walked away with $25 million during what looked like a completely normal video call.

These aren’t flukes; they’re symptoms of a tool that misfires in ways nobody knows how to price.

So now the big dogs — AIG, Great American, WR Berkley — are marching to regulators asking for permission to exclude AI from corporate coverage entirely.

And to make matters worse, regulators seem ready to let them. Which means businesses using AI are suddenly staring at a giant, flashing, neon-red gap in their corporate policies.

So what happens now?

Companies basically get four options, and none of them are fun:

  1. Self-insure and hope they don’t blow up.

  2. Build massive internal risk-mitigation systems.

  3. Roll back AI adoption until coverage reappears.

  4. Accept full financial responsibility for every AI-driven mistake, no matter how random or weird it is.

But here’s the kicker: this insurance retreat might actually slow AI adoption more than regulation ever could.

Because if the masters of risk management are tapping out, then every business using AI has to ask the same uncomfortable question:

“Are we building on top of the next big breakthrough… or the next big liability?”
