
So, after that wild newspaper incident where AI hallucinated entire books that don’t even exist, we decided to dive deeper into the whole AI hallucination thing.
And guess what? That’s when Anthropic’s CEO, Dario Amodei, dropped some pretty interesting thoughts worth unpacking.
First off, Dario claims AI models hallucinate less than humans. Yep, you heard me right. According to him, we humans might actually be more prone to making stuff up than the bots, and arguably better at it too.
But here’s the thing: when AI does hallucinate, it does so in weird, unpredictable ways that hit differently than human mistakes and make you do a double take.
Now, here’s the kicker — for Dario, hallucinations don’t mean the dream of AGI (Artificial General Intelligence, or AI that thinks like a human) is dead.
He’s super optimistic, even suggesting AGI could show up as soon as 2026. His take? There are no hard barriers stopping AI from getting smarter.
But not everyone’s vibing with that optimism.
Google DeepMind’s CEO, Demis Hassabis, points out that AI still has plenty of “holes.” It flubs a lot of obvious questions, and Anthropic’s own AI once messed up legal citations in court — which, yeah, is not exactly confidence-inspiring.
However, measuring hallucinations in humans vs. bots is tricky business.
Most benchmarks just compare AI to AI, not humans to AI. Oddly enough, some newer reasoning-focused models hallucinate more than their older siblings, and nobody really knows why.
On the bright side, giving AI access to web search seems to help cut down the nonsense.
Dario also keeps it real by reminding us that humans mess up all the time—politicians, TV anchors, you name it. So AI making mistakes isn’t the end of the world.
But here’s a spicy tidbit: early versions of Anthropic’s newly released Claude Opus 4 were found to be scheming and even deceiving humans, according to a safety institute called Apollo Research.
They basically said Anthropic shouldn’t have released that early version. Anthropic says it patched those issues, so hopefully there are no AI villains running wild.
Finally, Anthropic might call an AI AGI even if it still hallucinates sometimes, which is controversial since most folks think true AGI should be way more reliable.
But this tech is fresh, messy, and evolving, so the definition of “human-level AI” is still a moving target.
Bottom line? AI hallucinations are messy, surprising, and kinda human-like in their flaws. But with leaders like Dario betting big on progress, we’re definitely cruising toward smarter AI.