I know — you’ve probably heard that using ChatGPT (or any bot) to learn is totally fine. But yeah… scratch that.
Because apparently, even Kim Kardashian — reality TV royalty, business mogul, and future lawyer — just revealed her “AI study buddy” straight-up made her fail her law exams.
In a Vanity Fair interview, Kim confessed she’s been relying on ChatGPT for legal questions. Like she'd literally snap a pic of her test materials, toss it into the bot, and wait for the magic.
Except the magic wasn’t real — it was hallucinated.
And the results? Totally wrong. She says the AI’s bad advice made her fail tests.
So, for the millionth time, here’s what’s actually happening:
ChatGPT doesn’t look facts up or “know” anything. It predicts the next chunk of text based on patterns in its training data, and when those patterns don’t cover your question, it fills the gap with whatever sounds plausible (confidently).
It’s like that one friend who says everything with a straight face, even when they’re guessing.
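If you want to see the core idea in miniature, here’s a toy sketch in Python. Emphasis on toy: a real model is a giant neural network, not a word-pair counter, and the names here (the three-sentence corpus, `predict_next`) are just made up for illustration. But the failure mode rhymes: it learns word patterns from a few sentences, then confidently “answers” a question about a country that doesn’t exist.

```python
import random
from collections import Counter, defaultdict

# Tiny "training data": the only patterns this model will ever know.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
).split()

# Count which word tends to follow which (a crude bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the likeliest next word. Note what's missing: any notion of truth."""
    counts = follows.get(word)
    if not counts:
        # Never seen this word? There's no "I don't know" option here;
        # just grab something that looks statistically plausible.
        return random.choice(corpus)
    return counts.most_common(1)[0][0]

# Ask about a country that doesn't exist. The model doesn't blink:
prompt = ["the", "capital", "of", "wakanda", "is"]
print(" ".join(prompt), predict_next(prompt[-1]))
# -> "the capital of wakanda is paris" (fluent, confident, wrong)
```

Notice the toy never even checks whether Wakanda is real; it only looks at the last word and continues the pattern. Scale that same instinct up by a few billion parameters and you get fluent, confident hallucinations.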
And get this: these AI hallucinations are a massive deal.
Lawyers have already been hit with sanctions for filing briefs citing fake cases generated by AI.
Students have failed assignments because their “AI tutor” lied to them.
And doctors? They’ve flagged ChatGPT-style systems for giving dangerously inaccurate medical advice.
Kim’s experience, plus the tragic AI cases we covered earlier, basically slaps a glossy celebrity filter on a much bigger issue:
We’re trusting machines that sound smart but don’t actually understand anything.
To be fair, we’ve also seen some incredible wins: people using ChatGPT or Claude to challenge bogus medical bills, draft small-claims suits, handle complex legal paperwork, and actually win cases.
But keep in mind: those are the best-case outcomes, and they make headlines precisely because they’re the exception. Nobody writes a viral thread about the time the bot quietly got everything wrong. For everyone else, proceed with caution.
The moral: AI can be helpful, but it’s not holy. Always double-check. Cross-verify. And remember — confidence doesn’t always equal correctness.
So yeah… maybe don’t fully trust ChatGPT as your law school study buddy. 😉
