
The High Court in England just threw some serious shade at lawyers who rely way too much on AI tools like ChatGPT for legal research.
Why?
Because AI might sound super confident, but it’s basically that one friend who talks big and gets the facts wrong, like, big time.
So, in short, the court made it crystal clear:
Lawyers have a professional duty to fact-check AI outputs using real, reliable sources before bringing that mess to court.
And in case you haven’t heard, this fuss didn’t just pop out of nowhere — two lawyers actually got caught slipping:
One filed court papers with 45 citations, and here’s the plot twist: 18 of those cases were straight-up fake. As in, "did-not-exist" fake. Many of the others "did not contain the quotations that were attributed to them, did not support the propositions for which they were cited, and did not have any relevance to the subject matter of the application."
Another lawyer cited five cases that likely came from AI-generated summaries she found through Google or Safari. She denied using AI directly, but the court wasn’t buying the “accidental” excuse.
In light of all this, Dame Victoria Sharp, President of the King’s Bench Division, made it clear that ignoring your duty to verify citations isn’t just sloppy, it’s serious. And the consequences can range from:
Public scolding (ouch),
Paying costs,
Contempt of court proceedings,
To even police involvement. Yeah, it can get that wild.
The big takeaway? AI is cool — but if you’re a lawyer, don’t let it do your homework solo. Either check your facts or risk some serious legal karma, because in law, “plausible-sounding” just ain’t good enough.
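(Side note for the tech-curious: you could even script a first pass at this check. Below is a minimal Python sketch of the idea, and to be loud about the assumptions: the search URL, the JSON response shape, and the example citations are all hypothetical placeholders, not a real case-law API. You’d swap in whatever database your jurisdiction or firm actually uses.)

```python
import requests  # assumes the 'requests' package is installed

# Placeholder URL -- there is no universal public API for English case law,
# so substitute whichever database your jurisdiction/firm actually provides.
SEARCH_URL = "https://caselaw-search.example/api/search"

def find_suspect_citations(citations):
    """Flag citations that a search turns up zero hits for.

    This is only a first pass: it checks that a cited case exists at all.
    A human still has to read the judgment and confirm it actually
    supports the proposition it's cited for.
    """
    suspect = []
    for cite in citations:
        resp = requests.get(SEARCH_URL, params={"q": cite}, timeout=10)
        resp.raise_for_status()
        # Assumed response shape: {"count": <number of matches>, ...}
        if resp.json().get("count", 0) == 0:
            suspect.append(cite)
    return suspect

# Example usage with made-up citations (not real cases):
if __name__ == "__main__":
    brief = [
        "Smith v Jones [2021] EWHC 123 (QB)",
        "R (Example) v Secretary of State [2019] UKSC 99",
    ]
    for cite in find_suspect_citations(brief):
        print(f"No match found, check by hand: {cite}")
```

And even if you wire up something like this, notice what it can’t do: it only tells you a case exists, not that it says what your brief claims it says. Which is exactly the court’s point: the duty to verify sits with the lawyer, not the script.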
So, what do you think? Should lawyers get AI fact-checkers on speed dial, or stick to the old-school grind?
Either way, this ruling is a solid reminder that AI isn’t the "plug-and-play" magic wand we all want it to be. At least, not yet.