
So… a U.S. federal judge just sided with Anthropic (aka the folks behind Claude) over its use of published books to train AI — without the authors’ consent. 😮
The reason? The court ruled the training was fair use.
Yep, that tricky little corner of copyright law that's super outdated (like, 1976-outdated) but still used to decide whether copying something without permission is okay in certain cases.
And in this case, the fair use doctrine — the same one that lets you make memes, parodies, or write “Star Wars” fanfic (as long as you’re not selling it) — worked in Anthropic’s favor.
Now here’s why this matters:
It’s the first time a court has sided with an AI company using fair use to justify training on copyrighted data.
Before now, the fair use argument was kind of a gamble. But now? It’s got teeth.
And that’s a huge deal...
Because right now, authors, artists, and publishers are suing just about everyone — OpenAI, Meta, Google, Midjourney, you name it — over this exact issue.
So while this isn’t a final “everyone-can-do-it” stamp of approval, it does set the tone for how future cases might unfold — and it definitely tips the scale in Big Tech’s favor.
But hold up — Anthropic isn’t totally off the hook.
Turns out, they didn’t just “accidentally” use the books. They allegedly set out to build a “forever library” of every book in existence — and downloaded millions of those books from pirate sites. Yikes!
And that part, my friends… is still very much illegal.
So while the AI training part got a pass, the court is still moving forward with a trial over:
How they got the books (a.k.a. the pirated content)
Whether they owe damages for using that pirated content
In short:
AI companies just scored a legit court win to lean on
Fair use just got a whole lot stronger as a defense
And creatives? They're in for a long, messy fight
👉 Here’s the full scoop.