
First, Grok 3 went totally off the rails with that wild Hitler comment. Then Grok 4 showed up to clean up the mess — but honestly? It still feels a little... glitchy.
And now? We’ve got a fresh wave of AI drama — this time starring ChatGPT, Gemini, Meta AI, and Microsoft’s Copilot — all because someone didn’t like where Donald Trump landed on a presidential ranking list.
Here’s the tea:
Missouri Attorney General Andrew Bailey is now going after the biggest AI players, accusing them of “deceptive business practices” — all because their chatbots allegedly ranked Trump last when asked to rank the last five presidents by antisemitism. (Yeah, you read that right.)
According to Bailey, the bots gave “factually inaccurate” answers and misled the public by pushing biased narratives — all while pretending to be neutral and fact-based.
And hey, he’s not just mad — he wants receipts. And by receipts, we mean…
Every internal doc related to how these bots are trained, how prompts are filtered, and whether anything was suppressed, down-ranked, or “deliberately curated” to push a certain outcome.
Basically… he’s asking for the keys to the entire AI castle. 🏰
Now, here’s where it gets interesting...
Microsoft’s Copilot — one of the accused — didn’t even answer the question. It reportedly refused to rank the presidents… but that didn’t stop Bailey from firing off a letter to Satya Nadella like they'd been caught red-handed.
Each of Bailey’s letters claims that three chatbots ranked Trump last… yet he sent them to four companies.
Oh, and this entire drama? It’s based on a blog post from a conservative site. Not peer-reviewed research. Not a fact-checked report. Just… a random blog.
And the kicker? Bailey wants this to be the reason Big Tech loses its Section 230 protections (the legal shield that stops platforms from being sued over user-generated content), all based on the claim that this somehow counts as political censorship.
Let’s be real: AI chatbots hallucinate all the time. They make up facts, get things wrong, and sometimes go off the rails just because it’s Tuesday. But building a whole legal case because a bot didn’t flatter your favorite politician? That’s a stretch.
Even if you’re Team Trump… this ain’t it. It’s giving political theater with a dash of “I don’t really get how AI works.”
So now we’re in this weird moment where:
AI is being dragged for being both too political and not political enough.
AGs are treating chatbot rankings like official government documents.
And tech companies are stuck out here explaining that bots sometimes just… make stuff up.
So yeah — first Grok melts down, and now the rest of the class is getting hauled into the principal’s office. 😅
You should definitely check out the full report.