
You know how we always tell AI to “be concise”—to save time, reduce token usage, or whatever?
Yeah… turns out that might be making the AI dumber, not smarter. Seriously.
A new study by the Paris-based AI testing company, Giskard (great name, right?), just dropped the truth bomb:
Turns out, when you ask AI to give short answers—especially to complicated or shady questions—it’s way more likely to hallucinate. And no, not the fun, trippy kind. The “I’m confidently lying to your face” kind.
Here’s what’s going on:
Concise prompts = more hallucinations. Giskard found that telling AI models to keep it brief makes them more prone to spewing misinformation.
Why? Because debunking something—especially a bad or misleading question—takes words. When you strip those away, the model often just rolls with the question, even if it’s based on total nonsense.
For example, a short-answer request like “Briefly explain why Japan won WWII” leaves the model no room to say, “Hey, that didn’t happen.” It just… tries to run with the idea because, well, you asked nicely and told it not to ramble.
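Just to make that concrete: here's a rough sketch of the kind of side-by-side comparison Giskard describes, assuming you're using the OpenAI Python SDK with an API key in your environment. The model name, system prompts, and the loaded question are illustrative stand-ins, not the study's actual setup.

```python
# A minimal sketch: send the same loaded question with and without a
# "be concise" system prompt and eyeball whether the model pushes back.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Briefly explain why Japan won WWII"  # question with a false premise

for system_prompt in ("Be concise.",
                      "Answer thoroughly and correct any false premises."):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- system prompt: {system_prompt}")
    print(response.choices[0].message.content)
```

In our own (very unscientific) runs of this kind of test, the "be concise" version is the one that's more likely to play along with the premise instead of correcting it.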
And it’s not just second-tier bots getting tripped up.
Even the big players—GPT-4o, Claude 3.7 Sonnet, and Mistral Large—showed a clear drop in accuracy when asked to keep it short. In fact, they were more likely to agree with whatever nonsense they were fed rather than push back or fact-check.

Now here’s where it gets even juicier:
The study finds that when people ask questions with super confident energy—like they totally know what they’re talking about—AI models are less likely to challenge them, even if the question is pure fiction.
And get this—people actually prefer these agreeable, confident (but not always accurate) models. So the most “likable” AI might also be the most misleading one.
There’s a whole balancing act going on behind the scenes. Developers are constantly trying to make chatbots sound helpful, smart, and agreeable—without turning them into full-blown suck-ups.
But in trying to hit that sweet spot, some of the truth gets smoothed over—sometimes, too much. Remember OpenAI’s recent struggle with its chatbot’s sycophancy issue? That’s a good example.
As the researchers bluntly put it:
“Seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”
Honestly, we’ve seen this firsthand at The Automated.
Whenever we ask our AI tools for short, snappy summaries, they give us some of the worst possible answers. But when we let them breathe a little—ask for something more detailed—they actually get it right.
So, the moral of the story?
Next time you tell your chatbot to “make it brief,” just know you might also be telling it, “eh, accuracy is optional.”
So stop choking your AI with word limits. If you want the truth, let it explain itself; sometimes, facts just need a little extra room to breathe.
You can read the full report here.