
Okay… this one’s spicy.
Dozens of state attorneys general just teamed up and basically told Microsoft, OpenAI, Google — and pretty much every major AI company — to fix “delusional outputs” in their chatbots… or catch smoke.
And yes, they actually used the word “delusional.”
So what went down?
After a year of chatbots spitting out weird, unhelpful, and sometimes emotionally risky responses, state AGs finally said, “Enough waiting for Congress.”
They sent a massive letter to the top 13 AI companies demanding real safeguards, and warned that failing to deliver could put them in violation of state law.
And their wishlist? It’s… a lot.
They want:
Transparent third-party audits by independent outside experts before models launch
New safety checks to catch misleading or emotionally risky outputs
AI companies treating mental-health-related incidents with the same urgency as cybersecurity incidents
Which means: If a chatbot gives users harmful or delusional responses? Companies should notify people immediately, just like they would after a security or data breach.
And here’s the part AI companies definitely won’t love:
Those third-party testers? AGs want them to be able to publish their findings without company approval — in short, no more controlling the narrative.
That’s a big one, right?
It’s definitely an attempt to make it much harder for AI companies to hide the weird stuff their models do.
So… why now?
Because several headline-making incidents over the last year freaked out regulators — especially when vulnerable users were involved.
BUT. Plot twist.
While the states are gearing up to tighten AI rules… the federal government is moving in the opposite direction.
The Trump administration has been extremely pro-AI and has tried multiple times to block states from setting their own AI rules.
Those attempts didn’t land.
Now, Trump says he’s dropping an executive order next week aimed at limiting what states are allowed to regulate.
So yeah… welcome to the AI policy tug-of-war where:
States want stronger guardrails.
The federal government wants fewer speed bumps.
AI companies are stuck in the middle of two completely different visions for how this whole thing should work.
Big picture
The next year of AI development is basically going to be shaped by this exact fight — who gets to draw the safety lines, and how strict those lines should be.
And right now? No one agrees on where those lines should actually sit.
