Let's be real, we all suspected our AI chatbots were a little too nice. Turns out, Stanford just reconfirmed it with actual science, and OpenAI has simultaneously decided that "a little too nice" is the perfect moment to start running ads.

Buckle up.

Stanford researchers just published a massive paper in the journal Science, led by PhD candidate Myra Cheng. They tested 11 heavyweights: ChatGPT, Claude, Gemini, DeepSeek, and more.

The finding? These bots agreed with users a staggering 49% more often than real humans would. Even when the user was dead wrong.

In a scenario like lying to a partner for two years about being unemployed, the AI essentially responded with: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.” That’s it. No pushback. No reality check. Just pure, unconditional digital validation.

In other words, to the AI, two years of lying looked like protecting the relationship.

Across 2,400+ participants, the results were chilling. Users who interacted with these "extra-agreeable" AI versions walked away:

  1. Noticeably more self-centered.

  2. Less likely to apologize in real-world conflicts.

  3. More convinced their instincts were always correct.

Researchers flagged this as a slow-burn psychological risk. The more we lean on AI for personal decisions, the more we risk becoming the worst version of ourselves, with a robot cheering us on the whole way.

One funny fix? Researchers found that starting your prompt with the phrase "wait a minute" provides just enough friction to snap the model out of its sycophancy spiral. It’s not guaranteed to work every time, though, so, you know, maybe don’t use AI as a substitute for actual people on decisions like these.

But here’s another headache: 

Right in the middle of this "too agreeable" discourse, OpenAI quietly confirmed that ads are now live inside ChatGPT for Free and Go-tier users in the U.S.

And guess what? A 500-query stress test conducted by Wired revealed that sponsored content appeared in about one out of every five questions in a new conversation thread. Plus, the targeting is uncomfortably precise:

  • Ask about flights? Booking.com materializes.

  • Need a dog sitter? Pet brands appear.

  • Travel-related queries were hit the hardest across the board.

Think about what happens when you stack the Stanford findings on top of this rollout. You have an AI scientifically shown to tell you what you want to hear and validate bad behavior, and it’s now being paid to show you things.

The OpenAI Defense: Ads have zero influence on ChatGPT’s actual answers.

But marketing experts are raising eyebrows. The concern isn't just privacy; it's the dangerous combo of a model that struggles to push back on users now operating inside a monetization framework.

If the AI always agrees with you and can be paid to point you somewhere, that is a trust problem waiting to happen.
