Three Chinese AI labs allegedly created 24,000 fake accounts to steal Claude’s smartest tricks. Yes, really.

Picture this: You spend years becoming the smartest kid in class. Then one day you find out three other kids made 24,000 fake hall passes, snuck into your study hall, and copied your answers 16 million times. That is basically what Anthropic says just happened to Claude.

The Details: In a bombshell blog post, Anthropic accused Chinese AI giants DeepSeek, Moonshot AI, and MiniMax of using a technique called "distillation." Think of it as AI plagiarism: you prompt a smarter model (Claude) millions of times and use its clever answers to train your own cheaper model.
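In code terms, the distillation loop described above is simple: harvest the teacher's answers, then use them as supervised training data for the student. Here's a minimal sketch; `teacher_answer` is a hypothetical stand-in for calls to the commercial model's API, not any real endpoint.

```python
# Sketch of distillation data collection: prompt a "teacher" model and
# save its answers as fine-tuning examples for a cheaper "student" model.

def teacher_answer(prompt: str) -> str:
    # Hypothetical placeholder for an expensive teacher-model API call.
    return f"detailed answer to: {prompt}"

def build_distillation_dataset(prompts):
    # Each (prompt, teacher answer) pair becomes one supervised
    # fine-tuning example for the student model.
    return [{"prompt": p, "completion": teacher_answer(p)} for p in prompts]

dataset = build_distillation_dataset(
    ["Write a sorting function", "Explain recursion"]
)
print(len(dataset))  # one training example per prompt
```

Repeat that millions of times, and the student inherits much of the teacher's behavior without anyone paying for the teacher's training run.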

The "Heist" by the Numbers is no joke:

  • MiniMax racked up a staggering 13 million exchanges targeting agentic coding, tool use and orchestration.

  • Moonshot AI clocked in at over 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision.

  • DeepSeek reportedly went after Claude’s ability to handle "censorship-sensitive" and policy-sensitive queries, with over 150,000 exchanges.

Why this is a big deal: This isn't just about hurt feelings. It is a massive security risk. According to Anthropic's blog post, models built from "stolen" data can skip the expensive safety training that prevents an AI from, say, giving you instructions for building a bioweapon or launching a cyberattack, or worse, enabling offensive cyber operations, disinformation campaigns, and mass surveillance.

The timing of this announcement is no accident; if you ask me, it's rather spicy. The U.S. government is currently debating whether to tighten controls on selling advanced AI chips to China.

By going public now, Anthropic is essentially sending a carrier pigeon to Washington, D.C. that says: "See? This is exactly why we need to keep the high-tech chips at home." Because if you can't build the brain from scratch, you steal the thoughts of the one that already exists.

The Big Picture 

As AI adoption goes global, the "trust" factor is everything. If the models powering future apps are built on siphoned data without safety checks, we are moving into a very "Wild West" era of technology.

Anthropic is now rolling out new detection tools to spot these "distillation attacks" in real-time. They are basically installing security cameras in the classroom to make sure nobody is peeking at their test paper.
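Anthropic hasn't published how its detection works, but the simplest version of a "security camera" for distillation is volume-based anomaly detection: legitimate users rarely fire millions of structured prompts at an API. A toy sketch, with an invented log format and threshold:

```python
# Toy sketch of distillation-attack detection via request-volume anomalies.
# The log format and threshold are invented for illustration; real systems
# would also look at prompt patterns, account clusters, and timing.

from collections import Counter

def flag_suspicious_accounts(request_log, threshold=1000):
    # request_log: iterable of account IDs, one entry per API request.
    # Any account whose request count exceeds the threshold gets flagged.
    counts = Counter(request_log)
    return {acct for acct, n in counts.items() if n > threshold}

log = ["acct_a"] * 5 + ["acct_b"] * 2000
print(flag_suspicious_accounts(log))  # {'acct_b'}
```

Coordinated attackers spread traffic across thousands of fake accounts precisely to stay under per-account thresholds like this, which is why 24,000 accounts were allegedly needed in the first place.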

The Lesson: If you want to be the smartest kid in class, you eventually have to do your own homework
