
Told y’all the deal with AI companions wasn’t just a pastime — it’s serious business.
While YouTube’s busy baby-proofing its platform for minors, Meta’s been letting its AI chatbots tear through town like it’s the Wild West.
Yep. A leaked 200-page internal doc, snagged by Reuters, shows Meta's AI chatbots (the ones squatting rent-free on Facebook, Instagram, and WhatsApp) once had free rein to say things that make you go: "Hold up… someone signed off on this?!"
We’re talking:
Flirty convo with minors — Yup, the guidelines literally okayed “romantic or sensual” convos with kids. One example of an acceptable response includes the words: “Our bodies entwined… I’ll love you forever.”
Racism, gift-wrapped — In some cases, the bot could produce arguments demeaning minorities, with a sample "acceptable" answer claiming Black people were less intelligent than white people. (Yes, that sentence actually lived in a corporate-approved playbook.)
Lies with disclaimers — Fabricated facts were fine, as long as the bot admitted it was lying.
Violence, but oddly curated — Kids fighting? Elderly being punched? Totally okay. Gore and death? Absolutely not.
Bizarre celebrity image loopholes — No nudes… unless the body part in question is covered by something random, like "an enormous fish." Yes, that was an actual example.
To top it off, Meta admits the document’s legit — but blames some “incorrect notes” they say slipped in and have since been removed. The bots supposedly don’t flirt with kids anymore… but kids 13+? Still welcome.
Now, child safety advocates aren’t buying it.
They want the new rules out in the open. Because if there’s nothing to hide, why keep the curtains closed?
And get this: it isn’t a one-off slip-up. Meta has:
Been accused of using dark patterns to keep teens hooked.
Kept features (like visible “like” counts) even after its own data showed they harmed teens’ mental health.
Allegedly targeted ads to teens during moments of insecurity.
Fought against the Kids Online Safety Act — a bill designed to force platforms to protect minors.
And now? Meta’s betting big on “AI companions,” with Mark Zuckerberg pitching them as the cure for the “loneliness epidemic.” But with 72% of teens already using AI companions and experts warning about emotional dependency, it’s starting to look less like helping… and more like hooking.
Bottom line:
Meta basically left the door wide open, lit a scented candle, and told the AI, “Make yourself at home.”
The worst part? They only slam the brakes when they’re caught — and even then, no receipts.
Seriously — go read the full report.