There’s a new wave of AI-generated videos flooding the internet at the speed of light.

Video creation tools like Google’s Veo 3 and OpenAI’s Sora are turning basic prompts into videos so real, they’re literally breaking people’s brains.

We’re talking...

  • A girl stepping into the ocean, with sunlight dancing on the waves.

  • Chicken drumsticks so juicy you can see the oil drip.

  • Racist clips so disturbingly vivid you’d swear they’re real—but plot twist: it’s all fake.

  • And yes, even glass fruits being sliced with the crispest, most oddly satisfying sound you’ve ever heard.

And guess what? Every single one of these videos stars people who don’t actually exist.

Unsurprisingly, these AI creations are everywhere—from TikTok to Instagram—and they’re going mega viral. One ASMR-style video of a glass kiwi got 68 million views in just 8 days.

And the craziest part? It’s not just pros behind this…

People who’ve never touched a camera are now creating cinematic content and racking up followers. One woman quit her office job and now has over 20K followers on Instagram, all thanks to AI.

So, why’s everyone so hooked?

  • These videos blur the line between real and fake so well, people are basically playing “Spot the AI” for fun.

  • Lip-syncing is finally on point—no more creepy, laggy mouths.

  • And the visuals? Packed with tiny human details like skin texture, awkward pauses, eye twitches, and stray baby hairs. It’s uncanny.

There’s even a video where a fake YouTuber interviews a fake passerby on a fake Seoul street—and honestly? You’d never guess it wasn’t real.

But of course, with realism this good… things are getting sketchy, too.

  • Some creators are pushing weird boundaries—like AI women in bikinis delivering news or eating noodles for views. (Yes, this is real.)

  • Others are using AI to spread fake news, with ultra-convincing interviews and misleading content.

  • And yep, the deepfake panic is back—but this time it’s on steroids.

A media professor said it best: we're sliding into a “post-truth society”, where vibes and aesthetics might matter more than actual facts. Yikes.

So, what now?

  • South Korea’s stepping in. Starting January, all AI-generated content—including Netflix-style dramas—will legally need a clear label.

  • AI firms and social media giants are cracking down with tough new rules to block harmful AI-generated videos before they spread.

  • Experts are pushing for something called “AI discernment”. Think of it as media literacy 2.0: where you stop trusting your eyes and start asking, "How was this made?"

Because honestly? If we don’t build stronger BS detectors, we’re toast.

The bottom line?

AI videos are fun, freaky, and full of wild potential—but they’re also messing with our ability to tell what’s real.

Welcome to the post-truth content era. And yup… good luck out there. 😅
