The EU’s first big deadline has arrived!
Plus, unlock how we built The Automated agency with AI + 50% discount for our members

Hello and welcome to The Automated, your AI tour guide.
After years of development, the EU’s AI Act, which officially went into effect on August 1, 2024, hit its first compliance deadline on Sunday, February 2, granting regulators the power to ban AI systems deemed an “unacceptable risk.”
This milestone marks the start of strict oversight across a broad range of AI applications, from consumer products to public environments.
Here’s what we have for you today:
🧐 The EU’s AI Act: What’s Banned, What’s Not, and How to Avoid Fines!
🖼️ Cloudflare Joins the Fight Against AI Fakes with New Image Verification Feature.
🎬 7 ChatGPT prompts to kickstart your YouTube channel.
❌ DeepSeek: The countries and agencies that have banned the AI company’s tech.
🤖 ChatGPT Prompt Of The Day: Segment Your Data.
🧐The EU’s AI Act: What’s Banned, What’s Not, and How to Avoid Fines!

The EU’s AI Act just hit its first big deadline on February 2, and let’s just say—AI companies better have their act together.
As of Sunday (February 2), this law gives EU regulators the power to ban AI systems they consider an “unacceptable risk”—which, in plain terms, means “AI doing really shady stuff.”
So, what counts as shady?
Well, if your AI is doing any of the following, it’s on the banned list:
Predicting crimes based on someone’s face,
Manipulating people’s decisions like a digital puppet master,
Exploiting vulnerabilities like age or disability,
Or scraping the internet for faces to build creepy facial recognition databases.
And if you were thinking of using AI to detect emotions at work or school? That’s also a hard nope.
Basically, if it feels like it belongs in a Black Mirror episode, the EU probably banned it.
Also, AI systems are now categorized into four risk levels:
Minimal risk (like email spam filters) gets no regulatory oversight and is free to roam.
Limited risk (like customer service chatbots), on the other hand, gets light-touch regulations.
High risk (like AI making healthcare recommendations) definitely gets heavy oversight because, you know, people’s lives are involved.
And the most serious, dubbed “unacceptable risk,” includes AI for social scoring, emotion recognition at work or school, and anything trying to guess personal traits like sexual orientation through biometrics—all of which are banned.
So, what happens if companies break the rules?
Fines. Big fines. We’re talking up to €35 million (around $36 million) or 7% of annual revenue—basically, whichever one stings more.
Interestingly, over 100 companies—including Amazon, Google, and OpenAI—signed a voluntary pledge last year to start following the rules early.
Meanwhile, Meta, Apple, and French AI startup Mistral (one of the AI Act’s loudest critics) refused to sign. But here’s the thing: skipping the pledge doesn’t mean skipping the rules—so whether they like it or not, they’re still legally bound to comply.
Oh, and there are a few exceptions too.
Law enforcement can still use biometric AI in public, but only if it’s to locate, say, an abducted person or prevent an imminent threat. Additionally, emotion-detecting AI is allowed for medical or safety reasons—though it requires special approval.
So what’s next?
Well, according to Rob Sumroy, head of technology at the British law firm Slaughter and May, this February deadline was just the warm-up. The real action kicks off in August, when enforcement begins, and regulators start handing out fines.
The EU is also working on additional guidelines to clarify how these AI rules interact with other laws like GDPR and cybersecurity regulations. But for now, AI companies have two choices:
Play by Europe’s new rules—or prepare to pay up. And trust me, nothing ruins a product launch like a €35 million fine.
Want to learn more? [Click here.]
🎁 Referral Radar: From The Automated to The Autonomous Agency 🚀
We’re excited to introduce The Autonomous Agency, the next evolution of The Automated.
After helping 16,000+ subscribers grow with AI-driven insights, we’re now opening up our playbook to help YOU build and scale your personal brand or business with the exact tools and strategies we’ve used.
Here’s how you can get early access and exclusive rewards:
✨ 1 Referral: Unlock exclusive Loom videos where we reveal how we built our processes for creating and managing content, automating workflows, and scaling efficiently.
✨ 3 Referrals: Earn a 50% discount on your first month with The Autonomous Agency. Let’s take your brand or business to the next level with powerful automation!
✨ 5 Referrals: Get a 30-minute one-on-one call where I’ll personally explain how the system works and how you can implement it for your brand or business.
📣 Let’s keep the momentum going—refer your friends and unlock these rewards today! The more you share, the more you gain!
🖼️ Cloudflare Joins the Fight Against AI Fakes with New Image Verification Feature.

In a world where AI-generated content is everywhere, Cloudflare is stepping up to help users verify what’s real.
The web security giant has just integrated Adobe’s Content Credentials system, making it easier to track the authenticity of images online.
Adobe’s Content Credentials is a digital metadata system that attaches key details to images and videos—who owns them, where they’ve been posted, and whether they’ve been altered, including by AI tools.
This effort is part of the Content Authenticity Initiative (CAI), a cross-industry group that includes Microsoft, Nvidia, Getty Images, Shutterstock, and major news outlets like the BBC and The New York Times.
With Cloudflare joining the CAI, users hosting content on Cloudflare Images can enable a new “Preserve Content Credentials” setting with a single click.
Once activated, anyone viewing or downloading an image can verify its digital history via Adobe’s content authenticity web tool or Chrome extension.
Cloudflare’s move could massively expand the reach of Content Credentials.
The Content Credentials system, built on open-source standards from the Coalition for Content Provenance and Authenticity (C2PA), aims to protect artists' and photographers' attribution while helping users identify genuine images and videos versus AI-generated or manipulated content.
The company estimates that roughly 20% of the entire web runs through its network, meaning this verification system could become a standard feature across a huge chunk of online images.
“The future of the Internet depends on trust and authenticity,” said Cloudflare CEO Matthew Prince. “By integrating Content Credentials across our global network, we can help media and news organizations verify authenticity and maintain ownership of their work, wherever it moves online.”
With AI-generated content flooding the internet, tools like Content Credentials are becoming essential for maintaining trust and authenticity.
Click here to learn more about what this means for the future of online trust.
If you're frustrated by one-sided reporting, our 5-minute newsletter is the missing piece. We sift through 100+ sources to bring you comprehensive, unbiased news—free from political agendas. Stay informed with factual coverage on the topics that matter.
🧱Around The AI Block
🎬 7 ChatGPT prompts to kickstart your YouTube channel.
❌ DeepSeek: The countries and agencies that have banned the AI company’s tech.
🎉 The Beatles won a Grammy last night, thanks to AI.
🥇 DeepSeek founder Liang Wenfeng receives a hero’s welcome back home.
🌞 Meta turns to solar — again — in its data center-building boom.
🕵️‍♂️ US probes DeepSeek's use of banned chips after chatbot scores just 17% accuracy.
™️ OpenAI’s new trademark application hints at humanoid robots, smart jewelry, and more.
🛠️ Trending Tools
Genei: Helps you automatically summarize background reading and produce blogs, articles, and reports faster.
Huru: Is an AI-powered job interview prep app that helps you practice unlimited interviews and get immediate feedback.
EmailTree: Helps to automate your responses with an AI-powered tool that leverages your internal knowledge base for swift and accurate replies.
Pixite: Offers AI-driven custom clothing design and printing.
Sana AI: Is a free all-in-one assistant that analyzes, drafts, finds, and automates across your apps.
🤖ChatGPT Prompt Of The Day: Segment Your Data.
Segmenting data allows analysts to identify patterns and trends within specific groups, providing more targeted insights.
It enhances decision-making by breaking down complex datasets into manageable, relevant segments. This approach also improves the accuracy and effectiveness of predictions and strategies.
Here's a prompt that can help you with that.
Act as a data analysis expert. Your task is to segment data into groups based on [specified criteria]. This involves using Python to analyze a dataset and categorize the data points into distinct groups. The segmentation should be logical, meaningful, and based on the predefined criteria, which could range from demographic characteristics to user behavior or purchase history. Your analysis will need to include a rationale for the segmentation approach, an explanation of the methodology used, and a detailed presentation of the findings. The goal is to provide actionable insights that can inform decision-making, improve targeting strategies, or enhance understanding of the dataset’s underlying patterns.
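For illustration, here’s a minimal sketch of the kind of segmentation a response to this prompt might produce, using Python with pandas and scikit-learn. The dataset, column names, and number of segments below are hypothetical stand-ins for whatever criteria you specify:

```python
# A minimal sketch of criteria-based segmentation with pandas + scikit-learn.
# The dataset, column names, and number of segments are hypothetical examples.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical customer data: age, annual spend, and visits per month.
df = pd.DataFrame({
    "age":          [22, 35, 58, 41, 29, 63, 47, 33],
    "annual_spend": [180, 950, 420, 1300, 260, 510, 1100, 700],
    "visits_month": [1, 6, 2, 8, 2, 3, 7, 5],
})

# Standardize features so no single column dominates the distance metric.
features = StandardScaler().fit_transform(df[["age", "annual_spend", "visits_month"]])

# Group customers into three segments; the segment count is an assumption for the demo.
df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(features)

# Summarize each segment to explain what distinguishes it.
print(df.groupby("segment").mean().round(1))
```

If your criteria are explicit (say, spend thresholds or age brackets) rather than patterns to be discovered, simple rule-based filters work just as well as clustering, and they are easier to explain in the rationale the prompt asks for.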
We've Compiled a List of Over 100 ChatGPT Power Prompts.
This should help streamline your interactions with ChatGPT and get the results you need more efficiently.
Best of all, it's free!

That's all we've got for you today.
Did you like today's content? We'd love to hear from you! Please share your thoughts on our content below👇
What do you think of today's email?
Your feedback means a lot to us and helps improve the quality of our newsletter.