
The European Union's AI Act is officially a thing now, and if you work with AI in any capacity, you probably should care. But here's the problem: almost nobody actually understands what it requires yet.

What Even Is the AI Act?

The AI Act requires companies to be more transparent about how they develop their models, and it makes both those companies and the organizations deploying high-risk AI systems more accountable. It's the world's first comprehensive AI regulation, and it takes a risk-based approach: different rules for different categories of AI application.

Think of it like this: the EU looked at AI and said "some of this stuff is dangerous, some of it is fine, and we need rules that reflect that." So they created tiers. Some AI uses will be banned in the EU entirely, such as scraping facial images to build recognition databases the way Clearview AI does, or using emotion recognition technology at work or in schools.

High-risk AI—stuff used in education, healthcare, policing, hiring—gets strict requirements. General purpose models like GPT-4 get transparency requirements. Low-risk chatbots basically get a free pass.
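To make the tiering concrete, here's a minimal sketch in Python of how a compliance checklist might map use cases to tiers. This is purely illustrative: the tier names, the example use cases, and the lookup function are my own simplification, not the Act's legal categories or anyone's actual tooling.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright in the EU
    HIGH_RISK = "high_risk"     # strict requirements before and after deployment
    GPAI = "general_purpose"    # transparency obligations for foundation models
    MINIMAL = "minimal"         # little to no new obligations

# Rough, illustrative mapping of use cases to tiers -- not legal advice,
# and not the Act's actual definitions.
EXAMPLE_TIERS = {
    "facial recognition database built from scraped images": RiskTier.PROHIBITED,
    "emotion recognition at work or in schools": RiskTier.PROHIBITED,
    "resume screening for hiring": RiskTier.HIGH_RISK,
    "exam scoring in education": RiskTier.HIGH_RISK,
    "general-purpose foundation model": RiskTier.GPAI,
    "customer-support chatbot": RiskTier.MINIMAL,
}

def lookup_tier(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known use case, defaulting to MINIMAL."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case, tier in EXAMPLE_TIERS.items():
        print(f"{case:55} -> {tier.value}")
```

The point isn't the code, it's the shape: the Act asks you to classify what you're building before it tells you what you owe.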

When Does This Actually Kick In?

Here's where it gets messy. The Act was approved by the European Parliament and the Council in 2024 and entered into force that August, but implementation is phased. The bans on prohibited practices apply from early 2025, the obligations for general-purpose models from mid-2025, and most of the remaining requirements two years after entry into force, in 2026, with longer timelines for a few specific provisions.

So right now we're in this weird limbo where the law exists but most companies aren't technically required to comply yet. Except they kind of are, because if you're building AI products for the EU market, you need to start preparing now or you'll be scrambling later.

I talked to someone at a mid-sized AI company last month, and they said their legal team is basically in "wait and see" mode. They're tracking developments but not making major changes until the requirements are crystal clear. That feels risky to me, but I get it—the specifics are still being ironed out.

What This Means for AI Companies

If you're OpenAI, Google, or Anthropic, you've got teams of lawyers and compliance people figuring this out. If you're a startup building AI tools? Good luck. Companies developing foundation models, along with anyone deploying applications considered to pose a "high risk" to fundamental rights, such as systems used in education, health care, and policing, will have to meet new EU standards.

The transparency requirements for foundation models are particularly interesting. Providers have to publish a sufficiently detailed summary of the content used to train their models. That's... a big deal. OpenAI doesn't publicly share its full training data. Neither does Google. Are they going to start? Or will they just not offer those models in the EU?

My guess is we'll see a lot of "EU-specific versions" of AI models with different training data or capabilities. Kind of like how websites have those annoying cookie banners now—same basic product, different compliance layer for EU users.
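As a toy illustration of what that compliance layer could look like (entirely hypothetical; the field names, the placeholder URL, and the region check are my invention, not anything the Act or any vendor actually prescribes), picture a provider attaching extra transparency metadata only for EU users:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDisclosure:
    """Hypothetical transparency record a provider might attach to an EU-facing model."""
    model_name: str
    training_data_summary_url: str   # link to a published summary of training content
    capability_notes: list = field(default_factory=list)

def response_metadata(model_name: str, user_region: str) -> dict:
    """Same underlying model; EU users get extra disclosure fields attached."""
    metadata = {"model": model_name}
    if user_region == "EU":
        metadata["disclosure"] = asdict(ModelDisclosure(
            model_name=model_name,
            training_data_summary_url="https://example.com/training-data-summary",  # placeholder
            capability_notes=["AI-generated content is labeled as such"],
        ))
    return metadata

if __name__ == "__main__":
    print(response_metadata("example-model", "EU"))
    print(response_metadata("example-model", "US"))
```

Whether providers bolt something like this on per region or just publish one disclosure for everyone is exactly the GDPR-style question the next section gets into.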

The Global Ripple Effect

Here's the thing about EU regulations: they tend to become global standards. Like the EU's General Data Protection Regulation (GDPR) did in 2018, the AI Act could become a de facto global standard, shaping whether AI ends up helping or harming you no matter where you live.

GDPR forced companies worldwide to change how they handle data, even for non-EU users, because it was easier to have one standard than to maintain separate systems. The AI Act might do the same thing. If you have to be transparent about training data for EU users, why not just be transparent for everyone?

Some people see this as the EU forcing its values on the world. Others see it as the EU being the only major economy willing to actually regulate tech companies. I'm somewhere in the middle—glad someone's setting guardrails, skeptical about whether these specific rules will work as intended.

What About the US?

In February 2024, House leadership in the 118th Congress announced a bipartisan Task Force on Artificial Intelligence, which released a report in December 2024 with 66 key findings and 89 recommendations. The US is taking a more sector-specific approach: no big omnibus AI law, but regulations for AI in healthcare, finance, hiring, and so on.

The American tech industry is generally relieved about this. They prefer the US's lighter-touch approach to the EU's comprehensive regulation. But it also means US companies have to navigate a patchwork of different rules instead of one clear standard.

Honestly? I think both approaches have problems. The EU's rules might be too rigid for how fast AI is evolving. The US's rules might be too fragmented to actually protect people. We'll find out which approach works better in a few years.

Why You Should Actually Care

Even if you're not in the EU, this matters. The AI Act will shape how AI companies build products, how they train models, what capabilities they offer, and how transparent they have to be. That affects everyone using AI tools, everywhere.

The push for AI regulation is a worldwide affair: laws and policies are in the works on six continents, and intergovernmental bodies such as UNESCO and regional groups like the African Union are building their own frameworks for governing AI. The EU just got there first, and other countries are watching to see what works and what doesn't.

My prediction? Five years from now, some version of risk-based AI regulation will be standard globally. The details will differ, but the basic framework—high-risk systems get scrutiny, low-risk systems don't—makes too much sense to ignore.

For now, though, we're in the messy early days where nobody's quite sure how this all shakes out. Including the people who wrote the law.