August 2nd, 2025. That's the date the European Union's AI Act moved from "future regulation we should probably think about" to "actual law that can fine you up to €15 million or 3% of global revenue."
And based on conversations I've had with people in tech, a shocking number of companies are not ready. Like, not even close.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It's been in the works for years. The final text was published over a year ago. And yet here we are, implementation day, and half the companies affected are still trying to figure out what "general-purpose AI" even means from a compliance perspective.
This is going to be messy.
What Actually Went Into Effect
Not everything in the AI Act is enforceable yet—it's being phased in over several years. But as of August 2nd, some pretty significant things are now active:
General-Purpose AI (GPAI) rules: If you provide AI models that can perform a wide range of tasks (think GPT-4, Claude, Gemini), you now have specific obligations. You must:
- Keep detailed documentation on how your model was built and tested
- Publish a summary of copyrighted material used for training
- Provide "model cards" explaining what your model does
- Prove compliance with EU copyright laws
The scientific panel: An independent panel of experts is now operational to advise on systemic risks from powerful AI models. They can issue "qualified alerts" when they identify risks.
Penalty regime: The European Commission can now fine providers of GPAI models for non-compliance. Up to €15 million or 3% of worldwide annual turnover, whichever is higher.
For comparison, EU member states still have until August 2026 to fully implement their national enforcement regimes. So right now we're in this weird period where the rules exist and fines are possible, but local enforcement machinery isn't fully built yet.
The "We Need to See Your Homework" Problem
The documentation requirements are more invasive than most AI companies expected.
You can't just say "we built a good AI model." You need to maintain a comprehensive technical dossier showing:
- Training data sources and characteristics
- Evaluation results and testing methodologies
- Risk assessments and mitigation measures
- Energy consumption metrics (for models with systemic risk)
And here's the kicker: this isn't just for new models. Models already on the market before August 2nd get a two-year grace period, but they still need to comply eventually.
I talked to someone at a mid-sized AI startup, and they said their compliance team is basically reverse-engineering documentation for models trained months ago. Good luck reconstructing exact training data provenance when your engineers were moving fast and breaking things.
The Copyright Nightmare
The requirement to "prove compliance with EU copyright laws" sounds reasonable until you think about what that actually means.
Most large language models are trained on scraped internet data. Some of that data is copyrighted. How much? Nobody really knows. The training sets are so large that comprehensive auditing is nearly impossible.
So you're being asked to prove compliance with something that's functionally unverifiable. And if you can't prove it? You're technically not in compliance.
Some companies are responding by providing lists of "copyrighted material used for training"—but these lists are necessarily incomplete because they don't actually know everything that was in their training data.
It's like if you were required to list every ingredient in a dish after it's already been cooked and eaten. You can make your best guess, but you're definitely missing stuff.
The practical effect is that companies are scrambling to implement better data tracking for future models while crossing their fingers about past ones.
Systemic Risk Models Get Extra Scrutiny
If your model is classified as having "systemic risk"—basically, if it's extremely capable or widely deployed—you face additional requirements:
- Adversarial testing to identify vulnerabilities
- Incident reporting for serious failures
- Energy efficiency disclosure
- Regular evaluation of downstream risks
The threshold for "systemic risk" isn't precisely defined yet, which creates this gray area. Is GPT-4 systemic risk? Almost certainly. What about smaller models with millions of users? Probably? Models with high capability but limited deployment? Maybe?
Nobody wants to self-identify as systemic risk because it triggers more oversight. But failing to identify yourself as systemic risk when you actually are could be a violation. It's a fun game of regulatory chicken.
The Transparency Theater
One requirement that sounds good but might be useless: clear labeling of AI-generated content.
Starting August 2nd, all deepfakes must be "clearly and distinguishably labeled as such." Great! Except... who's enforcing this for bad actors?
If someone's creating deepfake videos to commit fraud, do we really think they're going to add a label saying "this is AI-generated"? The requirement only affects legitimate creators who were probably going to be transparent anyway.
It's like requiring bank robbers to wear signs saying "I'm robbing this bank." Lovely rule, but it only works if the people violating it voluntarily comply.
Italy Sets the Pace (And the Bar)
One fascinating development: Italy became the first EU member state to pass comprehensive national AI legislation, beating the EU's own timeline.
Italy's law (passed September 23, 2025) goes beyond the minimum EU requirements:
- Criminal penalties for certain deepfake violations
- Mandatory content traceability mechanisms
- Stricter organizational safeguards for low-risk AI in sensitive sectors
- Requirements that a human must lead AI nonprofits (which... okay, that's oddly specific but probably smart)
Other countries are now scrambling to match Italy's framework because nobody wants to be seen as having weaker AI governance.
This is creating a patchwork situation where companies operating across Europe face different rules in different countries, despite the AI Act being theoretically a unified framework.
What Companies Are Actually Doing
Based on conversations with people in various tech companies, the compliance strategies break down into a few categories:
The "Full Compliance" approach: Mostly big tech companies with resources to spare. They're hiring compliance teams, building documentation infrastructure, and genuinely trying to meet every requirement. Expensive, but they can afford it.
The "Minimum Viable Compliance" approach: Startups doing just enough to avoid immediate fines while they figure out what "just enough" actually means. Lots of good-faith effort but also lots of guessing.
The "Wait and See" approach: Companies hoping enforcement will be slow or light initially, giving them time to see how the rules are actually interpreted in practice. Risky, but not irrational given enforcement capacity is still being built out.
The "US-Only" approach: Some companies are just geofencing the EU. If the compliance burden is too high, don't serve EU customers. This mostly works for startups; harder for established players.
The Enforcement Unknown
Here's what nobody knows yet: what will enforcement actually look like in practice?
The EU has a history of aggressive tech enforcement (looking at you, GDPR fines). But they also have a history of being slow to act, giving companies years to comply with new rules.
Will they start issuing €15 million fines immediately? Will they focus on the biggest players first? Will enforcement be heavy-handed or light-touch?
Different EU member states will probably enforce differently, creating additional complexity. France might be aggressive while Ireland is permissive, for example.
This uncertainty makes it hard for companies to calibrate their response. Is it worth spending millions on compliance if enforcement might be lax? Is it worth risking fines if enforcement might be severe?
The Competitive Implications
European AI startups are at a disadvantage here. They face compliance costs their US and Chinese competitors don't.
Yes, the rules apply to anyone serving EU customers. But a US startup can choose not to serve the EU and focus on their domestic market. A French startup doesn't have that option—they're subject to these rules by default.
This could genuinely stifle European AI innovation. The compliance overhead might be manageable for big companies but crushing for startups operating on venture capital and hope.
Some European founders I've talked to are seriously considering relocating to avoid these regulations. Which is probably not what the EU intended, but it's a predictable consequence of being first-mover on strict AI governance.
My Conflicted Take
I think the EU AI Act is simultaneously:
- Necessary (we do need AI governance)
- Overreaching (some requirements seem unrealistic)
- Underpowered (enforcement mechanisms are still unclear)
- Precedent-setting (other regions will copy this)
The copyright compliance requirement is particularly frustrating because it's asking for something functionally impossible while creating massive legal liability.
The systemic risk framework makes sense in theory but is too vague in practice, leaving companies guessing about their obligations.
The transparency requirements for deepfakes are well-intentioned but only affect good actors.
And the whole thing is being rolled out before anyone—including regulators—fully understands how to implement it.
But here's the thing: doing nothing wasn't an option. AI is too consequential to leave completely unregulated. And the EU has historically been the entity willing to go first on tech regulation, for better or worse.
What Happens Next
Over the next year, we'll see:
- Companies scrambling to achieve compliance
- Regulatory guidance clarifying ambiguous requirements
- Probably some high-profile enforcement actions to set precedent
- Other countries adopting similar frameworks
- Lots of legal challenges to specific provisions
The companies that figure out compliance first will have a competitive advantage. The ones that wait too long risk becoming cautionary tales.
For the rest of us, the AI Act represents a major shift in how AI is governed globally. Even if you're not in the EU, this affects you. Because when the EU sets standards, the world tends to follow—sometimes willingly, sometimes not.
August 2nd, 2025 might not feel like a watershed moment right now. But I suspect we'll look back on this as the day AI regulation became real, for better or worse.
Time will tell if this was necessary guardrails or innovation-killing overreach. Probably some of both.
Welcome to the era of regulated AI. Hope everyone's documentation is in order.