In January 2025, President Trump signed an executive order that basically torched Biden's AI safety framework and replaced it with something that can be summarized as "innovate fast, regulate never." And depending on who you ask, this is either the best thing that could happen to American AI or the beginning of a regulatory disaster.
Welcome to 2025, where AI policy is being rewritten in real-time and states are scrambling to fill the vacuum left by federal inaction.
What Actually Changed at the Federal Level
Let me give you the quick version: Biden's Executive Order 14110 from October 2023 focused heavily on AI safety, ethics, and oversight. It mandated risk assessments, required developers of the most powerful models to share safety test results with the government, emphasized bias prevention, and generally tried to put guardrails around AI development.
Trump's new executive order, "Removing Barriers to American Leadership in Artificial Intelligence," killed all of that and replaced it with a pro-innovation, pro-competitiveness agenda. The explicit goal is to eliminate federal policies that act as barriers to America's "global AI dominance." Translation: safety and ethics are taking a backseat to winning the global AI race.
The order tasks the President's science, AI, and national security advisors with developing a new AI action plan within 180 days. Until then, we're in a policy vacuum where the previous rules are gone and the new ones don't exist yet. Fun times.
Why This Matters More Than You Think
On one level, I get the logic. China is pouring resources into AI. Europe is regulating heavily with the EU AI Act. If the U.S. over-regulates, we could lose our current leadership position in AI development. That's a real concern, and it's one Silicon Valley has been shouting about for years.
But here's the problem: "don't hinder innovation" is not the same thing as "have no rules at all." And right now, it's unclear what, if any, baseline safety requirements will exist for AI systems. We're essentially deregulating a technology that's advancing faster than we can fully understand its implications.
The Biden framework wasn't perfect—it was heavy, prescriptive, and probably would have slowed down certain types of AI development. But it at least tried to address real concerns around bias, transparency, and safety. The new approach seems to be "let's move fast and figure out the consequences later," which is a risky bet when you're dealing with technology this powerful.
The State-Level Chaos That's Filling the Void
With federal regulation essentially on pause, states are stepping in, and the result is a complete patchwork of rules that vary wildly depending on where you are. Colorado became the first state to pass comprehensive AI regulation with the Colorado AI Act, signed in 2024 and slated to take effect in 2026. It focuses on high-risk AI systems in employment and consumer contexts, requiring impact assessments and transparency.
California has enacted multiple AI laws covering everything from personal data protection to healthcare AI. Utah requires government entities to develop AI policies. Texas has its own governance act in the works. New York City has bias audit requirements for hiring tools. It's a mess, and if you're a company trying to deploy AI nationally, you have to navigate dozens of different regulatory frameworks.
This is exactly what happened with privacy law when GDPR passed in Europe and California followed with CCPA. Now there's a growing patchwork of state privacy laws, and compliance is a nightmare. We're repeating the same mistake with AI.
What the EU Is Doing (And Why It Matters)
While the U.S. is deregulating, Europe is going hard in the opposite direction. The EU AI Act is the world's first comprehensive AI law, and key provisions are taking effect in 2025. In February 2025, the bans on prohibited AI practices and the AI literacy requirements take effect. In August 2025, rules for general-purpose AI models kick in.
The EU approach is risk-based: minimal-risk AI (like spam filters) is mostly unregulated, while high-risk AI (healthcare, employment, law enforcement) faces strict requirements around transparency, human oversight, and accountability. It's comprehensive, it's strict, and it's going to shape how AI companies operate globally.
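To make that tiering concrete, here's a deliberately simplified Python sketch of how a compliance team might triage use cases against the Act's categories. The tier names follow the Act's structure, but the example use cases and the mapping itself are my own illustrative assumptions, not legal guidance.

```python
# Illustrative only: a toy triage of AI use cases against the EU AI Act's
# risk tiers. Tier names follow the Act's structure; the example use cases
# and this mapping are simplifying assumptions, not legal advice.

PROHIBITED = {"social scoring by public authorities", "manipulative subliminal techniques"}
HIGH_RISK = {"cv screening for hiring", "credit scoring", "medical diagnosis support"}
LIMITED_RISK = {"customer service chatbot"}  # mainly transparency duties (disclose it's AI)

def triage(use_case: str) -> str:
    """Return a rough EU AI Act risk tier for a described use case."""
    if use_case in PROHIBITED:
        return "prohibited: cannot be deployed in the EU"
    if use_case in HIGH_RISK:
        return "high-risk: conformity assessment, human oversight, logging, transparency"
    if use_case in LIMITED_RISK:
        return "limited-risk: transparency obligations"
    return "minimal-risk: largely unregulated (e.g., spam filters)"

if __name__ == "__main__":
    for case in ["cv screening for hiring", "customer service chatbot", "spam filtering"]:
        print(f"{case} -> {triage(case)}")
```

The real analysis is obviously messier (Annex III definitions, exemptions, the general-purpose model rules), but the shape of the law is exactly this: a short list of outright bans, a heavily regulated high-risk bucket, transparency duties in the middle, and a long tail that's mostly left alone.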
Here's why that matters for American companies: if you want to operate in Europe, you have to comply with the EU AI Act regardless of what U.S. policy says. So in practice, many U.S. companies will end up following EU rules anyway because they can't afford to be shut out of the European market. We're essentially letting Europe set the global AI standards by default.
The China Factor Everyone's Obsessed With
A huge driver behind the U.S. deregulation push is China. Beijing has been developing its own AI governance framework, and while it includes safety measures, it's also clearly designed to maintain government control and promote Chinese AI leadership. The fear in Washington is that if we regulate too heavily, we'll cede AI dominance to China.
But here's what bothers me about that argument: it assumes regulation and innovation are opposites. They're not. Smart regulation can actually foster innovation by creating clear rules, reducing uncertainty, and building public trust. Bad regulation stifles innovation. The goal should be smart regulation, not no regulation.
The competition with China is real, but the answer isn't to have a race to the bottom on safety standards. It's to build better AI faster while maintaining baseline protections that prevent obvious harms. That's harder than just removing all rules, but it's the right approach.
What This Means for Regular People
If you're not building AI systems, why should you care about any of this? Because AI is already affecting your life in ways you might not realize. Algorithms decide whether you get a job interview, what interest rate you pay on a loan, which social media content you see, and increasingly, aspects of your healthcare.
Under the Biden framework, there were at least some requirements around transparency and bias prevention for these systems. Under the new approach, it's less clear what protections exist. States are trying to fill the gap, but the patchwork approach means your protections depend on where you live.
If you live in Colorado, AI systems making high-risk decisions about you will soon have to meet certain standards. If you live in a state without AI regulation, you're basically hoping companies self-regulate, which is... optimistic.
My Conflicted Take on Where This Is Headed
I'm genuinely torn on this. The over-regulation crowd has valid points: AI is moving incredibly fast, innovation matters, global competition is real, and heavy-handed rules could push development overseas. I don't want the U.S. to lose its AI leadership, and I do think some of Biden's framework was too prescriptive.
But the under-regulation crowd is also ignoring real risks. AI bias is a documented problem. Algorithmic discrimination happens. These systems can and do cause harm, especially to marginalized communities. The "innovate first, regulate later" approach might work for consumer apps, but it's dangerous for AI systems making high-stakes decisions about people's lives.
What I want—and what I think most people want—is a middle ground: baseline safety standards that prevent obvious harms without strangling innovation. Requirements for transparency and accountability without micromanaging every technical decision. Clear rules that apply nationally, not 50 different state approaches.
But that requires political will and technical competence, and I'm not sure we have enough of either right now.
The Uncomfortable Reality
Here's what I think will happen: the federal government will continue to avoid comprehensive AI regulation, either because of political gridlock or deliberate policy choice. States will keep passing their own laws, creating a compliance nightmare. Companies will default to EU standards for global products because that's easier than maintaining separate versions.
And five years from now, we'll look back at 2025 as the moment when the U.S. had a chance to shape AI governance on its own terms and chose not to. Instead, we'll be living with a patchwork of state laws and de facto EU standards, which is arguably the worst of both worlds.
Unless something changes dramatically—a major AI-related disaster, political consensus that seems unlikely, or unexpected leadership from somewhere—we're stuck in this weird middle space where nobody's quite sure what the rules are or where they're going.
For companies building AI, that uncertainty is its own form of regulation. For users affected by AI systems, it's a gamble on whether protections exist. And for the U.S. as a whole, it's a bet that we can maintain AI leadership without any coherent policy framework.
I hope that bet pays off. But I'm not confident it will.