The Trump administration spent the past week trying to kill state-level AI regulation. First they pushed for a moratorium in the defense bill. Then they floated an executive order to sue states and withhold federal funding. Now? The executive order is reportedly on hold, and even Republicans are pushing back. This is messy, important, and reveals a lot about who actually controls AI policy.
What Trump Wanted
The draft executive order—titled "Eliminating State Law Obstruction of National AI Policy"—would have established an AI Litigation Task Force to challenge state AI laws deemed unconstitutional or in conflict with federal policy. States with "onerous" AI regulations would face loss of federal broadband funding and other grants.
The order would have directed the Attorney General to sue states whose AI laws "unconstitutionally regulate interstate commerce" or "compel AI developers to disclose information in a manner that would violate the First Amendment." It's aggressive federal preemption dressed up as protecting innovation.
Trump publicly called for "one Federal Standard instead of a patchwork of 50 State Regulatory Regimes" on Truth Social. AI and Crypto Czar David Sacks was heavily involved in drafting the order. The Silicon Valley CEOs dining at the White House—including Elon Musk, Jensen Huang, and Greg Brockman—all vocally oppose state-by-state AI regulation.
Why It Fell Apart
The proposal faced immediate backlash from an unusual coalition: progressive Democrats who want stronger AI safety rules, conservative Republicans who oppose federal overreach into state authority, and civil society groups concerned about corporate capture of policy.
Florida Governor Ron DeSantis, Arkansas Governor Sarah Huckabee Sanders, and Mike Davis (a Bannon-aligned conservative) all called it a Big Tech bailout. That's not the typical opposition Trump expects from his right flank.
Democratic Senator Ed Markey accused Republicans of "trying to sneak their AI regulation moratorium into the defense bill" and said Trump was siding "with his billionaire Big Tech buddies." New York State Assemblymember Alex Bores called the draft EO "a blank check to Donald Trump's tech billionaire backers."
Most importantly: the White House can't actually preempt state regulation through executive order. That's Congress's job. The order would have faced immediate legal challenges, and many constitutional scholars said it would likely fail.
What States Are Actually Doing
California, Colorado, and several other states have passed or proposed AI safety bills covering issues like algorithmic discrimination, deepfakes, transparency requirements, and liability for AI harms. These aren't fringe regulations—they're addressing real problems that federal law doesn't cover.
California's SB 53, for example, requires large AI developers to report safety incidents and publish their safety testing frameworks. Colorado enacted a comprehensive law targeting algorithmic discrimination in high-risk AI systems. New York is considering bills on AI in hiring and housing decisions.
The Trump administration's argument is that this creates a "confusing 50-state crazy-quilt of AI regulations" that stifles innovation. The counterargument is that states are filling a vacuum because Congress has completely failed to pass any meaningful AI legislation.
The Corporate Lobby
OpenAI, Anthropic (though they've been more nuanced), Andreessen Horowitz, and other Silicon Valley players have been aggressively lobbying against state AI regulation. They want one federal standard they can shape, not fifty different state regimes.
This makes business sense. Complying with different state rules is expensive and complex. But it also concentrates regulatory capture at the federal level, where big companies have more lobbying resources than citizen groups.
The fact that Sam Altman personally invested in Software Applications Inc. (the Sky team that OpenAI just acquired) through an external fund, and that these same CEOs are dining at the White House while these policies are being drafted, creates obvious conflicts of interest.
What Actually Happens Now
The executive order is reportedly on hold after the backlash. A similar moratorium was stripped from the budget reconciliation bill by a 99-1 Senate vote back in July. Republicans tried again this week with the defense bill, and it doesn't look like they have the votes.
That leaves the status quo: states can regulate AI however they want, and federal policy remains mostly non-existent. The Biden administration issued export controls on AI chips and some sector-specific guidance. Trump revoked Biden's AI Executive Order in January and replaced it with a pro-innovation framework that's light on actual rules.
Congress shows no signs of passing comprehensive AI legislation. The divisions are too deep, the technology moves too fast, and the lobbying is intense from all sides.
Why This Matters
State-level experimentation is how the U.S. has historically developed policy for new technologies. California emissions standards shaped national auto policy. State privacy laws like California's CCPA set the de facto baseline while Congress debated and never acted. Lab safety regulations often start local.
AI is no different. We don't know what good AI regulation looks like yet. Having states try different approaches and learn from each other's mistakes is valuable. Killing that experimentation to impose a federal standard that doesn't exist yet would be premature.
But there's also a real risk that 50 different state regimes create compliance nightmares that make it impossible for smaller AI companies to compete. Only big players like OpenAI, Google, and Microsoft can afford teams of lawyers to navigate divergent state laws.
The Cynical Read
This whole fight isn't really about innovation vs. safety. It's about who gets to write the rules. Big Tech wants to write them at the federal level where they have the most influence. State lawmakers want to write them locally where they're more responsive to constituents.
Trump's executive order would have tilted the playing field toward corporate interests by threatening states with loss of federal funding if they regulate AI too aggressively. That's not principled federalism; it's selective preemption based on which regulations benefit campaign donors.
The fact that it fell apart suggests there are still enough people—across the political spectrum—who recognize that dynamic and oppose it.
My Take
I'm conflicted on this. I do think 50 different state AI regimes could be a mess. But I'm more worried about no regulation at all, which is effectively what we'd get if states are blocked from acting and Congress continues doing nothing.
The draft executive order was bad policy with bad motivation. Using federal funding threats to coerce states into not regulating AI isn't "protecting innovation"—it's regulatory capture dressed up as federalism.
At the same time, some state AI bills are poorly written, technically confused, or would create compliance requirements that don't actually improve safety. Not all state action is good just because it's state action.
What we need is for Congress to actually do its job and pass thoughtful federal AI regulation with appropriate preemption provisions. But Congress is dysfunctional, so states are filling the void. Blocking states without providing a federal alternative isn't a solution; it's abdication.
I'm glad this executive order is on hold. But the underlying tension—between state experimentation and federal consistency, between corporate interests and public safety—isn't going away. This fight will keep happening with different players and different proposals until someone figures out a workable balance.