White House AI Plan to Override State Laws
The White House AI legislative framework, released Friday, urges Congress to set national AI standards, preempting state laws while avoiding a new regulator.

What to Know
- The White House released a national AI policy framework on Friday, March 20, urging Congress to set federal AI standards while leaning on existing agencies — no new regulator created
- The proposal would preempt state-level AI laws the Trump administration calls a burdensome 'patchwork,' though states like California have already passed their own measures
- Critics including Public Citizen called the plan a giveaway to Big Tech, while the Center for Democracy and Technology flagged internal contradictions in the framework
- The plan also expands deepfake protections, addresses children's AI safety, and ties AI policy to data center permitting and grid infrastructure goals
The White House AI legislative framework dropped Friday, and the message to Congress is blunt: set one national standard or keep watching states fill the vacuum themselves. The Trump administration released a sweeping policy document outlining recommendations that would establish federal AI rules across child safety, intellectual property, and free speech — while explicitly calling on Congress to preempt the state-level AI laws that have been piling up since Washington dragged its feet.
What Does the White House AI Legislative Framework Actually Propose?
The framework is less a law and more a wish list handed to Congress. It calls for national AI standards that would supersede the growing body of state AI laws the administration has labeled a 'patchwork' of contradictory requirements — a phrase they've now used so many times it's practically official policy branding. The proposal would preserve states' authority to enforce existing laws on fraud, consumer protection, and child exploitation, but anything that looks like standalone AI regulation? Federal rules would take the wheel.
The administration framed the entire thing around competition with China. Winning the 'AI race,' economic competitiveness, national security — these are the throughlines. If you're reading between the lines, the argument is essentially: we can't let 50 state legislatures slow down American AI companies while Beijing runs laps around us.
The Trump Administration is committed to winning the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people.
The Critics Didn't Wait Long
The White House AI legislative framework was barely public before the pushback started. The Center for Democracy and Technology acknowledged 'some sound statements of principles' but said the document fails to resolve the tensions it creates. CDT Vice President of Policy Samir Jain zeroed in on the kids' safety angle — specifically, how the framework claims to prioritize children while staying vague on how competing approaches actually get reconciled.
The sharpest critique came on the free speech contradiction. Jain pointed out that the framework explicitly says government should not pressure AI companies to change content based on 'partisan or ideological agendas' — and then noted, correctly, that the administration's own 'woke AI' executive order does exactly that. That's not a minor inconsistency. That's the White House telling Congress to constrain itself with one hand while using the other to pull the same lever it just said nobody should touch.
It rightly says that the government should not coerce AI companies to ban or alter content based on 'partisan or ideological agendas,' yet the administration's 'woke AI' executive order does exactly that.
States Aren't Waiting — And the Friction Is Real
Here's the uncomfortable truth the administration's framework glosses over: states moved because Congress didn't. California's SB 243, enacted in October, requires AI companion chatbots to identify themselves as non-human and restricts interactions with minors — the kind of targeted, consumer-facing rule that most people would consider reasonable. The White House framework actually agrees with the goal, calling for parental controls and protections against child exploitation by AI platforms. But instead of building on what states have done, it wants to replace them.
This isn't just a legal question. Companies operating across state lines genuinely deal with conflicting compliance requirements — that part of the 'patchwork' complaint is real. A startup that has to meet California, Texas, Colorado, and New York AI rules simultaneously faces a compliance burden that can crush smaller players while barely denting the budgets of the Big Tech firms that bankrolled those state lobbying campaigns in the first place. Federal preemption can cut both ways.
A draft executive order from November had already outlined steps to challenge state AI laws and restrict federal funding to states that enacted contradictory measures — so Friday's framework is less a pivot than a formalization of a position the administration has held for months.
Deepfakes, Data Centers, and the Big Tech Question
The framework extends deepfake protections beyond the Take It Down Act — the bipartisan law Trump signed that criminalized non-consensual intimate images and deepfake porn at the federal level. The new proposal calls for broader protections against unauthorized AI-generated deepfakes, though it carves out exceptions for parody, satire, news reporting, and other First Amendment-protected expression. Whether that carveout becomes a loophole depends entirely on how courts interpret it.
On copyright, the administration took a notably hands-off position: AI training on copyrighted material should be left to the courts, and Congress 'should not take any actions' that would interfere with judicial resolution of the fair use question. That's a big win for AI developers who've been fighting those lawsuits — and a loss for the artists, journalists, and publishers who've been waiting for legislative clarity.
The infrastructure piece deserves more attention than it's getting. The framework calls for faster permitting on data centers, a 'Ratepayer Protection Pledge' to keep residential electricity costs stable as AI buildout accelerates, and expanded use of on-site power generation. That's an acknowledgment that the energy demands of large language models are becoming a political problem — and that the administration sees federal AI policy as inseparable from grid and infrastructure policy.
Public Citizen's Robert Weissman was unsparing. He called the proposal 'a national framework to protect Big Tech at the expense of everyday Americans' and said it amounted to a payback to the corporations that donated to Trump's inauguration. He predicted it would be 'dead on arrival in Congress.' That last part might be wishful thinking — but the political coalition against federal preemption is real, and it includes both progressive consumer advocates and conservative states'-rights Republicans who don't love Washington telling their legislatures what they can and can't regulate.
It is an extraordinary payback to the Big Tech companies that have lined up to throw pocket change at Trump's inauguration, and for his ballroom, and for the Melania movie, and to settle bad faith lawsuits and more.
Frequently Asked Questions
What is the White House AI legislative framework?
The White House AI legislative framework is a policy document released on March 20, 2026, that recommends Congress establish national AI standards covering child safety, free speech, intellectual property, and deepfakes. It calls for preempting state-level AI laws and relies on existing federal agencies rather than creating a new AI regulator.
Would the White House proposal preempt state AI laws?
Yes. The framework explicitly urges Congress to override state AI laws the administration considers burdensome. States would retain authority to enforce existing laws on fraud, consumer protection, and child exploitation — but standalone AI regulations at the state level would be superseded by federal standards under the proposal.
What does the proposal say about AI-generated deepfakes?
The framework calls for a federal law protecting individuals from unauthorized AI-generated deepfakes, expanding on the Take It Down Act signed by Trump. The proposed law would include exceptions for parody, satire, and news reporting protected under the First Amendment.
Why are critics opposed to the White House AI framework?
Critics argue the proposal prioritizes Big Tech interests over consumer protection. The Center for Democracy and Technology flagged internal contradictions, including a conflict between the framework's anti-coercion language and the administration's 'woke AI' executive order. Public Citizen called it a payback to corporations that donated to Trump's inauguration.
