
Trump Wants to Kill Every State AI Law. Democrats Are Fighting Back.

LDS Team
Let's Data Science
The White House released a four-page national AI framework on March 20, 2026, calling on Congress to wipe out AI regulations in all 50 states. Within hours, Democrats introduced a bill to stop it.

On a Friday morning in late March, the Trump White House handed the AI industry what lobbyists had spent two years asking for: a federal blueprint that, if enacted, would gut AI laws in California, Colorado, New York, Texas, and every other state that dared to regulate the technology first.

The document was four pages long. Its policy asks were high-level. But buried in the final section was the provision that mattered most to both supporters and opponents: a call for sweeping federal preemption of state AI laws that the administration deemed "cumbersome" and "unduly burdensome" to American innovation.

By afternoon, a group of House Democrats had introduced the GUARDRAILS Act to stop it.

The White House Wants One Rule for All 50 States

The National Policy Framework for Artificial Intelligence, released March 20, 2026, lays out seven priorities the Trump administration wants Congress to codify into federal law. The document builds on Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence," signed by President Trump on December 11, 2025, which directed federal agencies to identify and challenge state laws that obstruct national AI policy.

The seven pillars of the framework are:

  • Child protection
  • Community safeguarding
  • Intellectual property rights
  • Free speech
  • Innovation and competition
  • Workforce development
  • Federal preemption — the one that drew the most immediate fire

On preemption, the framework is explicit. States should be barred from regulating AI model development, which the administration characterizes as inherently interstate commerce. States should also be blocked from holding AI developers liable for unlawful conduct by third parties who use their systems. The administration frames this as replacing a "patchwork of conflicting state laws" with a "minimally burdensome" national standard.

White House officials stated the logic plainly: "This framework can succeed only if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation."

What the Seven Pillars Actually Say

The framework's other provisions offer a mix of genuine consumer protection and industry-friendly deferrals.

On child safety, the document calls for "commercially reasonable" age-assurance requirements, meaning platforms could satisfy the rule through parental attestation rather than independent verification. It reaffirms that existing laws like COPPA remain enforceable, one of the few unambiguous wins for child advocates.

On intellectual property, the framework takes an aggressive position: AI training on copyrighted material "does not violate copyright laws." It then retreats, saying Congress should defer to courts on fair-use disputes rather than legislate a definitive answer. For creators, the framework promises safeguards against unauthorized replication of their voice or likeness, but leaves enforcement mechanisms vague.

On free speech, the framework calls for prohibiting government coercion of AI platforms to suppress lawful political content. Critics immediately noted this provision appears aimed at preventing future administrations from pressuring companies to moderate politically sensitive AI outputs, not at protecting individual users from the companies themselves.

On innovation, the framework proposes regulatory sandboxes and expanded access to federal datasets. It explicitly rejects creating any new federal AI regulator, favoring existing sector-specific agencies instead.

The administration also addresses energy costs, an increasingly live issue as data centers consume more electricity. The framework asks Congress to require AI companies to cover increased power costs they impose on local ratepayers, and to streamline federal permitting for on-site data center power generation.

The Industry Cheers, and Competition with China Frames Everything

The AI industry praised the framework almost immediately. Patrick Hedger, director of policy at NetChoice, said "the Trump White House understands that it was a light-touch regulatory environment, not 50 different confusing and conflicting regulatory regimes, that enabled the internet revolution and that innovation and investment in winning the AI future for America will require a similar approach." Daniel Castro, director of the Center for Data Innovation, said the framework "avoids the worst instincts in today's AI debate" including "alarmism" about unemployment and treating AI training as a copyright violation.

Senate Commerce Committee Chair Ted Cruz offered the clearest political signal from Republican leadership. "I look forward to working with the White House and members of the Commerce Committee to advance meaningful AI legislation that safeguards free speech, establishes regulatory sandboxes, protects children and provides a national standard for AI in the United States," Cruz said.

Speaker Mike Johnson and House Majority Leader Steve Scalise jointly called on Congress to "take action" to "ensure we continue to harness [AI's] potential and beat China in the global AI race." Energy and Commerce Chairman Brett Guthrie, Judiciary Chair Jim Jordan, and Science Committee Chair Brian Babin all signaled support.

The administration's framing throughout is explicitly competitive with China and dismissive of the European approach. Michael Kratsios, director of the White House Office of Science and Technology Policy, had already traveled to Davos to contrast Trump's approach with the EU AI Act, calling European regulations "an absolute disaster." The framework positions the United States in a deliberate break from the EU's rights-based model, betting that speed and minimal friction will produce global AI dominance.

The Counterarguments Are Loud and Specific

The opposition mobilized within hours of the framework's release.

House Democrats introduced the GUARDRAILS Act, the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act, on the same day. Sponsors included Reps. Don Beyer of Virginia, Doris Matsui of California, Ted Lieu of California, Sara Jacobs of California, and April McClain Delaney of Maryland. Sen. Brian Schatz of Hawaii filed companion legislation in the Senate.

Rep. Beyer was direct about what he saw in the framework. "The Trump White House aims to kill state AI laws without setting even minimally acceptable federal guardrails, exposing the American public to the growing risks accompanying completely unchecked artificial intelligence," he said. "Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public."

Rep. Sara Jacobs called the executive order underlying the framework "an unconstitutional attempt to do an end run" around Congress and "a clear overreach of executive authority." Rep. Ted Lieu argued that "only Congress can establish a national framework that preempts state laws" and called on Congress to serve as an independent check on the executive branch.

Rep. Yvette Clarke of New York did not mince words. She described the framework as "written by Big Tech, for Big Tech."

The critique from consumer advocates was equally sharp. Robert Weissman, co-president of Public Citizen, called the framework "a hollow document with only one tough and meaningfully binding provision, delivering Big Tech's top policy priority: It aims to preempt all state laws and rules dealing with AI." Weissman acknowledged that state legislatures are not keeping up with the risks AI companies are imposing but argued they are still "trying to meet the novel and enormous challenges of the moment" — which he said is precisely why industry wants to shut them down.

Brad Carson, president of Americans for Responsible Innovation, was direct: "If you think the current state of play in social media guardrails are A-OK, then you'll be fine with the framework. If — like most — you believe we made catastrophic mistakes re social media, then you should fervently oppose this vacuous 'framework.'"

At the state level, California and New York have active AI statutes already in force. Colorado's landmark AI Act was delayed to June 30, 2026, but remains on track to take effect. A coalition of state attorneys general, child safety groups, and AI safety researchers had already mounted significant resistance to earlier preemption efforts in 2025, and those laws remain operative until Congress acts.

Law firms studying the framework were quick to identify where the fights will happen.

Attorneys at Freshfields wrote that "the scope of preemption will be heavily contested and is likely to generate significant litigation." They noted that state laws remain operative until Congress actually passes preempting legislation, and that regulatory tension between federal agencies and state requirements will create compliance complexity in the interim.

The preemption question is especially difficult because the framework tries to preserve some state authority while eliminating other parts. States would keep enforcement power over child protection, consumer fraud, and zoning for data centers. But they would lose the ability to regulate how AI models are built, trained, or deployed.

Public Knowledge, a digital rights organization, warned that preemption proposals of this type have been drafted so broadly as to "encompass many consumer protection, civil rights, privacy, and health and safety laws." The group noted that the administration offered "no affirmative federal protections for consumers, workers, or civil rights" as a replacement.

The legislative path is narrow. Republicans hold a razor-thin House majority. Both Democrats and some Republicans have previously resisted preemption efforts. Rep. Valerie Foushee of North Carolina, co-chair of the House Democratic Commission on AI and the Innovation Economy, said the framework "lacks meaningful guardrails" and overlooks real-world impacts on jobs and communities. Even Sen. Marsha Blackburn, a Tennessee Republican who worked with the White House on AI legislation, had previously proposed a "duty of care" standard for AI developers that the framework explicitly rejected.

Freshfields attorneys advised companies to monitor legislative developments and engage through industry associations, noting that provisions with bipartisan support, such as child safety and ratepayer protection, are most likely to advance first.

The Bottom Line

The White House AI framework is less a law than a political declaration. It tells Congress, states, industry, and trading partners what the Trump administration believes AI governance should look like: light federal rules, no new regulators, deference to courts on the hard questions, and a national standard that sidelines every state legislature that moved faster than Washington.

Whether it becomes law depends on a House majority with no margin for error, a Senate where Democrats have already introduced countervailing legislation, and a series of constitutional questions that courts will spend years resolving.

The states are not waiting to find out. Neither are the lawyers.

