The AI Moratorium Hits a Roadblock
Opponents of the moratorium now include two GOP senators, 14 GOP attorneys general, and a major AI company CEO
The One Big Beautiful Bill (OBBB) Act, recently passed by the House of Representatives on a near party-line vote, includes a sweeping 10-year moratorium on state enforcement of AI regulations. While the budget package drew widespread attention for its AI provisions, the moratorium’s future remains uncertain under Senate reconciliation rules, which exclude non-budgetary measures.
Despite these procedural hurdles, the nearly unanimous Republican support for the AI moratorium initially appeared to signal a remarkable party consensus on a major issue of technology regulation—one with far-reaching implications for federal-state balance in the tech policy arena.
However, recent public statements from GOP senators and Republican attorneys general suggest that this perceived Republican unanimity may have been overstated.
GOP Senators Voice Concerns
The cracks in Republican support became evident during recent Senate hearings and public statements. At a May 21 hearing on AI impersonations, Senator Marsha Blackburn noted that her state of Tennessee “certainly needs protections” from AI impersonations, adding that “until we pass something that is federally preemptive, we can’t call for a moratorium.” Notably, Blackburn’s statement defends some state-level AI regulation on its merits, not merely on federalism grounds.
Blackburn’s remarks followed an earlier interview with Senator Josh Hawley of Missouri, who objected to the moratorium both on federalism grounds and because he believes some AI regulation is necessary: “I would think that, just as a matter of federalism, we’d want states to be able to try out different regimes that they think will work for their state. And I think in general, on AI, I do think we need some sensible oversight that will protect people’s liberties.”
Bipartisan State Opposition
Perhaps the most significant challenge to the moratorium came on May 16, when 40 state attorneys general—including 14 Republicans—sent a letter to Congressional leaders opposing the moratorium.
The attorneys general argued that such sweeping federal preemption would undermine states’ traditional authority to protect residents from emerging AI-related harms, including deepfakes, algorithmic discrimination, and data privacy violations. They warned that the moratorium would leave consumers vulnerable while Congress remains slow to enact comprehensive federal AI legislation:
Imposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections. State [attorneys general] have stepped in to protect their citizens from a myriad of privacy and social media harms after witnessing, over a period of years, the fallout caused by tech companies’ implementation of new technology coupled with a woefully inadequate federal response. In the face of Congressional inaction on the emergence of real-world harms raised by the use of AI, states are likely to be the forum for addressing such issues. This bill would directly harm consumers, deprive them of rights currently held in many states, and prevent State [attorneys general] from fulfilling their mandate to protect consumers.
The letter urged Congress to reject the proposed language and instead pursue collaborative federal-state approaches to AI governance:
To the extent Congress is truly willing and able to wrestle with the opportunities and challenges raised by the emergence of AI, we stand ready to work with you and welcome federal partnership along the lines recommended earlier. And we acknowledge the uniquely federal and critical national security issues at play and wholeheartedly agree that our nation must be the AI superpower. This moratorium is the opposite approach, however, neither respectful to states nor responsible public policy. As such, we respectfully request that Congress reject the AI moratorium language added to the budget reconciliation bill.
Industry Opposition
This isn’t an example of opposition to the moratorium from Republican officials, but I wanted to include a reference to the recent op-ed by Anthropic CEO Dario Amodei. Anthropic, as many of you know, is the company behind Claude, a large language model with reportedly 18.9 million monthly active users worldwide. In his piece, Amodei opposed the moratorium and instead called for federal regulation of the AI industry.
While acknowledging that the “motivations behind the moratorium”—namely, “prevent[ing] a burdensome patchwork of state laws [that] could compromise America’s competitive position against China”—are “understandable,” Amodei argued that “a 10-year moratorium is far too blunt an instrument”:
A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.
“Instead of a moratorium,” Amodei called for a federal “transparency standard for AI companies, so that emerging risks are made clear to the American people”:
This national standard would require frontier A.I. developers — those working on the world’s most powerful models — to adopt policies for testing and evaluating their models. Developers of powerful A.I. models would be required to publicly disclose on their company websites not only what is in those policies, but also how they plan to test for and mitigate national security and other catastrophic risks. They would also have to be upfront about the steps they took, in light of test results, to make sure their models were safe before releasing them to the public.