A federal regulatory moratorium on state AI rules: 50 sets of AI regulations won’t help consumers

A federal moratorium on state-level AI regulations isn't some heavy-handed overreach; it's a prudent way to stop bad local rules and unify eventual AI rules of the road.

Consumers who want the latest and greatest in technological goods and tools know that Washington, D.C. holds great sway over whether they can access them.

Legislative bills and regulatory announcements pour out of the nation's capital each and every day, creating rules that product makers and innovators must follow in order to bring goods and services to market.

Less known, however, is how much state regulatory agencies and state laws attempt to govern the same tools and products. Whether it's the State of New York or the Commonwealth of Massachusetts, politicians and regulators are just as busy as their colleagues in federal positions. Sometimes this makes sense; other times it's more of a burden than a help.

We can already see this conflict arising when it comes to figuring out the rules of the game for artificial intelligence models, apps, and systems.

In order to quell the state regulatory fever that could fragment AI's nascent regulatory landscape, federal lawmakers are seeking a "moratorium" on state-level rules for the next ten years. And it's something that's urgently needed.

Section 43201 of HR 1, otherwise known as the "Big Beautiful Bill," contains a provision that preempts state regulations on AI:

“No State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.”

As R Street Institute's Adam Thierer has elucidated in a number of articles, blog posts, and even congressional testimony, the reason for federal authorities to put the brakes on state-level rules is that AI lawmaking has truly grown out of control.

Landmark AI bills have already reached various stages of success in states across the union, ranging from more restrictive bills in California and Colorado to more amenable ones in Texas and Florida. At last look, there are over 1,000 state bills related to AI technology.

And as these rules proliferate, it's clear that any AI developer or firm will have to do a yeoman's job of compliance to serve 330 million Americans across 50 jurisdictions. And likely, the biggest states will create the baseline rules all others will have to follow.

When populous states like California adopt rules that differ from those of other states, companies that wish to offer their goods and services across the entire country will likely have to default to California's rules. That much is clear.

In policy circles, we refer to this as the “California Spillover Effect,” which I’ve explained specifically in the area of energy policy, but the same applies to tech policy as well. 

While this small moratorium is sound, a number of liberal and conservative politicians and groups have feigned outrage. They view halting state rules as giving a "blank check" to tech firms that offer AI products and services that many of us are beginning to rely on. Even Joe Rogan opined on the moratorium on his show, viewing it as a dangerous concession that will put humanity at risk once "killer AI" rears its ugly head.

Believing there should be more guardrails on AI is a reasonable position, and one many people are likely to agree on.

However, this frames AI rulemaking as a black-and-white question of whether it should exist at all. That's neither the question nor the aim. Nor is it about encroaching on states' rights, as some Republicans have argued.

Rather, it’s about reducing regulatory burdens for an industry that is so new and obviously digital. Forcing compliance on AI models and companies in 50 different jurisdictions would mean terrible products geoblocked depending on which US state we happen to live in.

If we want consumers to benefit from AI across the country, that will mean having a broad national framework, but only once we know how to build it. The technology is still so new, and how best to erect those guardrails remains uncertain.

While our aim is to have some kind of federal framework, we cannot allow consumers in California, Vermont, or Nebraska to be locked out of innovation because their state lawmakers happen to be more spooked by AI than their counterparts elsewhere.

Perhaps that’s a lesson for other tech issues as well. A state-by-state approach on regulating technology may work for some issue areas, but for others, not so much.

A federal moratorium on state laws related to AI, in the end, is a prudent way to let us capture AI's benefits before we're deprived of them.

Yaël Ossowski is deputy director of the Consumer Choice Center.
