Trump plans bonfire of US state-level AI regulation

US President Donald Trump has instructed his administration to begin work on creating a national framework for regulating artificial intelligence (AI) across the country, in an attempt to circumvent what he called cumbersome legal frameworks at the state level.

The latest executive order (EO) to come out of the Oval Office, Ensuring National Artificial Intelligence Policy, builds on an order dated January 2025 entitled Removing barriers to American leadership in artificial intelligence, in which Trump criticized his predecessor Joe Biden for allegedly trying to paralyze the industry through regulation.

Trump said his administration has since made “tremendous gains” that have led to trillions of dollars of investment in artificial intelligence projects in the US.

In the latest EO, Trump said that to win, American artificial intelligence companies must be allowed to innovate free from “excessive” regulation, but are currently hampered by a growing body of laws at the state level. This creates a patchwork of 50 different regulatory regimes that makes compliance far more difficult, especially for startups, he said.

Trump also accused some states of passing laws that require organizations to build “ideological bias” into AI models, citing a Colorado law that prohibits algorithmic discrimination. He said such laws could cause AI models to produce false results in order to avoid “differential treatment or impact on protected groups.”

“My Administration must work with… Congress to ensure there is a minimally burdensome National Standard, not 50 conflicting State Standards,” Trump wrote.

“The resulting framework should prohibit state laws that conflict with the policies set forth in this order. This framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are protected. A carefully designed national framework can ensure that the United States wins the AI race, as we must.”

Task Force

On the basis that U.S. policy is intended to “maintain and enhance” the country's global dominance in AI through “minimally burdensome national policies,” the order directs U.S. Attorney General Pam Bondi to create an AI Litigation Task Force within the next month to challenge state AI laws that the administration deems inconsistent with the EO on various grounds, for example those that “unconstitutionally regulate interstate commerce” or those that Bondi otherwise determines to be unlawful.

The EO further directs that, within 90 days, Secretary of Commerce Howard Lutnick, in consultation with others, publish an assessment of existing state artificial intelligence laws identifying any that conflict with the broader policy, as well as any that may be referred to the Task Force.

At a minimum, this assessment is intended to identify laws that require AI models to alter truthful outputs, or that compel developers or deployers to handle information in an unconstitutional manner, particularly with respect to First Amendment free speech protections.

The EO makes various other provisions, limiting certain federal funding, particularly for broadband deployment, to states with restrictive artificial intelligence laws. It directs agencies such as the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to consider national reporting and disclosure standards that could preempt conflicting state laws in areas such as truthful output requirements, and proposes legislation to create a uniform federal AI policy that would preempt conflicting state laws, albeit with some carve-outs in areas such as child safety, AI compute and datacenter infrastructure, and government procurement and use of AI.

Kevin Kirkwood, CISO of a cybersecurity company, said that regardless of Trump's chosen implementation mechanism, the underlying idea of creating a federal framework that would preempt state laws was not necessarily without merit.

“You can't force a distributed ecosystem to conform to a single vision just because you write it into an edict, but let's not confuse tactics with principles,” he said. “The underlying message is sound: AI regulation should be national in scope, not cobbled together from state capitals that haven't even agreed on what constitutes an algorithm.

“Artificial intelligence… is infrastructure at a national, and indeed global, level. Allowing 50 states to create conflicting, piecemeal laws about how AI can be developed, deployed, or tested creates friction, uncertainty, and enormous compliance overhead. Whether it comes from Congress or executive order, a unified federal framework is necessary to ensure the U.S. remains competitive, cohesive, and able to set global norms.”

While acknowledging the argument that federal preemption undermines local control, Kirkwood said that when it comes to AI, local control will lead to fragmented standards that benefit no one “except maybe the lawyers.”

“California may want strong AI safety rules, but unless New York and Florida agree, developers will have to navigate a maze of conflicting rules,” he said. “Regulatory confusion like this doesn't protect people; it paralyzes innovation. It's not hard to imagine a future in which startups build for the least regulated state and shut everyone else out. It's a race to the bottom disguised as consumer protection.”

Missing the point?

But Ryan McCurdy, VP of marketing at database change management platform Liquibase, said the executive order missed the point, although he acknowledged that federal alignment on AI was a good idea.

“A single set of rules means nothing if it doesn’t address the core problem at the root of every AI failure: the lack of control over the data structures that power those models,” he said. “Model-level rules will not protect the public if the underlying data is inconsistent, drifting, or untracked.

“So the real question is whether a national standard will require evidence,” McCurdy said. “Evidence of how models learn, evidence of how data evolves, evidence of how organizations prevent unauthorized or risky changes. That is the difference between real oversight and press releases.

“If the US wants to lead in artificial intelligence, it needs more than just a single set of rules,” he said. “We need a standard that forces artificial intelligence systems to be explainable, controllable and accountable from the ground up.”