The TRUMP AMERICA AI Act Gets AI Policy Wrong, Leaving America Last in Tech

America’s leadership in artificial intelligence has never been guaranteed. It must be earned through policy choices that foster an environment of experimentation rather than permission. That ethos runs throughout the Trump administration’s AI Action Plan and the broader vision that emphasizes innovation and American technological dominance.

Unfortunately, in December, Senator Marsha Blackburn unveiled the TRUMP AMERICA AI Act, and while she may get brownie points for naming it after the President and for its patriotic branding, the underlying policy suggestions cut directly against the goals of his administration.

In practice, the bill would impose sweeping liability, hand unelected regulators expansive discretion, and entrench a compliance-first culture driven by fear that chills innovation. The likely outcome is not safer AI, but slower progress, fewer choices for consumers, and a weaker US position in the global AI race.

A Regulatory Regime Built on Fear, Not Innovation

Within the proposal’s section-by-section summary is a presumption that AI systems are inherently dangerous and must be preemptively controlled by the federal government. Section 3 places a broad “duty of care” on AI developers to prevent and mitigate “foreseeable harm,” requires ongoing risk assessments, and grants the Federal Trade Commission authority to establish “minimum reasonable safeguards.”

That language sounds measured. In reality, it is dangerously vague.

“Foreseeable harm” is not a technical standard. It will be interpreted by regulators, trial lawyers, and political activists to mean whatever aligns with their priorities in a given moment. Once codified, this creates constant legal uncertainty, especially for startups, open-source developers, and small firms that lack teams of lawyers and compliance officers.

This approach is how the United States begins to resemble Europe: regulate first, innovate later, and watch its leadership slip away into irrelevance.

Further Empowering the FTC Means Empowering De Facto Censorship

Even more troubling is how these new powers could be used to shape not just safety practices, but speech itself.

The proposal would grant the FTC expanded authority to define and enforce AI “safeguards” and to police so-called unfair or harmful practices in the development and deployment of models. In theory, this is about consumer protection. In practice, it creates a powerful mechanism for regulatory content governance.

This concern is not speculative. In 2023, FTC leadership, under Chair Lina Khan, openly signaled that it believed it could use its existing Section 5 authority to scrutinize and potentially penalize AI developers for the types of data used to train their models and the types of outputs their systems produced. The implication was clear: if regulators disapprove of certain inputs or outputs, enforcement action could follow.

That is both an extraordinary and dangerous proposition.

Once regulators begin to define what constitutes “acceptable” training data or “responsible” model behavior, they inevitably begin to shape the boundaries of permissible speech. AI developers, facing vague standards and enormous legal risk, will respond rationally: they will over-censor. They will filter aggressively. They will avoid controversial topics. They will strip models of nuance. All out of fear that a regulator might bring an enforcement action.

This is how censorship emerges in modern regulatory systems: through incentives that punish deviation from bureaucratic norms. When the FTC becomes the effective arbiter of what AI systems should be allowed to produce, we are no longer talking about safety. We are talking about centralized control over digital speech infrastructure.

That should alarm anyone who cares about open markets, free expression, or democratic accountability.

The Political Temptation to Rewrite Section 230

Section 6 of the bill moves beyond AI regulation into one of Washington’s most politically charged battles: reforming Section 230. This is an unsurprising inclusion. Section 230 has been a bipartisan punching bag for years, criticized by conservatives who believe platforms suppress certain viewpoints and by progressives who believe platforms allow too much harmful content to remain online.

The TRUMP AMERICA AI Act attempts to thread this needle by carving new exceptions out of Section 230 immunity: a so-called “Bad Samaritan” exception for platforms that allegedly facilitate unlawful content, and new requirements that platforms notify users about filtering and parental control tools.

The political motivation is understandable. However, the policy consequences are still deeply problematic.

Section 230 has been one of the most important legal foundations for the modern internet. It allows platforms, including emerging AI-powered services, to host user-generated content, provide tools, and offer open systems without being treated as the publisher of every word that flows through their services.

Weakening that protection, especially through vague standards, will inevitably lead to less speech online, an outcome the Trump administration has forcefully rejected on the international stage when other countries pursue policies that produce it.

When platforms face heightened liability risk and the dark specter of lawsuits, they become more cautious. They remove more content. They restrict more accounts. They limit more features. And once again, the companies best positioned to survive this environment are the largest incumbents with the most sophisticated moderation infrastructure and legal departments. Ironically, the companies best positioned to comply are the very companies politicians most often complain about.

For smaller competitors and new AI-native platforms, a world without the certainty of Section 230 becomes yet another barrier to entry, a moat built around incumbents. Combined with the proposal’s broader liability and compliance framework, Section 6 further entrenches the dominance of existing players while narrowing the space for innovation.

A Litigation Bonanza That Will Freeze Deployment

Section 10 of the proposal would expose the industry to a tidal wave of litigation. The Act authorizes lawsuits against AI developers for defective design, failure to warn, and “unreasonably dangerous” systems, while extending liability to deployers as well.

This is a plaintiff’s lawyer’s dream and a consumer’s nightmare.

Open-ended liability regimes incentivize companies to avoid risk at all costs. Startups will hesitate to enter the market at all. Open-source communities will retreat, stunting the development of cutting-edge products and services.

Innovation does not thrive when it is under permanent and significant legal threat. 

Criminalizing Data Use and Undermining the Foundations of AI

Perhaps the most economically destructive provision is Section 18, which creates a new federal right to sue companies for using covered data — including publicly available and copyrighted material — for AI training without explicit consent, with statutory damages and punitive damages available.

This would fundamentally destabilize how modern AI systems are built.

Machine learning systems learn from data in much the same way humans do: by reading, observing, synthesizing, and generalizing. This section flips that learning process on its head, punishing the search for knowledge and advantaging only the largest incumbents with vast compliance departments and licensing budgets while shutting everyone else out. Consumers lose, too: a frontier technology poised to become a primary nexus for gathering and synthesizing knowledge will be stunted by fear of legal liability.

A Direct Contradiction of the President’s AI Vision

President Trump’s stated posture on AI has emphasized three core principles:

  • America must lead the world in AI.
  • Innovation should not be strangled by bureaucracy.
  • Federal policy should empower builders.

The TRUMP AMERICA AI Act moves the needle in the opposite direction on all three fronts. It prioritizes compliance, punishing creativity and boldness. It empowers regulators to view themselves as a hammer and treat entrepreneurs like nails. It fails to appreciate the opportunity that our country faces, favoring a model that presumes risk. It inexplicably seeks to control the process of innovation, which has historically occurred through an organic emergent order.

Even the limited growth-focused sections, such as the data center siting reforms, are embedded within a framework that ultimately makes deploying AI systems and leveraging their vast benefits needlessly difficult. The proposal will lead to stagnation, not leadership, in AI.

Consumers Will Pay the Price

When innovation slows, consumers suffer. It’s basic economics.

Fewer competitors mean fewer choices. Increased compliance costs get passed on to consumers, which means higher prices. Litigation risk means more conservative products. The AI tools that dare to dream, the ones that could have empowered teachers, helped small businesses, improved medical diagnostics, and expanded creative opportunities for artists, will either never reach the market or see their transformative capabilities significantly curtailed.

Making matters worse, the proposal would incentivize America’s most talented builders to begin looking abroad to other jurisdictions that may be more receptive to what they are trying to accomplish. Remember, the United States is in a global race for leadership in AI. Our leadership position is not guaranteed, and our values are exported with our tech.

The world will look vastly different with a tech stack run under the values of the United Kingdom, the UAE, or China. I don’t believe those are outcomes the President, members of Congress, or consumers would like to see.

A Better Path Forward

If the goal is to ensure American AI dominance, promote consumer welfare, and produce long-term prosperity, the path forward is not sweeping liability expansion and regulatory micromanagement. It is:

  • A clear preemptive federal standard on the development and deployment of models.
  • Clear, narrow rules targeting actual harms, not speculative fears.
  • Regulatory sandboxes and safe harbors that balance consumer protection and innovation.
  • A bias toward permissionless innovation, not bureaucratic approval.

America became a technological superpower by trusting its innovators. The TRUMP AMERICA AI Act forgets that legacy. If the legislation resembles anything like what its section-by-section summary outlines, it would not make America great in AI; it would make America slower, more cautious, more bureaucratic, and less competitive in the very race we cannot afford to lose.
