U.S. Sen. Marsha Blackburn (R-TN) has unveiled a sweeping legislative draft called The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI) Act.
The 291-page document seeks to overhaul artificial intelligence regulation in the United States by establishing a federal framework for AI governance while repealing key legal protections for online platforms, expanding liability for AI developers and introducing stringent content moderation requirements. At the heart of the bill is the complete repeal of Section 230 of the Communications Decency Act – the legal shield that has protected online platforms from liability for user-generated content since 1996.
Without this protection, platforms like Substack, Facebook and YouTube could face lawsuits over controversial posts, effectively forcing them into aggressive censorship to avoid legal risk. As BrightU.AI's Enoch explains, Section 230 provides legal immunity to online platforms such as social media networks, forums and websites from liability for content posted by third-party users.
This change means platforms must preemptively restrict or remove content that could be deemed "harmful," regardless of its accuracy—potentially chilling investigative journalism and dissenting viewpoints on public health, government policies and other contentious issues.
The bill introduces a federal products liability framework for AI systems, exposing developers to lawsuits over harms to which their systems allegedly contribute.
Critically, terms like "harm," "foreseeable" and "contributing factor" remain undefined, leaving enforcement to regulators and courts. This retroactive liability model incentivizes AI companies to preemptively restrict what their systems generate—limiting controversial or politically sensitive outputs.
Under the GUARD Act provisions, AI chatbot developers must implement age verification, effectively requiring digital ID checks for users. Critics warn this could lead to mass data collection and privacy erosion.
Additionally, platforms must modify algorithmic features like infinite scrolling, autoplay and personalized recommendations to prevent "compulsive usage" and psychological harm—effectively placing core engagement mechanics under federal oversight.
The bill explicitly states that AI training on copyrighted material does not qualify as fair use, opening the door for widespread litigation against AI developers like OpenAI and Meta. It also establishes liability for unauthorized AI-generated replicas of voices or likenesses, enforceable via lawsuits and fines.
The National Institute of Standards and Technology (NIST) is directed to develop content provenance and watermarking standards, creating a technical infrastructure to track digital media origins—raising concerns about surveillance disguised as authentication.
While Blackburn frames the bill as eliminating a "patchwork of state laws," it does not fully preempt state AI regulations, allowing stricter local rules in some areas. However, it centralizes enforcement under federal agencies like the Federal Trade Commission (FTC), Department of Justice and Department of Energy, consolidating power in Washington.
The bill imposes annual third-party bias audits for high-risk AI systems, requiring companies to prove their algorithms avoid "viewpoint discrimination." Additionally, AI developers must provide ethics training based on FTC-approved curricula.
A new Advanced Artificial Intelligence Evaluation Program will monitor AI risks.
Supporters argue the bill will protect children, creators and conservatives while ensuring U.S. dominance in AI. Critics warn it will stifle innovation, force platforms into self-censorship, and expand government surveillance.
The TRUMP AMERICA AI Act represents one of the most ambitious attempts to regulate AI and online speech in U.S. history. By repealing Section 230, expanding liability, and mandating content controls, it shifts enforcement from direct government censorship to corporate self-policing under legal threat.
For independent journalists, researchers and free speech advocates, the bill raises alarms about who gets to define "harm"—and whether truth itself may become too risky to publish. As the legislative process unfolds, the battle over AI governance, free expression, and federal power will only intensify.
Watch Jason Fyk and Edward Szall discussing former U.S. Rep. Louie Gohmert's (R-TX) support of a challenge to the CDA in this clip.
This video is from the High Hopes channel on Brighteon.com.