Senator Marsha Blackburn (R-Tenn.) has opened a new front in the US Congress’s AI regulation debate by releasing a discussion draft of a comprehensive federal AI bill. The proposal aims to codify the executive order President Donald Trump signed last December and would impose stringent rules on AI developers, platforms, and the creative industries. Blackburn says the bill’s mission is to “protect children, creators, conservatives and communities from harm”.
What’s in the draft?
The draft imposes a duty of care on AI developers, requiring them to prevent and mitigate foreseeable harm to users. This means AI companies could be held accountable for real-world damages caused by their platforms. The bill also takes a firm stance on copyright, declaring that “an AI model’s unauthorized reproduction, copying, or processing of copyrighted works for the purpose of training, fine-tuning, developing, or creating AI does not constitute fair use under the Copyright Act”. If this provision remains, AI companies would need to rethink how they train large models, potentially sparking a wave of new licensing agreements or lawsuits from creators.
Other key points include:
- Online platforms, including social media, must implement tools and safeguards to protect users under 17 from online harms.
- The bill safeguards individuals’ and creators’ voice and visual likenesses, aiming to prevent unauthorized digital replicas.
- Federal transparency rules would require clear labeling, authentication, and detection of AI-generated content.
- Certain companies and federal agencies must report AI-related job impacts, such as layoffs and displacement, to the US Department of Labor quarterly.
- The bill proposes ending Section 230 protections, which currently shield platforms from liability for user-generated content. This change could hold platforms directly responsible for AI-driven harms.
Why does this matter for players, creators, and platforms?
If enacted, this bill would force AI developers and platforms to significantly revamp their operations. For creators, the copyright provisions could finally give them leverage against AI companies using their work without permission. Expect more licensing deals or legal battles if this language holds.
For platforms, especially those with younger users, the under-17 protections mean stricter age verification, content filters, and potentially more friction during onboarding. Eliminating Section 230 would be a seismic shift, exposing platforms to lawsuits over AI-generated content or moderation failures, likely increasing compliance costs and slowing new AI feature rollouts.
The bill also addresses longstanding conservative concerns about political bias in AI, mandating third-party audits to prevent discrimination based on political affiliation. While critics argue these concerns are exaggerated, the language remains in the draft and could shape how platforms audit and adjust their models.
What’s next?
This is only a draft; the final law could look very different. Lawmakers will spend months negotiating, with tech and creative industry lobbyists already mobilizing. The bill may be watered down or amended with new provisions before reaching a vote.
The bottom line
- AI companies and platforms face significant new compliance challenges if the bill passes.
- Creators could gain greater control over how their work is used in AI training.
- Removing Section 230 would expose platforms to increased lawsuits and liability.
- Expect a lengthy battle in Congress before any law is enacted.