Despite widespread rhetoric about AI deregulation, the United States is actively regulating artificial intelligence—just not where most people are looking. Rather than taking a hands-off approach, Washington strategically intervenes in AI’s foundational components like semiconductors and model weights while maintaining lighter oversight of consumer-facing applications like chatbots.
What you should know: The US government’s AI regulation strategy focuses on controlling the building blocks of AI systems rather than end-user applications.
- Both the Trump and Biden administrations have heavily regulated AI chips: Biden restricted China’s access to advanced chips on national security grounds, while Trump has pursued chip deals with countries such as the UAE.
- Export controls target advanced semiconductors and even model weights—the underlying parameters that enable AI systems to function.
- This approach contrasts sharply with the EU’s AI Act, which primarily regulates visible applications in healthcare, employment, and law enforcement.
The big picture: Global AI regulation is evolving through three distinct waves, each targeting different layers of the technology stack.
- The first wave, exemplified by EU regulations, focused on preventing societal harms from AI applications through bans on high-risk uses.
- The second wave, led by rivals the US and China, takes a national security approach, controlling foundational AI components to preserve military advantage.
- An emerging third wave combines both societal and security concerns, breaking down regulatory silos for more comprehensive oversight.
Why this matters: The deregulation narrative obscures how governments actually control AI development and deployment.
- Countries are strategically targeting different components of AI’s technology stack—from hardware and data centers to software operating behind applications like ChatGPT.
- China restricts AI models to combat deepfakes and inauthentic content, while the US controls exports of advanced chips and model weights.
- This regulatory complexity is often buried in dense administrative language like “Implementation of Additional Export Controls,” making it difficult for the public to understand government intervention.
What they’re saying: Sacha Alanoca, a doctoral researcher at Stanford University, and Maroussia Lévesque, a doctoral researcher at Harvard Law School, argue that transparency about AI regulation is essential for effective global cooperation.
- “It’s hard to justify a hands-off stance on societal harms while Washington readily intervenes on chips for national security,” the authors note.
- They emphasize that “recognizing the full spectrum of regulation, from export controls to trade policy, is the first step toward effective global cooperation.”
- “Without that clarity, the conversation on global AI governance will remain hollow.”
Key details: The researchers argue that hybrid regulatory approaches combining security and societal concerns work more effectively than siloed strategies.
- US AI policy represents “light touch at the surface, iron grip at the core” rather than true deregulation.
- No global AI framework can succeed while the US, home to the world’s largest AI labs, maintains the illusion of staying out of regulation entirely.
- The shift from application-focused to infrastructure-focused regulation represents a fundamental change in how governments approach AI governance.
Source: “Don’t be fooled. The US is regulating AI – just not the way you think”