How Powerful Must AI Be To Be Dangerous? Regulators Did The Math To Find Out

AI regulation embraces mathematical metrics: Governments are turning to computational power measurements to identify potentially dangerous AI systems that require oversight.

  • The U.S. government and California are using a threshold of 10^26 floating-point operations (flops), measured as the total amount of computation used to train a model, to determine which AI models must be reported or regulated.
  • That equates to 100 septillion calculations, a scale of training compute that some lawmakers and AI safety advocates believe could enable AI to create weapons of mass destruction or conduct catastrophic cyberattacks (a rough sense of scale is sketched below).
  • California’s proposed legislation adds a second criterion: regulated AI models must also cost at least $100 million to build.
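For a sense of scale, here is a minimal back-of-envelope sketch in Python, assuming the widely cited heuristic that training compute is roughly 6 × parameters × training tokens. The thresholds match the figures discussed in this article, but the example model sizes, token counts, and function names are illustrative assumptions only, not figures from any regulation or an official compliance methodology.

```python
# Back-of-envelope comparison of estimated training compute against the
# regulatory thresholds discussed above. Assumes the common heuristic:
# total training FLOPs ~= 6 * parameters * training tokens.
# Model sizes and token counts below are hypothetical, for illustration only.

US_CA_THRESHOLD = 1e26  # U.S. executive order / California bill (total operations)
EU_THRESHOLD = 1e25     # EU AI Act threshold (total operations)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def check_thresholds(name: str, parameters: float, training_tokens: float) -> None:
    flops = estimated_training_flops(parameters, training_tokens)
    print(f"{name}: ~{flops:.2e} total FLOPs "
          f"(EU 1e25: {'over' if flops >= EU_THRESHOLD else 'under'}, "
          f"US/CA 1e26: {'over' if flops >= US_CA_THRESHOLD else 'under'})")

# Hypothetical models, purely for illustration.
check_thresholds("70B-parameter model, 15T tokens", 70e9, 15e12)    # ~6.3e24
check_thresholds("400B-parameter model, 30T tokens", 400e9, 30e12)  # ~7.2e25
check_thresholds("1T-parameter model, 100T tokens", 1e12, 100e12)   # ~6.0e26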

Global regulatory landscape: Other jurisdictions are adopting similar approaches, with varying thresholds and requirements.

  • The European Union’s AI Act sets a lower threshold of 10^25 total training flops, low enough that it already covers some AI systems in use today.
  • China is also considering using computing power measurements to determine which AI systems require safeguards.
  • These regulations aim to distinguish between current high-performing generative AI systems and potentially more powerful future generations.

Controversy and criticism: The use of flops as a regulatory metric has sparked debate within the tech industry and AI community.

Rationale behind the metric: Proponents of the flops-based approach argue that it’s currently the best available method for assessing AI capabilities and potential risks.

  • Anthony Aguirre, executive director of the Future of Life Institute, argues that counting the floating-point operations used to train a model provides a simple, if rough, proxy for its capability and risk.
  • Regulators acknowledge that the metric is imperfect but view it as a starting point that can be adjusted as the technology evolves.

Alternative perspectives: Some experts propose different approaches to AI regulation and risk assessment.

Regulatory flexibility: Policymakers are aware of the need for adaptable regulations in the rapidly evolving field of AI.

  • Both California’s proposed legislation and President Biden’s executive order treat the flops metric as temporary, allowing for future adjustments.
  • The debate highlights the challenge of balancing innovation with responsible AI development and deployment.

Broader implications: The struggle to regulate AI highlights the complexities of governing emerging technologies.

  • As AI capabilities continue to advance rapidly, regulators face the challenge of developing frameworks that can effectively mitigate risks without stifling innovation.
  • The ongoing debate underscores the need for collaboration between policymakers, AI researchers, and industry leaders to establish meaningful and adaptable regulatory approaches.
  • While the flops metric may not be perfect, it represents an initial attempt to quantify and address the potential dangers of increasingly powerful AI systems, setting the stage for more refined regulatory measures in the future.
