How do you know when AI is powerful enough to be dangerous? Regulators try to do the math

AI regulation embraces mathematical metrics: Governments are turning to measurements of computing power to identify potentially dangerous AI systems that require oversight.
- The U.S. government and California are using a threshold of 10^26 floating-point operations (flops) expended in training to determine which AI models need reporting or regulation.
- That is 100 septillion calculations, an amount of computing that some lawmakers and AI safety advocates believe could enable AI to create weapons of mass destruction or conduct catastrophic cyberattacks.
- California’s proposed legislation adds a second criterion: a regulated model must also cost at least $100 million to build (a back-of-the-envelope version of this arithmetic is sketched after this list).
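To make the threshold concrete, here is a minimal sketch of the arithmetic involved. It relies on the common 6 × parameters × training-tokens approximation for dense-transformer training compute, a heuristic from the scaling literature rather than anything the regulations specify; the model size, token count, and cost figure are illustrative assumptions, not details from the article.

```python
# Back-of-the-envelope check against the U.S./California compute threshold.
# ASSUMPTION: training flops ~ 6 * parameters * training tokens, a common
# heuristic for dense transformers; the regulations themselves do not
# prescribe a method for counting operations.

US_CA_THRESHOLD_FLOPS = 1e26   # 100 septillion floating-point operations
CA_COST_THRESHOLD_USD = 100e6  # California's proposed second criterion

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough total training compute for a dense transformer (6*N*D heuristic)."""
    return 6 * parameters * tokens

def is_covered_model(parameters: float, tokens: float, cost_usd: float) -> bool:
    """True only if a model trips BOTH of California's proposed criteria."""
    flops = estimated_training_flops(parameters, tokens)
    return flops >= US_CA_THRESHOLD_FLOPS and cost_usd >= CA_COST_THRESHOLD_USD

# Hypothetical model: 1 trillion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(1e12, 15e12)
print(f"Estimated training compute: {flops:.1e} flops")  # 9.0e+25
print(is_covered_model(1e12, 15e12, cost_usd=150e6))     # False: compute is below 1e26
```

Note that under California's proposal both tests must pass: the hypothetical model above clears the $100 million cost bar but falls just under the compute line, so it would not be covered.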
Global regulatory landscape: Other jurisdictions are adopting similar approaches, with varying thresholds and requirements.
- The European Union’s AI Act sets a lower threshold of 10^25 flops, which already covers some existing AI systems.
- China is also considering using computing power measurements to determine which AI systems require safeguards.
- These regulations aim to distinguish today’s high-performing generative AI systems from potentially more powerful future generations; the sketch below shows how a single compute estimate can fall on different sides of the two thresholds.
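Because the EU line sits a full order of magnitude below the U.S. one, a model can fall under European rules while staying clear of American ones. Extending the sketch above, with the same caveats and the same hypothetical numbers:

```python
# Comparing one training-compute estimate against the two published thresholds.
EU_THRESHOLD_FLOPS = 1e25     # EU AI Act
US_CA_THRESHOLD_FLOPS = 1e26  # U.S. executive order / California proposal

def jurisdictions_triggered(flops: float) -> list[str]:
    """Return which compute thresholds a given estimate crosses."""
    triggered = []
    if flops >= EU_THRESHOLD_FLOPS:
        triggered.append("EU AI Act (1e25 flops)")
    if flops >= US_CA_THRESHOLD_FLOPS:
        triggered.append("U.S./California (1e26 flops)")
    return triggered

# The hypothetical 9e25-flop model from the earlier sketch: covered by the
# EU threshold but not the U.S./California one.
print(jurisdictions_triggered(9e25))  # ['EU AI Act (1e25 flops)']
```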
Controversy and criticism: The use of flops as a regulatory metric has sparked debate within the tech industry and AI community.
Rationale behind the metric: Proponents of the flops-based approach argue that it’s currently the best available method for assessing AI capabilities and potential risks.
- Anthony Aguirre, executive director of the Future of Life Institute, argues that counting floating-point operations offers a simple way to gauge an AI model’s capability and risk.
- Regulators acknowledge that the metric is imperfect but view it as a starting point that can be adjusted as the technology evolves.
Alternative perspectives: Some experts propose different approaches to AI regulation and risk assessment.
Regulatory flexibility: Policymakers are aware of the need for adaptable regulations in the rapidly evolving field of AI.
- Both California’s proposed legislation and President Biden’s executive order treat the flops metric as temporary, allowing for future adjustments.
- The debate highlights the challenge of balancing innovation with responsible AI development and deployment.
Broader implications: The struggle to regulate AI illustrates the complexities of governing emerging technologies.
- As AI capabilities continue to advance rapidly, regulators face the challenge of developing frameworks that can effectively mitigate risks without stifling innovation.
- The ongoing debate underscores the need for collaboration between policymakers, AI researchers, and industry leaders to establish meaningful and adaptable regulatory approaches.
- While imperfect, the flops metric is a first attempt to quantify the potential dangers of increasingly powerful AI systems, and it sets the stage for more refined regulatory measures in the future.