New scorecard from Future of Life Institute assesses companies’ AI safety readiness

Artificial intelligence safety experts have conducted the first comprehensive safety evaluation of leading AI companies, revealing significant gaps in risk management and safety measures across the industry.

Key findings and scope: The Future of Life Institute's 2024 AI Safety Index evaluated six major AI companies – Anthropic, Google DeepMind, Meta, OpenAI, x.AI, and Zhipu AI – across six dimensions of safety practice.

  • The assessment covered six key categories: Risk Assessment, Current Harms, Safety Frameworks, Existential Safety Strategy, Governance & Accountability, and Transparency & Communication
  • The evaluation used a standard US GPA grading system, ranging from A+ to F; a sketch of how such letter grades could roll up into an overall score appears after this list
  • Companies were assessed based on publicly available information and their responses to a survey conducted by FLI
  • The full AI Safety Index 2024 report is available on the Future of Life Institute's website
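
To make the grading scheme concrete, here is a minimal Python sketch of how letter grades across the six categories could be averaged into an overall GPA-style score. The category names come from the index itself; the example grades, the 4.3-for-A+ point mapping, and the simple unweighted average are illustrative assumptions, not FLI's published methodology.

```python
# Illustrative sketch only: FLI's actual aggregation method may differ.

# Standard US GPA point values for each letter grade (A+ = 4.3 is one
# common convention; some scales cap at 4.0).
GPA_POINTS = {
    "A+": 4.3, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

# The six categories named in the FLI AI Safety Index 2024.
CATEGORIES = [
    "Risk Assessment", "Current Harms", "Safety Frameworks",
    "Existential Safety Strategy", "Governance & Accountability",
    "Transparency & Communication",
]

def overall_gpa(grades: dict[str, str]) -> float:
    """Average a company's six category letter grades into one score."""
    points = [GPA_POINTS[grades[category]] for category in CATEGORIES]
    return sum(points) / len(points)

# Hypothetical grades for a fictional company (not FLI's actual results).
example = {
    "Risk Assessment": "C+", "Current Harms": "B-",
    "Safety Frameworks": "C", "Existential Safety Strategy": "D",
    "Governance & Accountability": "C", "Transparency & Communication": "B",
}
print(f"Overall: {overall_gpa(example):.2f}")  # -> Overall: 2.17
```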

Critical concerns identified: The expert panel discovered widespread vulnerabilities and inadequate safety protocols across all evaluated companies.

  • All flagship AI models were found to be susceptible to adversarial attacks
  • None of the companies demonstrated adequate strategies for maintaining human control over advanced AI systems
  • Competitive pressures were identified as a key factor driving companies to bypass crucial safety considerations

Expert perspectives: Leading AI researchers emphasized the gravity of the findings and the importance of safety accountability.

  • Stuart Russell, UC Berkeley Professor of Computer Science, highlighted that current safety measures lack quantitative guarantees and may represent a technological dead end
  • Yoshua Bengio, Turing Award winner and Mila founder, stressed the importance of such evaluations for promoting accountability and responsible development
  • MIT Professor Max Tegmark emphasized the significance of the expert panel’s decades of combined experience in AI risk assessment

Review panel composition: The evaluation was conducted by a diverse group of respected AI experts and thought leaders.

  • The panel included prominent figures such as Turing Award winner Yoshua Bengio and Stuart Russell
  • Reviewers represented various institutions including UC Berkeley, Université de Montréal, Carnegie Mellon University, and youth AI advocacy organizations
  • The panel combined academic expertise with practical experience in AI development and safety

Looking ahead: The findings underscore the urgent need for stronger safety measures and accountability in AI governance, and raise questions about whether current technical approaches to ensuring AI safety are viable.

Source: AI Safety Index Released – Future of Life Institute
