AI safety research gets $40M funding offer from Open Philanthropy

Open Philanthropy has announced a $40 million grant initiative for technical AI safety research, with potential for additional funding based on application quality.

Program scope and structure: The initiative spans 21 research areas across five main categories, focusing on critical aspects of AI safety and alignment.

  • The research areas include adversarial machine learning, model transparency, theoretical studies, and alternative approaches to mitigating AI risks
  • Applications are being accepted through April 15, 2025, beginning with a 300-word expression of interest
  • The program is structured to accommodate various funding needs, from basic research expenses to establishing new research organizations

Key research priorities: The initiative emphasizes understanding and addressing potential risks in AI systems while improving their reliability and transparency.

  • Adversarial machine learning research will focus on jailbreaks, control evaluations, and alignment stress tests
  • Model transparency investigations will explore white-box techniques, activation monitoring, and feature representations (a brief illustrative sketch of activation monitoring follows this list)
  • Studies will examine sophisticated misbehavior in Large Language Models (LLMs), including alignment faking and encoded reasoning
  • Projects exploring theoretical aspects will investigate inductive biases and approaches to aligning superintelligence
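
For readers unfamiliar with the terminology, the sketch below shows one simple form of the "activation monitoring" mentioned above: capturing a model's internal activations with a forward hook and flagging inputs whose hidden activity looks anomalous. This is a minimal, hypothetical PyTorch example on a toy network; the model, threshold, and flagging rule are illustrative assumptions, not part of Open Philanthropy's RFP.

```python
import torch
import torch.nn as nn

# Toy network standing in for an LLM layer stack (illustrative only).
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

captured = {}

def save_activation(name):
    # Forward hook that records a layer's output for later inspection.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Register a hook on the hidden layer whose activations we want to monitor.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(4, 16)   # dummy input batch
_ = model(x)

# A crude "monitor": flag inputs whose mean hidden activation is unusually high.
scores = captured["hidden_relu"].mean(dim=1)
flagged = (scores > scores.mean() + 2 * scores.std()).nonzero().flatten()
print("activation scores:", scores.tolist())
print("flagged examples:", flagged.tolist())
```

Research funded under the RFP would apply the same basic idea at far greater sophistication, probing transformer internals for features or activation patterns that correlate with unsafe behavior.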

Grant flexibility and support: Open Philanthropy has designed the program to accommodate a wide range of research entities and funding needs.

  • Grant types include support for research expenses, discrete projects lasting 6-24 months, and academic start-up packages
  • Funding is available for both existing nonprofits and the establishment of new research organizations
  • The program encourages applications even from those uncertain about their project’s exact fit, maintaining a low barrier to entry

Application process: The initiative emphasizes accessibility and transparency in its application procedures.

  • Initial submissions require only a brief 300-word expression of interest
  • Detailed information about research areas, eligibility criteria, and example projects is available in the full Request for Proposals
  • Questions can be directed to [email protected]

Future implications: This substantial funding initiative signals growing recognition of the importance of AI safety research and could reshape the landscape of technical work in the field.

  • The program’s broad scope and significant funding could accelerate progress in understanding and addressing AI risks
  • Because the initiative also serves as an experiment in gauging demand for safety funding, its results may influence future investment patterns in AI safety research
  • The diverse range of supported research areas suggests a comprehensive approach to addressing AI safety challenges
Open Philanthropy Technical AI Safety RFP - $40M Available Across 21 Research Areas
