A major shift is occurring in AI safety funding: Good Ventures, a philanthropic foundation that previously provided over half of the field's philanthropic funding, is recalibrating its giving strategy. The change has opened unexpected gaps in the funding landscape, but it also creates a rare opening for new donors to have outsized impact. With established organizations operating at reduced capacity and promising new initiatives emerging, philanthropists now have an unusual chance to shape the future of AI safety before the field attracts broader institutional attention.
Current funding landscape: Good Ventures has recently stopped or reduced funding across several key areas.
- Funding has been cut or reduced for Republican-leaning think tanks, post-alignment causes, rationality community initiatives, and high school outreach programs
- Non-US think tanks, technical safety non-profits, and political campaigns are experiencing funding gaps
- Organizations deemed below Good Ventures’ funding threshold are also affected, including most agent foundations work
Emerging opportunities: The funding shift has created a significant disparity in capital access, with many organizations now operating with limited resources.
- Affected organizations can typically access only about 25% of available philanthropic capital
- The funding bar for these organizations is now 1.5 to 3 times higher than before
- This creates opportunities for donors to fill critical gaps in underfunded areas
Notable funding targets: Several organizations have been identified as particularly worthy of consideration for additional funding.
- SecureBio, a leading biorisk organization working on the AI-bio intersection, could productively absorb up to $1M in additional funding
- Regional opportunities exist for non-US donors, particularly in supporting AI governance non-profits like CLTR (UK) and CeSIA (France)
- The Center for AI Safety and its Political Action Fund, led by Dan Hendrycks, have demonstrated success in AI policy but lack Open Philanthropy funding
- Technical evaluation organizations like METR and Apollo Research require additional support for compute resources and research expansion
Strategic considerations: The current funding environment presents unique advantages for strategic donors.
- Smaller donors can act as “angel donors” for promising new organizations, potentially identifying opportunities that larger organizations might miss
- Diversifying funding sources can help reduce centralization risks
- Donations that fill these funding gaps can be as cost-effective as, or more cost-effective than, grants from major foundations
Looking ahead: The AI safety funding landscape appears to be in transition, with potential implications for future opportunities.
- New donors, including Jed McCaleb’s Navigation Fund, are beginning to make larger grants
- The mainstreaming of AI safety is likely to attract more diverse funding sources
- Current funding gaps may represent a temporary opportunity for heightened impact
- Scalable AI alignment efforts may offer even larger opportunities as they mature
Broader implications: The evolving funding dynamics in AI safety present both challenges and opportunities, with the current moment potentially offering uniquely impactful giving opportunities before the field becomes more saturated with donors.
In short: AI safety currently offers unusually strong funding opportunities for new donors.