The Center on Long-Term Risk (CLR) is recruiting researchers for its Summer Research Fellowship, which focuses on empirical AI safety work aimed at reducing suffering risks (s-risks) in the far future. The eight-week program offers mentorship, collaboration opportunities, and integration with CLR’s research team, with applications due by April 15th. This year’s fellowship notably shifts focus toward empirical AI safety research and seeks candidates who might transition to full-time s-risk research.
The big picture: CLR’s 2025 Summer Research Fellowship targets AI safety researchers interested in reducing long-term suffering risks through an eight-week collaborative research program.
- Fellows will work on independent projects while receiving guidance from experienced mentors and regular interaction with CLR’s research team.
- The program emphasizes empirical AI safety research connected to s-risk reduction, giving fellows the chance to contribute to this specialized field.
Key differences this year: The 2025 fellowship has been redesigned with several significant changes from previous iterations.
- CLR is specifically seeking applicants interested in empirical AI safety work relevant to s-risks, even if they’re less familiar with CLR’s specific research approach.
- The organization has streamlined the first application round and expects to make only two to four offers, targeting individuals seriously considering transitioning into s-risk research.
Research priorities: The fellowship focuses on three main areas of empirical AI safety research relevant to reducing future suffering risks.
- The personas/characters track examines how AI models develop different personalities, potential training paths that could create malevolent tendencies, and preference formation in misaligned models.
- Multi-agent dynamics research explores how models behave during extended interactions with other agents and how to predict their behavior across diverse scenarios.
- The AI for strategy research track investigates how AI assistants might contribute to macrostrategy research and methods for verifying AI-generated research quality.
How to apply: Interested candidates must submit applications by Tuesday, April 15th at 11:59 PM Pacific Time through CLR’s website.
Center on Long-Term Risk: Summer Research Fellowship 2025 - Apply Now