Center on Long-Term Risk opens fellowship for AI safety researchers focused on reducing suffering risks
The Center on Long-Term Risk is recruiting researchers for its Summer Research Fellowship focused on empirical AI safety work aimed at reducing suffering risks in the far future. The eight-week program offers mentorship, collaboration opportunities, and integration with CLR’s research team, with applications due by April 15th. This year’s fellowship notably shifts focus toward empirical AI safety research while seeking candidates who might transition to full-time s-risk research.

The big picture: CLR’s 2025 Summer Research Fellowship targets AI safety researchers interested in reducing long-term suffering risks through an eight-week collaborative research program.

  • Fellows will work on independent projects while receiving guidance from experienced mentors and regular interaction with CLR’s research team.
  • The program emphasizes empirical AI safety research connected to s-risk reduction, creating opportunities for meaningful contributions to this specialized field.

Key differences this year: The 2025 fellowship has been redesigned with several significant changes from previous iterations.

  • CLR is specifically seeking applicants interested in empirical AI safety work relevant to s-risks, even if they’re less familiar with CLR’s specific research approach.
  • The organization has streamlined the first application round and expects to make only two to four offers, targeting individuals seriously considering transitioning into s-risk research.

Research priorities: The fellowship focuses on three main areas of empirical AI safety research relevant to reducing future suffering risks.

  • The personas/characters track examines how AI models develop different personalities, potential training paths that could create malevolent tendencies, and preference formation in misaligned models.
  • Multi-agent dynamics research explores how models behave during extended interactions with other agents and how to predict their behavior across diverse scenarios.
  • The AI for strategy research track investigates how AI assistants might contribute to macrostrategy research and methods for verifying AI-generated research quality.

How to apply: Interested candidates must submit applications by Tuesday, April 15th at 11:59 PM Pacific Time through CLR’s website.

Center on Long-Term Risk: Summer Research Fellowship 2025 - Apply Now
