Schmidt Sciences offers $500K grants for AI safety research in inference-time computing

Schmidt Sciences is launching a funding initiative to address technical AI safety challenges in the inference-time compute paradigm, a critical yet under-researched area of AI development. The request for proposals (RFP) will award grants of up to $500,000 to qualified research teams for projects that can produce meaningful results on the safety implications and opportunities of this emerging paradigm within 12-18 months. The initiative represents a proactive push to address potential risks as AI systems evolve toward using more computational resources during inference rather than only during training.

The big picture: Schmidt Sciences has opened applications for a research funding initiative focused on technical AI safety challenges related to the inference-time compute paradigm, with submissions due by April 30, 2025.

  • The program will provide teams with up to $500,000 for research projects that can deliver significant outputs within 12-18 months.
  • Applications begin with a 500-word description of the proposed research idea, addressing the core question of what critical technical AI safety challenges or opportunities the inference-time compute paradigm presents.

Key research focus: The RFP specifically targets both understanding how the inference-time compute paradigm affects model safety and exploring how this paradigm could be leveraged to make large language models safer.

  • The core question asks researchers to identify and address the most critical technical AI safety challenges or opportunities emerging from this paradigm.
  • Schmidt Sciences emphasizes the need for tangible research outcomes that advance scientific understanding of inference-time compute safety.

Potential research directions: The RFP outlines several illustrative examples of project ideas spanning both persistent AI safety problems and new risks specific to inference-time compute.

  • Research can investigate enduring problems like adversarial robustness, contamination, and scalable oversight alongside newer concerns such as chain-of-thought faithfulness.
  • Projects may focus on scientifically understanding risks, designing safer models, or actively harnessing inference-time compute as a safety tool.

Encouraged approaches: Schmidt Sciences is particularly interested in research that discovers novel failure modes, demonstrates replicable problems, designs robust evaluations, or constructs targeted safety interventions.

  • Successful applications will likely propose work that produces concrete, practical insights rather than purely theoretical exploration.
  • For further details, interested researchers can contact [email protected] or visit the Schmidt Sciences website.

Schmidt Sciences Technical AI Safety RFP on Inference-Time Compute – Deadline: April 30, 2025
