Schmidt Sciences offers $500K grants for AI safety research in inference-time computing
Schmidt Sciences is launching a significant funding initiative to address technical AI safety challenges in the inference-time compute paradigm, a critical yet under-researched area of AI development. The RFP offers grants of up to $500,000 to qualified research teams for projects on the safety implications and opportunities of this emerging paradigm that can produce meaningful results within 12-18 months. The initiative represents an important push to proactively address potential risks as AI systems evolve toward using more computational resources during inference rather than just during training.

The big picture: Schmidt Sciences has opened applications for a research funding initiative focused on technical AI safety challenges related to the inference-time compute paradigm, with submissions due by April 30, 2025.

  • The program will fund teams up to $500,000 for research projects that can deliver significant outputs within 12-18 months.
  • Applications begin with a 500-word description of the proposed research idea, addressing the core question of which critical technical AI safety challenges or opportunities the inference-time compute paradigm presents.

Key research focus: The RFP specifically targets both understanding how the inference-time compute paradigm affects model safety and exploring how this paradigm could be leveraged to make large language models safer.

  • The core question asks researchers to identify and address the most critical technical AI safety challenges or opportunities emerging from this paradigm.
  • Schmidt Sciences emphasizes the need for tangible research outcomes that advance scientific understanding of inference-time compute safety.

Potential research directions: The RFP outlines several illustrative examples of project ideas spanning both persistent AI safety problems and new risks specific to inference-time compute.

  • Research can investigate enduring problems like adversarial robustness, contamination, and scalable oversight alongside newer concerns such as chain-of-thought faithfulness.
  • Projects may focus on scientifically understanding risks, designing safer models, or actively harnessing inference-time compute as a safety tool.

Encouraged approaches: Schmidt Sciences is particularly interested in research that discovers novel failure modes, demonstrates replicable problems, designs robust evaluations, or constructs targeted safety interventions.

  • Successful applications will likely propose work that produces concrete, practical insights rather than purely theoretical exploration.
  • For further details, interested researchers can contact [email protected] or visit the Schmidt Sciences website.
Schmidt Sciences Technical AI Safety RFP on Inference-Time Compute – Deadline: April 30
