Schmidt Sciences offers $500K grants for AI safety research in inference-time computing

Schmidt Sciences is launching a funding initiative to address technical AI safety challenges in the inference-time compute paradigm, a critical yet under-researched area of AI development. The request for proposals (RFP) will award grants of up to $500,000 to qualified research teams for projects that can produce meaningful results within 12-18 months on the safety implications and opportunities of this emerging paradigm. The initiative represents a proactive effort to address potential risks as AI systems evolve toward using more computational resources during inference rather than only during training.

The big picture: Schmidt Sciences has opened applications for a research funding initiative focused on technical AI safety challenges related to the inference-time compute paradigm, with submissions due by April 30, 2025.

  • The program will fund teams up to $500,000 for research projects that can deliver significant outputs within 12-18 months.
  • Applications begin with a 500-word description of the proposed research idea, addressing the core question of which critical technical AI safety challenges or opportunities the inference-time compute paradigm presents.

Key research focus: The RFP specifically targets both understanding how the inference-time compute paradigm affects model safety and exploring how this paradigm could be leveraged to make large language models safer.

  • The core question asks researchers to identify and address the most critical technical AI safety challenges or opportunities emerging from this paradigm.
  • Schmidt Sciences emphasizes the need for tangible research outcomes that advance scientific understanding of inference-time compute safety.

Potential research directions: The RFP outlines several illustrative examples of project ideas spanning both persistent AI safety problems and new risks specific to inference-time compute.

  • Research can investigate enduring problems like adversarial robustness, contamination, and scalable oversight alongside newer concerns such as chain-of-thought faithfulness.
  • Projects may focus on scientifically understanding risks, designing safer models, or actively harnessing inference-time compute as a safety tool.

Encouraged approaches: Schmidt Sciences is particularly interested in research that discovers novel failure modes, demonstrates replicable problems, designs robust evaluations, or constructs targeted safety interventions.

  • Successful applications will likely propose work that produces concrete, actionable insights rather than purely theoretical exploration.
  • For further details, interested researchers can contact [email protected] or visit the Schmidt Sciences website.
Schmidt Sciences Technical AI Safety RFP on Inference-Time Compute – Deadline: April 30
