Schmidt Sciences is launching a significant funding initiative to address technical AI safety challenges in the inference-time compute paradigm, a critical yet under-researched area of AI development. Offering grants of up to $500,000 to qualified research teams, the RFP targets projects that can deliver meaningful results within 12-18 months on the safety implications and opportunities of this emerging paradigm. The initiative represents an important push to proactively address potential risks as AI systems evolve toward using more computational resources during inference rather than only during training.
The big picture: Schmidt Sciences has opened applications for a research funding initiative focused on technical AI safety challenges related to the inference-time compute paradigm, with submissions due by April 30, 2025.
Key research focus: The RFP targets two complementary questions: how the inference-time compute paradigm affects model safety, and how the paradigm could be leveraged to make large language models safer.
Potential research directions: The RFP outlines several illustrative examples of project ideas spanning both persistent AI safety problems and new risks specific to inference-time compute.
Encouraged approaches: Schmidt Sciences is particularly interested in research that discovers novel failure modes, demonstrates replicable problems, designs robust evaluations, or constructs targeted safety interventions.