AI Pioneer Ilya Sutskever Launches New Company to Tackle the Most Critical Problem of Our Time: Safe Superintelligence

Ilya Sutskever, co-founder of OpenAI, has launched a new AI company focused solely on developing safe superintelligence, raising questions about the future of AI safety research and the competitive landscape.

Key details of Sutskever’s new venture: Safe Superintelligence Inc. (SSI) was founded just one month after Sutskever’s departure from OpenAI, where he served as chief scientist:

  • Sutskever co-founded SSI with ex-Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.
  • The company’s singular mission is to build safe superintelligence, which the founders describe as the most important technical problem of our time.
  • Unlike OpenAI, which began as a nonprofit, SSI is designed from the ground up as a for-profit entity, with a business model intended to insulate safety, security, and progress from short-term commercial pressures.

Sutskever’s AI safety concerns and background: At OpenAI, Sutskever worked extensively on AI safety issues related to the potential rise of superintelligent AI systems:

  • In 2023, Sutskever co-authored a blog post predicting that human-level AI could arrive within a decade and warning that it will not necessarily be benevolent, making research into methods of controlling and restricting such systems essential.
  • Sutskever worked closely with Jan Leike, co-leader of OpenAI’s Superalignment team focused on AI safety; both departed the company in May 2024 after reported disagreements with OpenAI’s leadership over its approach to safety.

SSI’s approach and philosophy: The new company aims to advance AI capabilities rapidly while ensuring safety remains the top priority:

  • SSI’s team, investors, and business model are all aligned around the sole focus of achieving safe superintelligence.
  • The company plans to tackle safety and capabilities in parallel as engineering and scientific challenges, with no distractions from management overhead, product cycles, or near-term commercial pressures.
  • SSI currently has offices in Palo Alto and Tel Aviv where it is recruiting top technical talent.

Analyzing the implications: Sutskever’s new venture raises important questions about the future trajectory of AI development and the role of safety considerations:

  • The launch of SSI underscores the critical importance of AI safety research as the technology advances towards human-level intelligence, but also highlights ongoing debates and disagreements within the field over the best approaches.
  • Sutskever’s departure from OpenAI and his founding of a company focused solely on safe superintelligence suggest a potential splintering of the AI safety community and a more competitive landscape, as different teams race to develop powerful AI systems.
  • SSI’s for-profit structure and apparent ease of raising capital point to the significant economic incentives surrounding AI development, even as the long-term safety implications remain uncertain and highly consequential for society.
