AI Pioneer Ilya Sutskever Launches New Company to Tackle the Most Critical Problem of Our Time: Safe Superintelligence
Ilya Sutskever, co-founder of OpenAI, has launched a new AI company focused solely on developing safe superintelligence, raising questions about the future of AI safety research and the competitive landscape.

Key details of Sutskever’s new venture: Safe Superintelligence Inc. (SSI) was founded just one month after Sutskever’s departure from OpenAI, where he served as chief scientist:

  • Sutskever co-founded SSI with ex-Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.
  • The company’s singular mission is to build “safe superintelligence,” or SSI, which its founders describe as the most important technical problem of our time.
  • Unlike OpenAI, which began as a nonprofit, SSI is designed from the ground up as a for-profit entity, with a business model intended to insulate safety, security, and progress from short-term commercial pressures.

Sutskever’s AI safety concerns and background: At OpenAI, Sutskever worked extensively on AI safety issues related to the potential rise of superintelligent AI systems:

  • In 2023, Sutskever co-authored a blog post predicting that human-level AI could arrive within a decade and may not necessarily be benevolent, necessitating research into control and restriction methods.
  • Sutskever worked closely with Jan Leike, who co-led OpenAI’s Superalignment team focused on AI safety, before both departed the company in May 2024 after reported disagreements with OpenAI’s leadership over safety approaches.

SSI’s approach and philosophy: The new company aims to advance AI capabilities rapidly while ensuring safety remains the top priority:

  • SSI’s team, investors, and business model are all aligned around the sole focus of achieving safe superintelligence.
  • The company plans to tackle safety and capabilities in parallel as engineering and scientific challenges, with no distractions from management overhead, product cycles, or near-term commercial pressures.
  • SSI currently has offices in Palo Alto and Tel Aviv, where it is recruiting top technical talent.

Analyzing the implications: Sutskever’s new venture raises important questions about the future trajectory of AI development and the role of safety considerations:

  • The launch of SSI underscores the critical importance of AI safety research as the technology advances towards human-level intelligence, but also highlights ongoing debates and disagreements within the field over the best approaches.
  • Sutskever’s departure from OpenAI and his founding of a company focused solely on safe superintelligence suggest a potential splintering of the AI safety community and a more competitive landscape as different teams race to develop powerful AI systems.
  • SSI’s for-profit structure and apparent ease of raising capital point to the significant economic incentives surrounding AI development, even as the long-term safety implications remain uncertain and highly consequential for society.
