AI Pioneer Ilya Sutskever Launches New Company to Tackle the Most Critical Problem of Our Time: Safe Superintelligence

Ilya Sutskever, co-founder of OpenAI, has launched a new AI company focused solely on developing safe superintelligence, raising questions about the future of AI safety research and the competitive landscape.

Key details of Sutskever’s new venture: Safe Superintelligence Inc. (SSI) was founded just one month after Sutskever’s departure from OpenAI, where he served as chief scientist:

  • Sutskever co-founded SSI with ex-Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.
  • The company’s singular mission is to build “safe superintelligence” or “SSI”, which they believe is the most important technical problem of our time.
  • Unlike OpenAI, which began as a nonprofit, SSI is designed from the ground up as a for-profit entity, with a business model aimed at insulating safety, security, and progress from short-term commercial pressures.

Sutskever’s AI safety concerns and background: At OpenAI, Sutskever worked extensively on AI safety issues related to the potential rise of superintelligent AI systems:

  • In 2023, Sutskever co-authored a blog post predicting that human-level AI could arrive within a decade and may not necessarily be benevolent, necessitating research into control and restriction methods.
  • Sutskever worked closely with Jan Leike, who co-led OpenAI’s Superalignment team focused on AI safety, before both departed the company in May 2024 after reported disagreements with OpenAI’s leadership over safety approaches.

SSI’s approach and philosophy: The new company aims to advance AI capabilities rapidly while ensuring safety remains the top priority:

  • SSI’s team, investors, and business model are all aligned around the sole focus of achieving safe superintelligence.
  • The company plans to tackle safety and capabilities in parallel as engineering and scientific challenges, with no distractions from management overhead, product cycles, or near-term commercial pressures.
  • SSI currently has offices in Palo Alto and Tel Aviv where it is recruiting top technical talent.

Analyzing the implications: Sutskever’s new venture raises important questions about the future trajectory of AI development and the role of safety considerations:

  • The launch of SSI underscores the critical importance of AI safety research as the technology advances towards human-level intelligence, but also highlights ongoing debates and disagreements within the field over the best approaches.
  • Sutskever’s departure from OpenAI and the founding of a new company focused solely on safe superintelligence suggest a potential splintering of the AI safety community and a more competitive landscape as different teams race to develop powerful AI systems.
  • SSI’s for-profit structure and apparent ease of raising capital point to the significant economic incentives surrounding AI development, even as the long-term safety implications remain uncertain and highly consequential for society.
