OpenAI Co-Founder Secures $1B for New AI Safety Venture

OpenAI co-founder launches rival AI venture: Ilya Sutskever, former chief scientist at OpenAI, has secured $1 billion in funding for his new artificial intelligence company, Safe Superintelligence (SSI), aimed at developing advanced AI systems with a focus on safety.

Funding details and investors: The substantial investment in SSI comes from notable venture capital firms, highlighting the growing interest in AI safety and development.

  • Andreessen Horowitz (a16z), a prominent VC firm known for its stance against California’s AI safety bill, is among the investors backing SSI.
  • Sequoia Capital, which has also invested in OpenAI, has contributed to the funding round, demonstrating its continued interest in the AI sector.
  • The $1 billion raised will be allocated to developing AI systems that significantly exceed human capabilities while prioritizing safety measures.

Company vision and timeline: SSI’s leadership has outlined ambitious goals for the company, emphasizing a long-term approach to AI development and safety.

  • The company’s CEO stated that SSI currently has no product offerings and does not expect to release any for several years.
  • This timeline suggests a focus on fundamental research and development rather than immediate commercialization.
  • The emphasis on creating superintelligent AI systems that surpass human abilities indicates SSI’s commitment to pushing the boundaries of AI technology.

Background and industry context: Sutskever’s new venture comes amid a complex history with OpenAI and reflects broader trends in the AI industry.

  • Sutskever co-founded OpenAI with Sam Altman and later took part in the board's brief, unsuccessful attempt to remove Altman as CEO, adding a layer of intrigue to the launch of SSI.
  • The involvement of high-profile investors who have also backed OpenAI suggests a growing ecosystem of competing yet interconnected AI research entities.
  • SSI’s focus on safety aligns with increasing concerns about the potential risks associated with advanced AI systems.

Implications for the AI landscape: The launch of SSI with substantial funding could have far-reaching effects on the AI industry and the development of superintelligent systems.

  • The entry of a new, well-funded player in the AI safety space may accelerate research and innovation in this critical area.
  • Competition between SSI and established entities like OpenAI could drive advancements in AI capabilities and safety measures.
  • The long-term approach taken by SSI might influence industry standards for responsible AI development and deployment.

Analyzing the investment strategy: The significant funding secured by SSI raises questions about investor expectations and the valuation of AI safety research.

  • The willingness of major VC firms to invest heavily in a company without immediate product plans underscores the perceived long-term value of AI safety research.
  • This investment strategy may signal a shift in how the tech industry views the importance of addressing potential risks associated with advanced AI systems.
  • The involvement of investors with seemingly conflicting positions on AI regulation (such as a16z’s stance on the California AI safety bill) highlights the complex dynamics at play in the AI industry.
