The race to develop artificial general intelligence (AGI) has attracted massive investment despite uncertain timelines and feasibility. Ilya Sutskever, OpenAI's former chief scientist, known for controversial statements about AI consciousness, has launched a new venture that's drawing attention for its unusual business model.
The big picture: Safe Superintelligence, Sutskever’s new AI company, has achieved a $30 billion valuation without offering any products or clear technological differentiators.
- The company recently secured an additional $1 billion in funding from prominent investors including Andreessen Horowitz and Sequoia Capital
- Its valuation now exceeds that of established companies like Warner Bros., Nokia, and Dow Chemical
- The company explicitly states it will not release any products until it develops “safe superintelligence”
Unconventional business approach: Sutskever’s strategy deliberately avoids the traditional startup path of iterative product development and market competition.
- The company aims to remain “fully insulated from outside pressures” of product development and market competition
- Their website offers vague promises about approaching “safety and capabilities in tandem”
- No technical details or specific methodologies are provided to explain how they will achieve their goals
Investment context: The massive valuation comes amid growing skepticism about AGI timeline predictions and feasibility.
- The company’s valuation has increased from $5 billion to $30 billion since its launch in June 2024
- This rapid valuation growth coincides with broader industry hype about AGI, particularly from OpenAI’s Sam Altman
- Venture capitalists routinely back pre-product companies, but funding a venture with such an indefinite timeline is unusual
Leadership background: Sutskever’s departure from OpenAI adds an intriguing dimension to the company’s trajectory.
- He left OpenAI last summer following a failed attempt to remove CEO Sam Altman
- Sutskever previously made headlines by suggesting that current neural networks might be “slightly conscious”
- His track record includes both significant technical achievements and controversial statements about AI capabilities
Market reality check: The extraordinary gap between current AI capabilities and true superintelligence raises questions about investor expectations.
- Many experts debate whether AGI is achievable at all, let alone within a commercially viable timeframe
- The company’s success depends on achieving breakthroughs that have eluded researchers for decades
- There is no clear consensus on what constitutes superintelligence or how to ensure its safety
Investment risks and implications: The unprecedented valuation and business model represent a significant gamble on future technological developments.
- Investors face a potentially indefinite wait with no interim products or revenue
- The lack of concrete technical explanations or methodologies increases investment risk
- The company’s approach challenges traditional venture capital metrics and timelines
Reality versus ambition: Safe Superintelligence’s ambitious goals and massive valuation reflect both the optimism and potential disconnect in current AI investment trends. While the company’s focus on safety is commendable, the absence of concrete technological foundations or interim milestones raises questions about whether this represents genuine innovation or simply effective fundraising in an overheated market.
There’s Something Very Weird About This $30 Billion AI Startup by a Man Who Said Neural Networks May Already Be Conscious