The evolution of AI companies’ approaches to open-source development has become a contentious issue, particularly between industry pioneers. Elon Musk’s recent $97.4 billion bid to acquire OpenAI, a company he co-founded in 2015, has highlighted growing tensions around AI transparency and accessibility.
The key conflict: Elon Musk’s criticism of OpenAI for not open-sourcing ChatGPT contrasts sharply with his own company xAI’s practices regarding their Grok AI model.
- Musk’s $97.4 billion bid to acquire OpenAI was rejected by the company’s board
- Throughout 2024, Musk filed four separate legal actions against OpenAI, claiming it had strayed from its open-source mission
- Musk has publicly insisted that OpenAI should return to being an “open source, safety-focused force for good”
Current state of affairs: xAI’s approach to open-source development reveals inconsistencies with Musk’s public stance on AI transparency.
- Only Grok 1, xAI’s initial model, has been open-sourced; its weights were released in March 2024, four months after the model’s launch
- Subsequent models, including the recently released Grok 3, remain closed-source
- This practice mirrors the very approach Musk criticizes OpenAI for taking
Technical context: The definition of “open-source” in AI has evolved significantly in recent years, creating new debates about transparency.
- Traditional open-source meant making program source code publicly available
- In AI, companies now often release only a model’s “weights” – the numerical parameters that determine the connections between a neural network’s nodes – rather than the training code and data behind it (see the sketch after this list)
- This shift has sparked industry-wide discussions about what constitutes true AI transparency
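To make the distinction concrete, here is a minimal sketch of what an “open-weights” release typically looks like from a developer’s perspective. It assumes the Hugging Face transformers library; the repository ID is a hypothetical placeholder, not any specific vendor’s release.

```python
# Minimal sketch: what "releasing the weights" typically means in practice.
# Assumes the Hugging Face `transformers` library; the repo ID below is a
# hypothetical placeholder, not a specific vendor's release.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/open-weights-model"  # placeholder for illustration

# Download the published weights and tokenizer. Note that the training code
# and data pipeline are usually NOT part of such a release.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Inspect the parameters ("weights") that an open-weights release actually ships.
num_params = sum(p.numel() for p in model.parameters())
print(f"Loaded {num_params / 1e9:.1f}B parameters")
```

In other words, an open-weights release lets anyone run and fine-tune the model, but it does not necessarily reveal how the model was built, which is the crux of the current transparency debate.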
Market impact: Chinese AI startup DeepSeek’s open-source R1 reasoning model has demonstrated the potential advantages of transparent AI development.
- DeepSeek’s model operates at a fraction of the cost of comparable offerings from major Western AI companies
- This development has challenged the closed-source approach favored by companies like OpenAI and xAI
- The success of open-source models raises questions about the long-term viability of proprietary AI development
Strategic implications: The disconnect between Musk’s public statements and xAI’s practices suggests deeper industry tensions around AI development approaches, with potential consequences for future market dynamics and technological progress in the field.