Signal/Noise
2025-11-13
While everyone obsesses over ChatGPT’s latest features, a quieter transformation is reshaping the entire AI landscape: the rise of cheap, capable Chinese models is forcing Western companies to abandon their premium pricing strategies, just as AI moves from experimental toy to critical business infrastructure. This isn’t about model quality anymore; it’s about who controls the economic foundation of the industry.
The Great AI Price War Has Already Been Won
Chinese AI models are quietly eating Silicon Valley’s lunch, and most Western executives haven’t even noticed they’re at war. DeepSeek’s pricing runs up to 40 times cheaper than OpenAI’s for comparable performance. Chinese open-weight models now dominate usage rankings on developer platforms, with seven of the top 20 models coming from China. This isn’t about subsidized dumping—it’s about fundamentally different cost structures and business models that make Western premium pricing obsolete.
The implications cascade beyond pricing. When Alibaba’s Qwen becomes the default choice for US startups building AI features, when Chinese models power coding assistants that American developers use daily, when Moonshot’s Kimi handles enterprise workflows that once required expensive ChatGPT subscriptions, you’re witnessing infrastructure capture in real time. These aren’t just cheaper alternatives—they’re becoming the foundation layer that everything else builds on.
Google’s response is telling: rushing to add agentic shopping features and AI calling capabilities not because users desperately need them, but because they need reasons to justify premium pricing in a world where the underlying intelligence is becoming commoditized. When your core product—language understanding and generation—can be delivered at 1/40th the cost by competitors, you either find new value propositions or watch your margins evaporate.
The Infrastructure Layer Is Everything
While markets obsess over model capabilities and ChatGPT personalities, the real battle is being fought in the infrastructure layer—and here’s where the money actually flows. Consider the numbers: major tech companies spent $360 billion on AI infrastructure last year alone. Nvidia hit a $5 trillion valuation. Data centers for AI will consume electricity equivalent to 44 million US households. This infrastructure buildout dwarfs the Manhattan Project in scale and the space race in strategic importance.
The China dimension adds urgency. Bloomberg’s analysis reveals that China leads in data volume (28% of global generation) and power infrastructure (double the US’s electricity capacity), while the US maintains advantages in elite talent and advanced chips. But here’s the kicker: Chinese companies are proving that older-generation chips and smaller models can deliver comparable results through better algorithms and training efficiency. DeepSeek and others are turning hardware constraints into competitive advantages.
This creates a fascinating dynamic where AI capabilities are becoming democratized even as the infrastructure to run them at scale becomes more concentrated. The winners won’t be those with the smartest models—they’ll be those who can deliver intelligence cheapest and most reliably at global scale. That’s why Google is partnering with Hugging Face, why Microsoft is racing to secure cloud infrastructure, and why every major tech company is building their own data centers rather than renting capacity.
From Lab Experiment to Mission-Critical Infrastructure
AI has crossed the Rubicon from experimental technology to mission-critical infrastructure, and most organizations are discovering they’re utterly unprepared for this transition. The shift is visible everywhere: from the Warriors’ front office using AI to evaluate trades worth millions, to youth soccer clubs using it to settle parent disputes about playing time, to OpenAI providing open-weight models to the US military for sensitive operations.
This transition creates a new category of risk that most enterprises haven’t fully grasped. When your hiring depends on AI screening, your customer service runs on AI agents, and your financial forecasting relies on AI models, system failures don’t just cause inconvenience—they cause business continuity crises. The recent focus on “AI brain rot” and model reliability isn’t academic anymore; it’s operational risk management.
The speed of this transition is forcing uncomfortable choices. Companies can either adopt AI tools that may have reliability issues, or fall behind competitors who are moving faster with higher risk tolerance. The middle ground—careful, methodical adoption—is disappearing as markets punish hesitation. This explains why 57% of B2B companies have already put AI agents into production despite widespread concerns about transparency and control. They’re not choosing AI because it’s perfect; they’re choosing it because standing still is riskier than moving fast with imperfect tools.
Questions
- If Chinese models can deliver 90% of the capability at 2.5% of the cost, what happens to the $360 billion Western companies spent building AI infrastructure on the assumption of premium pricing?
- When mission-critical business functions depend on AI systems that even their creators can’t fully explain or control, how do you quantify and manage that systemic risk?
- Is the real AI arms race between the US and China actually being fought in power grids and data centers rather than research labs and talent acquisition?
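The arithmetic behind the figures in the questions above (40x cheaper, 2.5% of the cost, 90% of the capability) can be sanity-checked with a quick sketch. The per-million-token prices below are illustrative placeholders chosen to produce a 40x gap, not quotes from any provider’s actual price sheet:

```python
# Hypothetical prices, chosen only to illustrate a 40x gap.
premium_price = 10.00  # assumed $ per million tokens, Western premium model
budget_price = 0.25    # assumed $ per million tokens, low-cost challenger

price_ratio = premium_price / budget_price  # how many times cheaper
cost_share = budget_price / premium_price   # challenger's cost as a fraction

# Capability-adjusted cost: if the cheaper model delivers 90% of the
# capability at 2.5% of the price, its cost per unit of capability is
# still under 3% of the premium model's.
capability = 0.90
cost_per_capability = cost_share / capability

print(f"{price_ratio:.0f}x cheaper")           # 40x cheaper
print(f"{cost_share:.1%} of the cost")         # 2.5% of the cost
print(f"{cost_per_capability:.1%} cost per capability unit vs. premium")
```

Even weighting for the capability gap, the challenger’s cost per unit of capability comes out around 2.8% of the premium model’s under these assumptions, which is the gap the questions above are probing.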
Past Briefings
Bill Gurley Says the AI Bubble Is About to Burst. Travis Kalanick’s Timing Says He’s Right.
THE NUMBER: $300 billion — HSBC's estimate of cumulative cash burn by foundational AI model companies through 2030. Bill Gurley sat on Uber's board while it burned $2 billion a year and says it gave him "high anxiety." OpenAI and Anthropic make Uber's bonfire look like a birthday candle. "God bless them," Gurley told CNBC. "It's a scary way to run a company." Travis Kalanick showed up on the All-In podcast this week with a new robotics venture called Atoms and opinions about who's winning the autonomy race. That's the headline most people caught. But the deeper signal is the...
Mar 17, 2026
Anthropic Is Winning the Product War. The $575 Billion Question Is Whether Anyone Can Afford to Keep Fighting
THE NUMBER: 12x — For every dollar the hyperscalers earn from AI today, they're spending twelve dollars building more capacity. That's $575 billion in capex this year. Alphabet just issued a century bond — the first by a tech company since Motorola in 1997 — to fund it. The debt matures in 2126. The chips it buys will be obsolete by 2029. Anthropic now wins 70% of new enterprise deals in direct matchups with OpenAI, according to Ramp's March 2026 AI Index. Claude Code generates $2.5 billion in annualized revenue. OpenAI's Codex manages $1 billion. OpenAI's enterprise share dropped from...
Mar 16, 2026
Chamath Says Your Portfolio Is Worth 75% Less Than You Think. Karpathy’s Data Suggests He’s Right.
THE NUMBER: 60-80% — the share of a typical equity valuation derived from terminal value. That's the portion of every stock price that assumes competitive advantages persist for a decade or more. Chamath Palihapitiya just argued that AI makes that assumption unpriceable. If he's even half right, the math doesn't bend. It breaks. Chamath Palihapitiya posted a note this weekend titled "The Collapse of Terminal Value" that should be required reading for anyone who allocates capital — including the capital of their own career. His thesis: AI accelerates disruption so fast that no company can credibly project cash flows beyond five...