Signal/Noise
2025-11-21
The AI industry is fracturing along a new axis: hardware sovereignty versus algorithmic supremacy. While everyone debates AI bubbles and model capabilities, the real power grab is happening in the physical layer—where chips are made, data centers are built, and who controls the infrastructure that makes AI possible.
The Hardware Sovereignty Play
OpenAI’s partnership with Foxconn isn’t just another supply chain deal—it’s a declaration of independence from the existing AI infrastructure cartel. By co-designing data center racks, cabling, and power systems in U.S. facilities, OpenAI is doing what no pure AI company has attempted: building vertical integration from silicon to service.
This matters because the current AI stack is a house of cards. OpenAI pays Nvidia for chips, Microsoft for cloud compute, and relies on Taiwan’s TSMC for manufacturing. Each dependency is a chokepoint. Foxconn’s U.S. factories in Ohio and Texas suddenly become strategic assets in a way that has nothing to do with iPhones.
The timing isn’t coincidental. Trump’s draft executive order threatening to withhold federal funding from states that regulate AI shows how hardware and policy are converging. The message is clear: AI infrastructure is now national security infrastructure. Companies that control the physical layer will increasingly dictate the terms of the digital layer.
Meanwhile, Europe’s Digital Omnibus is pulling in the opposite direction, loosening data restrictions and allowing AI training on personal data with fewer safeguards. The EU is essentially trading privacy for AI competitiveness, recognizing that regulatory purity is a luxury they can’t afford when competing with China and the U.S.
What’s fascinating is how this hardware scramble is happening while the software layer remains unsettled. Anthropic’s Claude Opus 4.5 is reportedly crushing Excel tasks, Google’s Gemini 3 Pro is pushing boundaries, yet none of these advances matter if you don’t control the factories where the inference happens.
The Great Unbundling Begins
While everyone obsesses over Nvidia’s earnings, a more fundamental shift is occurring: the AI stack is unbundling faster than anyone anticipated. Microsoft’s Ignite conference revealed the chaos hiding beneath the corporate messaging—they now have Agent 365, Agent HQ, Data IQ, Fabric IQ, and Foundry IQ, with customers confused about which solution solves what problem.
This isn’t poor product management; it’s the natural result of AI eating every layer of the software stack simultaneously. When your chatbot can write Excel formulas, design presentations, and manage infrastructure, the traditional boundaries between applications disappear. Microsoft is desperately trying to re-bundle services that AI has made obsolete.
The same fragmentation is visible everywhere. Google is testing ads in AI Mode because their search advertising model breaks when people stop clicking links. The EU is allowing personal data for AI training because their privacy-first approach was killing competitiveness. Even Foxconn is partnering with OpenAI because making phones isn’t enough when the future runs on data centers.
This unbundling creates winners and losers in unexpected places. AMD’s stock is up 99% this year—not because their chips are better than Nvidia’s, but because customers desperately want alternatives to avoid vendor lock-in. Traditional software companies are getting hollowed out from both ends: AI-native startups are eating their core functionality while big tech platforms are absorbing their distribution.
The most telling signal? VCs are demanding proof of defensible moats before writing checks. The era of ‘ChatGPT wrapper’ startups is ending not because the technology isn’t impressive, but because anyone can access the same APIs. The new question isn’t ‘what can your AI do?’ but ‘what prevents someone else from doing it cheaper tomorrow?’
The Regulation Arbitrage Accelerates
Buried in today’s news is a fascinating contradiction: California is implementing the first major chatbot safety regulations while the federal government drafts orders to override state AI laws entirely. This isn’t just federalism—it’s regulatory arbitrage at scale.
California’s SB 243 requires companies to report safety concerns and remind users they’re talking to computers, not humans. Meanwhile, Trump’s draft executive order would create a DOJ task force to challenge such laws and potentially withhold broadband funding from non-compliant states. The message is brutally clear: states can either embrace federal AI priorities or lose federal infrastructure money.
The timing matters because AI companies are racing to establish facts on the ground before regulation catches up. OpenAI is committing $1.4 trillion to infrastructure; Nvidia is projecting $65 billion in quarterly chip sales; Microsoft is embedding AI across every product line. By the time courts resolve federal vs. state authority, the market structure will be locked in.
Europe’s Digital Omnibus reveals the endgame: regulators eventually capitulate to industry demands because economic competitiveness trumps consumer protection. The EU is now allowing AI training on personal data and reducing consent requirements—exactly what privacy advocates warned against three years ago.
What’s remarkable is how AI safety concerns are being weaponized for competitive advantage. When Anthropic calls for transparency requirements, they’re not just concerned about safety—they’re trying to impose costs on competitors who built their models with less documentation. When established players demand licensing schemes, they’re creating barriers for newcomers.
The real winner isn’t any particular company or country, but the AI industry itself. By creating regulatory complexity, they ensure that only well-funded players can navigate compliance costs, effectively turning regulation into a moat rather than a constraint.
Questions
- If AI infrastructure is now national security infrastructure, what happens when Amazon Web Services hosts Chinese AI models?
- When every software application becomes AI-powered, do traditional software categories still matter?
- Are we building the regulatory framework for today’s AI or tomorrow’s—and does the difference matter anymore?