Signal/Noise

2025-11-21

While everyone debates AI bubbles and state regulations, a fundamental shift is happening beneath the surface: the dissolution of boundaries between AI companies, creating a single interconnected system in which competition becomes collaboration and individual corporate strategy becomes collective orchestration of the entire AI economy.

The Great AI Convergence: How Competition Became Collusion

The Microsoft-Nvidia-Anthropic deal announced this week isn’t just another partnership—it’s the latest evidence that the AI industry has evolved beyond competition into something resembling a single, distributed organism. Microsoft invests $5 billion in Anthropic while Anthropic commits to buying $30 billion in Microsoft compute. Nvidia invests in Anthropic while Anthropic commits to developing on Nvidia chips. It’s a perfect circle of mutual dependence that would make any antitrust lawyer’s head spin.

But this isn’t an anomaly—it’s the new normal. Google’s Alphabet owns DeepMind while investing in Anthropic. Amazon backs Anthropic while competing with it through Bedrock. OpenAI partners with Microsoft while Microsoft hedges with Anthropic. Every major player is simultaneously competitor, customer, supplier, and investor to every other player.

The strategic brilliance is that this structure makes traditional antitrust enforcement nearly impossible. There’s no single monopoly to break up when everyone owns everyone else. Instead of one company controlling AI, we have something far more sophisticated: a distributed monopoly where the entire industry functions as a single entity with aligned incentives. When Satya Nadella says “we are increasingly going to be customers of each other,” he’s describing the architecture of the post-competitive economy.

This convergence solves the fundamental problem of AI development: the astronomical costs require risk-sharing that transcends traditional corporate boundaries. No single company can afford the full stack alone, so they’ve collectively created a system where success means everyone wins—and failure means everyone loses together. It’s not market consolidation; it’s market transcendence.

The Insurance Revolt: When Risk Becomes Uninsurable

While tech executives paint rosy pictures of AI’s future, insurers—the people whose literal job is pricing risk—are running for the exits. The growing reluctance to provide AI coverage isn’t just about technical uncertainty; it’s about the recognition that AI represents a fundamentally new category of systemic risk that traditional insurance models can’t handle.

Unlike traditional technology risks, AI failures don’t scale linearly. A faulty software update might crash some systems. But an AI model making decisions across millions of transactions, healthcare diagnoses, or financial trades can create cascading failures that spread faster than any human response. The potential for multibillion-dollar claims isn’t hypothetical—it’s inevitable in a system where AI touches everything.

The insurance retreat reveals something crucial that the AI hype machine obscures: even sophisticated risk-assessment professionals can’t confidently model AI’s potential for catastrophic failure. When Warren Buffett won’t insure something, that tells you more about its real risk profile than any venture capital valuation.

This creates a paradox for AI adoption. Companies need insurance to deploy AI at scale, but insurers won’t provide coverage for systems whose failure modes they can’t understand or price. The result is that AI deployment is happening with dramatically less risk mitigation than any comparable technology in history. We’re essentially flying blind at 30,000 feet, and the people who usually sell us parachutes have decided they’d rather stay on the ground.

The insurance industry’s caution should be a wake-up call, but instead it’s being ignored in favor of moving fast and breaking things. That strategy works fine until the things that break are critical infrastructure, financial systems, or human lives.

The Regulatory Preemption Gambit: Federal Power Grab Disguised as Innovation Policy

Trump’s draft executive order to override state AI regulations isn’t really about federal versus state authority—it’s about creating a regulatory vacuum that benefits the AI industry at the expense of democratic oversight. The proposed DOJ AI litigation task force would systematically challenge any state that dares to impose meaningful constraints on AI development, using interstate commerce and First Amendment arguments as weapons.

The genius of this strategy is that it frames opposition to AI regulation as support for innovation and constitutional rights. Who could be against free speech and economic growth? But look closer and you’ll see something more sinister: the order explicitly targets states that require AI systems to alter “truthful outputs”—essentially arguing that AI companies have a constitutional right to deploy systems that generate harmful content without accountability.

The threat to withhold federal broadband funding from non-compliant states reveals the real game. This isn’t about constitutional principles; it’s about using federal leverage to prevent any meaningful constraints on AI development. States like California and Colorado have tried to implement basic transparency requirements—publish how you train models, report safety measures—and even these modest steps are deemed unacceptable.

What’s particularly telling is the order’s call for a “minimally burdensome national standard.” Translation: a federal framework so weak it provides political cover for inaction while preventing states from implementing stronger protections. It’s regulatory capture disguised as regulatory clarity.

The tech industry has learned that it’s easier to capture one federal regulator than fifty state regulators. By preempting state action without providing meaningful federal oversight, they create the best of all worlds: the appearance of regulation with none of the substance. It’s a masterclass in using federalism as a shield rather than a principle.

Questions

  • If the AI industry has evolved beyond traditional competition, should we be regulating it like a utility rather than a collection of competing firms?
  • What happens when the technologies powering our economy become too risky for the insurance industry to cover—and should that tell us something about deployment speed?
  • Is the push for federal preemption of AI regulation actually about preventing any regulation at all, and what does that mean for democratic oversight of transformative technology?
