Signal/Noise

2025-11-21

While everyone debates AI bubbles and state regulations, a fundamental shift is happening beneath the surface: the dissolution of boundaries between AI companies is creating a single interconnected system in which competition becomes collaboration and individual corporate strategy becomes collective orchestration of the entire AI economy.

The Great AI Convergence: How Competition Became Collusion

The Microsoft-Nvidia-Anthropic deal announced this week isn’t just another partnership—it’s the latest evidence that the AI industry has evolved beyond competition into something resembling a single, distributed organism. Microsoft invests $5 billion in Anthropic while Anthropic commits to buying $30 billion in Microsoft compute. Nvidia invests in Anthropic while Anthropic commits to developing on Nvidia chips. It’s a perfect circle of mutual dependence that would make any antitrust lawyer’s head spin.

But this isn’t an anomaly; it’s the new normal. Alphabet, Google’s parent, owns DeepMind while also investing in Anthropic. Amazon backs Anthropic while competing with it through Bedrock. OpenAI partners with Microsoft while Microsoft hedges with Anthropic. Every major player is simultaneously competitor, customer, supplier, and investor to every other player.

The strategic brilliance is that this structure makes traditional antitrust enforcement nearly impossible. There’s no single monopoly to break up when everyone owns everyone else. Instead of one company controlling AI, we have something far more sophisticated: a distributed monopoly where the entire industry functions as a single entity with aligned incentives. When Satya Nadella says ‘we are increasingly going to be customers of each other,’ he’s describing the architecture of the post-competitive economy.

This convergence solves the fundamental problem of AI development: the astronomical costs require risk-sharing that transcends traditional corporate boundaries. No single company can afford the full stack alone, so they’ve collectively created a system where success means everyone wins—and failure means everyone loses together. It’s not market consolidation; it’s market transcendence.

The Insurance Revolt: When Risk Becomes Uninsurable

While tech executives paint rosy pictures of AI’s future, insurers—the people whose literal job is pricing risk—are running for the exits. The growing reluctance to provide AI coverage isn’t just about technical uncertainty; it’s about the recognition that AI represents a fundamentally new category of systemic risk that traditional insurance models can’t handle.

Unlike traditional technology risks, AI failures don’t scale linearly. A faulty software update might crash some systems. But an AI model making decisions across millions of transactions, healthcare diagnoses, or financial trades can create cascading failures that spread faster than any human response. The potential for multibillion-dollar claims isn’t hypothetical—it’s inevitable in a system where AI touches everything.

The insurance retreat reveals something crucial that the AI hype machine obscures: even sophisticated risk-assessment professionals can’t confidently model AI’s potential for catastrophic failure. When the insurers themselves won’t underwrite something, that tells you more about its real risk profile than any venture capital valuation.

This creates a paradox for AI adoption. Companies need insurance to deploy AI at scale, but insurers won’t provide coverage for systems whose failure modes they can’t understand or price. The result is that AI deployment is happening with dramatically less risk mitigation than any comparable technology in history. We’re essentially flying blind at 30,000 feet, and the people who usually sell us parachutes have decided they’d rather stay on the ground.

The insurance industry’s caution should be a wake-up call, but instead it’s being ignored in favor of moving fast and breaking things. That strategy works fine until the things that break are critical infrastructure, financial systems, or human lives.

The Regulatory Preemption Gambit: Federal Power Grab Disguised as Innovation Policy

Trump’s draft executive order to override state AI regulations isn’t really about federal versus state authority—it’s about creating a regulatory vacuum that benefits the AI industry at the expense of democratic oversight. The proposed DOJ AI litigation task force would systematically challenge any state that dares to impose meaningful constraints on AI development, using interstate commerce and First Amendment arguments as weapons.

The genius of this strategy is that it frames opposition to AI regulation as support for innovation and constitutional rights. Who could be against free speech and economic growth? But look closer and you’ll see something more sinister: the order explicitly targets states that require AI systems to alter ‘truthful outputs’—essentially arguing that AI companies have a constitutional right to deploy systems that generate harmful content without accountability.

The threat to withhold federal broadband funding from non-compliant states reveals the real game. This isn’t about constitutional principles; it’s about using federal leverage to prevent any meaningful constraints on AI development. States like California and Colorado have tried to implement basic transparency requirements—publish how you train models, report safety measures—and even these modest steps are deemed unacceptable.

What’s particularly telling is the order’s call for a ‘minimally burdensome national standard.’ Translation: a federal framework so weak it provides political cover for inaction while preventing states from implementing stronger protections. It’s regulatory capture disguised as regulatory clarity.

The tech industry has learned that it’s easier to capture one federal regulator than fifty state regulators. By preempting state action without providing meaningful federal oversight, they create the best of all worlds: the appearance of regulation with none of the substance. It’s a masterclass in using federalism as a shield rather than a principle.

Questions

  • If the AI industry has evolved beyond traditional competition, should we be regulating it like a utility rather than a collection of competing firms?
  • What happens when the technologies powering our economy become too risky for the insurance industry to cover—and should that tell us something about deployment speed?
  • Is the push for federal preemption of AI regulation actually about preventing any regulation at all, and what does that mean for democratic oversight of transformative technology?
