Signal/Noise

2025-11-21

While everyone debates AI bubbles and state regulations, a fundamental shift is happening beneath the surface: the complete dissolution of boundaries between AI companies, creating a singular interconnected system where competition becomes collaboration, and individual corporate strategy becomes collective orchestration of the entire AI economy.

The Great AI Convergence: How Competition Became Collusion

The Microsoft-Nvidia-Anthropic deal announced this week isn’t just another partnership—it’s the latest evidence that the AI industry has evolved beyond competition into something resembling a single, distributed organism. Microsoft invests $5 billion in Anthropic while Anthropic commits to buying $30 billion in Microsoft compute. Nvidia invests in Anthropic while Anthropic commits to developing on Nvidia chips. It’s a perfect circle of mutual dependence that would make any antitrust lawyer’s head spin.

But this isn’t an anomaly—it’s the new normal. Alphabet owns DeepMind while Google invests in Anthropic. Amazon backs Anthropic while competing with it through its own models on Bedrock. OpenAI partners with Microsoft while Microsoft hedges with Anthropic. Every major player is simultaneously competitor, customer, supplier, and investor to every other player.
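To see how tangled the web already is, here is a toy sketch in Python. The deal list is just the handful of relationships named in this piece, with illustrative role labels; it is nowhere near a complete map of the industry. It treats the deals as a directed graph and prints every role each company ends up playing:

```python
from collections import defaultdict

# Illustrative edges only, taken from the deals described above:
# (actor, counterparty, role the actor plays toward the counterparty)
deals = [
    ("Microsoft", "Anthropic", "investor"),    # $5B investment
    ("Anthropic", "Microsoft", "customer"),    # $30B compute commitment
    ("Nvidia",    "Anthropic", "investor"),
    ("Anthropic", "Nvidia",    "customer"),    # develops on Nvidia chips
    ("Google",    "Anthropic", "investor"),
    ("Amazon",    "Anthropic", "investor"),
    ("Amazon",    "Anthropic", "competitor"),  # rival models on Bedrock
    ("Microsoft", "OpenAI",    "investor"),
    ("Microsoft", "Anthropic", "customer"),    # hedging with Anthropic models
]

# Collect every role each company plays toward any other player.
roles = defaultdict(set)
for actor, counterparty, role in deals:
    roles[actor].add(role)
    # The counterparty implicitly plays the mirror role: supplier to a
    # customer, portfolio company to an investor, competitor to a competitor.
    mirror = {"investor": "portfolio", "customer": "supplier",
              "competitor": "competitor"}[role]
    roles[counterparty].add(mirror)

for company, rs in sorted(roles.items()):
    print(f"{company}: {sorted(rs)}")
```

Even this cartoon version shows single firms wearing the investor, supplier, customer, and competitor hats simultaneously, which is the ‘distributed organism’ point in miniature.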

The strategic brilliance of this arrangement is that it makes traditional antitrust enforcement nearly impossible. There’s no single monopoly to break up when everyone owns everyone else. Instead of one company controlling AI, we have something far more sophisticated: a distributed monopoly where the entire industry functions as a single entity with aligned incentives. When Satya Nadella says ‘we are increasingly going to be customers of each other,’ he’s describing the architecture of the post-competitive economy.

This convergence solves the fundamental problem of AI development: the astronomical costs require risk-sharing that transcends traditional corporate boundaries. No single company can afford the full stack alone, so they’ve collectively created a system where success means everyone wins—and failure means everyone loses together. It’s not market consolidation; it’s market transcendence.

The Insurance Revolt: When Risk Becomes Uninsurable

While tech executives paint rosy pictures of AI’s future, insurers—the people whose literal job is pricing risk—are running for the exits. The growing reluctance to provide AI coverage isn’t just about technical uncertainty; it’s about the recognition that AI represents a fundamentally new category of systemic risk that traditional insurance models can’t handle.

Unlike traditional technology risks, AI failures don’t scale linearly. A faulty software update might crash some systems. But an AI model making decisions across millions of transactions, healthcare diagnoses, or financial trades can create cascading failures that spread faster than any human response. The potential for multibillion-dollar claims isn’t hypothetical—it’s inevitable in a system where AI touches everything.
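One way to see why underwriters balk is a back-of-the-envelope Monte Carlo sketch in Python. The probabilities and loss amounts below are invented for illustration, not actuarial inputs. It compares a book of independently failing transactions against one where a single shared AI model is the common failure mode; the expected loss is the same, but the worst year is not:

```python
import random

random.seed(0)
N_EXPOSED = 10_000   # insured transactions touched by the system (illustrative)
P_FAIL = 0.01        # failure probability (made up for illustration)
LOSS = 1.0           # loss per failed transaction, arbitrary units
TRIALS = 2_000       # simulated policy years

def independent_year() -> float:
    """Classic insurable risk: every transaction fails independently."""
    return LOSS * sum(random.random() < P_FAIL for _ in range(N_EXPOSED))

def common_mode_year() -> float:
    """One shared AI model: when it fails, the entire book fails at once."""
    return LOSS * N_EXPOSED if random.random() < P_FAIL else 0.0

for name, simulate in [("independent faults", independent_year),
                       ("common-mode AI failure", common_mode_year)]:
    losses = [simulate() for _ in range(TRIALS)]
    mean = sum(losses) / TRIALS
    print(f"{name}: mean annual loss={mean:7.1f}, worst year={max(losses):7.1f}")
```

Same mean, wildly different tail: pooling many independent risks is what makes insurance work, and a common-mode AI failure removes exactly that independence. That is the cascading-failure problem in actuarial terms.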

The insurance retreat reveals something crucial that the AI hype machine obscures: even sophisticated risk assessment professionals can’t confidently model AI’s potential for catastrophic failure. When the Warren Buffetts of the world won’t insure something, that tells you more about its real risk profile than any venture capital valuation.

This creates a paradox for AI adoption. Companies need insurance to deploy AI at scale, but insurers won’t provide coverage for systems whose failure modes they can’t understand or price. The result is that AI deployment is happening with dramatically less risk mitigation than any comparable technology in history. We’re essentially flying blind at 30,000 feet, and the people who usually sell us parachutes have decided they’d rather stay on the ground.

The insurance industry’s caution should be a wake-up call, but instead it’s being ignored in favor of moving fast and breaking things. That strategy works fine until the things that break are critical infrastructure, financial systems, or human lives.

The Regulatory Preemption Gambit: Federal Power Grab Disguised as Innovation Policy

Trump’s draft executive order to override state AI regulations isn’t really about federal versus state authority—it’s about creating a regulatory vacuum that benefits the AI industry at the expense of democratic oversight. The proposed DOJ AI litigation task force would systematically challenge any state that dares to impose meaningful constraints on AI development, using interstate commerce and First Amendment arguments as weapons.

The genius of this strategy is that it frames opposition to AI regulation as support for innovation and constitutional rights. Who could be against free speech and economic growth? But look closer and you’ll see something more sinister: the order explicitly targets states that require AI systems to alter ‘truthful outputs’—essentially arguing that AI companies have a constitutional right to deploy systems that generate harmful content without accountability.

The threat to withhold federal broadband funding from non-compliant states reveals the real game. This isn’t about constitutional principles; it’s about using federal leverage to prevent any meaningful constraints on AI development. States like California and Colorado have tried to implement basic transparency requirements—publish how you train models, report safety measures—and even these modest steps are deemed unacceptable.

What’s particularly telling is the order’s call for a ‘minimally burdensome national standard.’ Translation: a federal framework so weak it provides political cover for inaction while preventing states from implementing stronger protections. It’s regulatory capture disguised as regulatory clarity.

The tech industry has learned that it’s easier to capture one federal regulator than fifty state regulators. By preempting state action without providing meaningful federal oversight, they create the best of all worlds: the appearance of regulation with none of the substance. It’s a masterclass in using federalism as a shield rather than a principle.

Questions

  • If the AI industry has evolved beyond traditional competition, should we be regulating it like a utility rather than a collection of competing firms?
  • What happens when the technologies powering our economy become too risky for the insurance industry to cover—and should that tell us something about deployment speed?
  • Is the push for federal preemption of AI regulation actually about preventing any regulation at all, and what does that mean for democratic oversight of transformative technology?
