Signal/Noise
2025-11-20
While everyone debates whether AI is a bubble, three quiet power moves reveal the real game: corporate control is consolidating around compute infrastructure, government pushback is finally materializing, and the human resistance to AI slop is creating unexpected market dynamics that could reshape who wins and loses in the next phase.
The Great Compute Consolidation: Infrastructure as the New Oil
Microsoft and Nvidia’s reported $15 billion investment in Anthropic isn’t about backing a ChatGPT competitor—it’s about locking down the entire AI supply chain. Anthropic commits to buying $30 billion in Microsoft compute powered by Nvidia chips. This isn’t venture capital; it’s vertical integration disguised as partnership.
Meanwhile, Nokia splits off its AI business weeks after Nvidia’s $1 billion 6G investment, and data center construction companies like Dycom see earnings soar. The pattern is clear: while everyone argues about AI capabilities, the smart money is buying the picks and shovels—and then buying the miners too.
Yann LeCun’s departure from Meta to start his own ‘world models’ company, with Meta as a partner, signals another dimension of this consolidation. The AI godfathers aren’t just switching teams; they’re creating captive R&D shops funded by Big Tech parents. It’s the ultimate regulatory arbitrage—maintain the appearance of competition while ensuring all innovation flows back to the same handful of players.
This isn’t the messy startup ecosystem that produced the internet. It’s more like the oil industry’s evolution into a handful of vertically integrated giants. The difference is that instead of controlling refineries and gas stations, these companies control compute infrastructure and model development. Once that infrastructure is locked down, switching costs become prohibitive.
Government Awakens: The Regulatory Counterstrike Begins
Trump’s draft executive order targeting state AI laws isn’t just political theater—it’s the opening salvo in a regulatory turf war that will define AI’s future. The order directs DOJ to sue states that pass AI regulations, positioning federal inaction as a strategic choice to benefit Silicon Valley donors.
This comes as the broader regulatory landscape shifts dramatically. The Cambridge study finding that 51% of novelists believe AI will replace them entirely isn't just a creative-industries story—it's evidence that AI's societal impact has moved beyond tech circles into mainstream political consciousness. When half the people in a creative profession think they're about to be automated out of existence, that creates political pressure.
The European response is predictably more aggressive, but the real tell is how these companies are preparing for regulatory fragmentation. Nokia’s business split and the various corporate restructurings happening now suggest companies are positioning for a world where they’ll need to comply with radically different regulatory regimes in different markets.
What’s fascinating is how this regulatory awakening intersects with the infrastructure consolidation. Companies that control the underlying compute infrastructure can more easily comply with varying regulatory requirements because they control the entire stack. It’s another reason why the current consolidation isn’t just about market power—it’s about regulatory resilience.
The Human Immune Response: Why ‘AI Slop’ Might Save Us
Here’s the contrarian take everyone’s missing: the rise of ‘AI slop’—low-quality, mass-produced AI content—might actually be creating the market dynamics that prevent AI domination. Marketers may dismiss concerns about AI-generated garbage flooding platforms, but consumer behavior tells a different story.
Pinterest adding tools to filter out AI content, BeReal reporting that 47% of Gen Z dislikes AI-generated content, and the broader backlash against ‘AI slop’ all suggest something more profound than typical tech adoption curves. Humans seem to have developed an immune response to synthetic content faster than anyone predicted.
This creates a weird inversion: the companies pushing hardest on AI content generation might be training consumers to value human-created content more highly. It’s like how auto-tune made acoustic performances more valuable, or how digital photography made film photography cool again.
The business implications are massive. Companies that can credibly signal ‘human-made’ or ‘AI-free’ may command premium pricing, while those flooding the market with synthetic content face a race to the bottom. This doesn’t mean AI fails—it means the successful AI applications will be those that augment rather than replace human creativity.
Target’s integration with ChatGPT for shopping represents the sweet spot: using AI for functionality (finding products) rather than content creation. The winning formula seems to be AI as a backend utility, not a frontend replacement for human judgment.
Questions
- If infrastructure control is the real AI moat, what happens when China finishes building its parallel compute stack?
- Are we watching the birth of AI guilds—exclusive communities of verified human creators commanding premium prices?
- Which breaks first: the regulatory pressure to break up Big Tech, or Big Tech’s ability to maintain the illusion of competition?