
Signal/Noise

2025-01-02

Setting aside today’s specific AI news, the strategic imperative remains clear: in an era where AI capabilities commoditize rapidly, the winners will be those who control the chokepoints—data, compute, or the contexts where decisions get made. The question isn’t who builds the best model, but who owns the infrastructure that makes models indispensable.

The Infrastructure Wars Heat Up

While everyone debates model capabilities, the real battle is being fought in the infrastructure layer. The companies positioning themselves as the picks-and-shovels providers—offering compute, data pipelines, and deployment infrastructure—are building the most defensible positions. Unlike models, which can be replicated or open-sourced, infrastructure creates genuine switching costs and network effects. When a company builds its AI workflows around your infrastructure, moving becomes more expensive with each additional integration point. This is why we’re seeing massive investments in specialized AI chips, edge computing networks, and developer tooling platforms. The strategic insight here is that in a world where model weights approach commodity status, controlling where those models run becomes the ultimate moat. Cloud providers understand this intuitively—they don’t care whether you use GPT-4, Claude, or Llama as long as you’re running inference on their hardware. The companies building AI-native infrastructure today are positioning themselves to tax every AI interaction tomorrow, regardless of which model actually processes the request.

Context Capture Becomes the New Land Grab

The most sophisticated AI plays aren’t about building better foundation models—they’re about capturing irreplaceable context within specific workflows. Consider why Microsoft’s Copilot strategy is brilliant: they’re not trying to build the best LLM, they’re embedding AI into the contexts where knowledge work actually happens. Your emails, documents, meetings, and code repositories become the training ground for increasingly personalized AI assistants. This context capture creates a different kind of lock-in than traditional software. When an AI system knows your communication patterns, project history, and decision-making context, switching to a competitor means starting from zero on the learning curve. The companies winning this game are those identifying the highest-value contexts—the places where humans make expensive decisions with incomplete information. Legal research, medical diagnosis, financial analysis, and strategic planning all represent contexts where AI augmentation is valuable enough to justify significant switching costs. The race isn’t to build general intelligence; it’s to become indispensable within specific, high-value decision-making contexts where the cost of being wrong exceeds the cost of the AI system itself.

The Attention Economy’s AI Reckoning

AI’s most underestimated impact might be its complete disruption of the attention economy. When content creation costs approach zero, human attention becomes the ultimate scarce resource—and the current advertising-based internet model breaks down entirely. We’re already seeing early signals: AI-generated content flooding social feeds, making organic reach even more challenging; AI assistants potentially replacing search behavior, threatening Google’s core business model; and personalized AI tutors competing directly with educational content creators. The strategic question becomes: in a world where anyone can generate infinite content, what makes humans choose to pay attention to yours? The answer increasingly lies in trust, community, and exclusive access to information or experiences that can’t be replicated by AI. This shift fundamentally changes the value proposition of media companies, social platforms, and content creators. Those building genuine communities and unique information advantages will thrive, while those competing purely on content volume or engagement optimization will find themselves in a race to the bottom against AI systems that can produce content faster and cheaper than any human ever could.

Questions

  • If AI capabilities commoditize rapidly, why are we still measuring success by model benchmarks rather than infrastructure control?
  • What happens to the venture capital model when the marginal cost of software development approaches zero?
  • Are we building AI systems to augment human decision-making, or are we training humans to become better prompts for AI systems?
