Signal/Noise

2025-12-05

While AI companies race to build bigger models and grab headlines with trillion-dollar valuations, the real action is happening in the unglamorous business of making AI actually work reliably at scale. The gap between AI demos and production reality is creating a hidden infrastructure play that will determine which companies survive the inevitable consolidation.

The Great AI Reality Check: When Silicon Valley Dreams Meet Production Nightmares

Beneath the venture capital euphoria and billion-dollar AI startups lies an uncomfortable truth: most AI systems are brittle, unreliable, and nowhere near production-ready. Anthropic’s internal research reveals that even their own engineers can only “fully delegate” 0-20% of their work to Claude, despite claiming massive productivity gains. Meanwhile, coding agents—supposedly the poster child for AI automation—are failing spectacularly when faced with real-world complexity. They break when context windows overflow, fumble basic refactoring, and lack the operational awareness to handle production environments. This isn’t a temporary growing pain; it’s a fundamental architecture problem. The AI industry has optimized for demo-ability over deployability, creating systems that wow in controlled settings but crumble under real-world pressure. The companies that recognize this gap and build boring, reliable infrastructure will capture disproportionate value as the market matures. Look for businesses focused on data quality, model reliability, and operational monitoring—the plumbing that makes AI actually work.

The Data Gold Rush: How Training Data Became the New Oil (And Why It’s Getting Dirty)

The AI training data market has exploded from virtually nothing to a multi-billion dollar industry, with companies like Micro1 crossing $100M ARR in eight months by connecting domain experts with AI labs hungry for high-quality human feedback. But this gold rush is creating its own problems. Academic researchers are warning of a “slop problem”—low-quality, AI-generated content polluting training datasets and degrading model performance. Meanwhile, the race for specialized human trainers has created a new gig economy where Harvard professors earn $100/hour grading AI outputs. This isn’t sustainable. As models become more capable, the bar for useful human feedback rises exponentially. Companies are already struggling to find experts who can meaningfully improve frontier models. The winning strategy isn’t just accumulating more data—it’s building systems that can identify and filter high-quality training signals while maintaining data integrity at scale. The firms that solve this curation problem will control the chokepoint between raw human expertise and AI capability.

The Platform Wars Are Over Before They Started

While OpenAI panics about ChatGPT’s “code red” competitive situation and races to build AI agents, the real platform battle is being won by the infrastructure layer. Nvidia’s position remains unassailable not because of GPU performance, but because they control the entire stack from silicon to software. Their CUDA ecosystem creates switching costs that make even trillion-dollar competitors think twice about alternatives. Meanwhile, Google’s Gemini 3 launch signals a different strategy: embedding AI so deeply into existing workflows that users never have to choose a “primary” AI assistant. This isn’t about building the best chatbot; it’s about becoming invisible infrastructure. Meta’s poaching of Apple’s top designers reveals another angle—the winners will be companies that make AI feel like a natural extension of existing tools rather than a separate application. The consumer AI platform war was decided before it began: the platforms that already own distribution (Google, Apple, Microsoft) will win by making AI a feature, not a product.

Questions

  • If AI coding agents can’t handle production complexity, what does this mean for the $7 trillion infrastructure buildout everyone is betting on?
  • When training data quality becomes the limiting factor, do we end up with a few AI monopolies controlling the best datasets?
  • Is the current AI bubble actually two bubbles—one for capabilities that will deflate, and another for infrastructure that will grow?

Past Briefings

Mar 18, 2026

Bill Gurley Says the AI Bubble Is About to Burst. Travis Kalanick’s Timing Says He’s Right.

THE NUMBER: $300 billion — HSBC's estimate of cumulative cash burn by foundational AI model companies through 2030. Bill Gurley sat on Uber's board while it burned $2 billion a year and says it gave him "high anxiety." OpenAI and Anthropic make Uber's bonfire look like a birthday candle. "God bless them," Gurley told CNBC. "It's a scary way to run a company." Travis Kalanick showed up on the All-In podcast this week with a new robotics venture called Atoms and opinions about who's winning the autonomy race. That's the headline most people caught. But the deeper signal is the...

Mar 17, 2026

Anthropic Is Winning the Product War. The $575 Billion Question Is Whether Anyone Can Afford to Keep Fighting

THE NUMBER: 12x — For every dollar the hyperscalers earn from AI today, they're spending twelve dollars building more capacity. That's $575 billion in capex this year. Alphabet just issued a century bond — the first by a tech company since Motorola in 1997 — to fund it. The debt matures in 2126. The chips it buys will be obsolete by 2029. Anthropic now wins 70% of new enterprise deals in direct matchups with OpenAI, according to Ramp's March 2026 AI Index. Claude Code generates $2.5 billion in annualized revenue. OpenAI's Codex manages $1 billion. OpenAI's enterprise share dropped from...

Mar 16, 2026

Chamath Says Your Portfolio Is Worth 75% Less Than You Think. Karpathy’s Data Suggests He’s Right.

THE NUMBER: 60-80% — the share of a typical equity valuation derived from terminal value. That's the portion of every stock price that assumes competitive advantages persist for a decade or more. Chamath Palihapitiya just argued that AI makes that assumption unpriceable. If he's even half right, the math doesn't bend. It breaks. Chamath Palihapitiya posted a note this weekend titled "The Collapse of Terminal Value" that should be required reading for anyone who allocates capital — including the capital of their own career. His thesis: AI accelerates disruption so fast that no company can credibly project cash flows beyond five...
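The "60-80% terminal value" claim falls straight out of standard two-stage DCF math. Here is a minimal sketch, using a Gordon-growth terminal value with purely illustrative inputs (an 8% near-term growth rate, a 2.5% perpetual growth rate, a 9% discount rate, a five-year forecast horizon; none of these figures come from the briefing):

```python
# Toy two-stage DCF: how much of a valuation sits in the terminal value?
# All inputs are illustrative assumptions, not figures from the briefing.

def dcf_terminal_share(cf0: float, growth: float, terminal_growth: float,
                       discount: float, years: int = 5) -> float:
    """Return the fraction of total present value contributed by terminal value."""
    # Present value of the explicit forecast period.
    explicit_pv = sum(
        cf0 * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )
    # Gordon-growth terminal value at the end of the horizon, discounted back.
    final_cf = cf0 * (1 + growth) ** years
    terminal_value = final_cf * (1 + terminal_growth) / (discount - terminal_growth)
    terminal_pv = terminal_value / (1 + discount) ** years
    return terminal_pv / (explicit_pv + terminal_pv)

share = dcf_terminal_share(cf0=100, growth=0.08, terminal_growth=0.025,
                           discount=0.09, years=5)
print(f"{share:.0%} of value sits in the terminal value")  # ~76%
```

With these unremarkable assumptions, roughly three quarters of the valuation lives beyond year five, which is exactly the 60-80% band the briefing cites, and why Chamath's argument about unpriceable terminal values, if right, breaks the math rather than bending it.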