Signal/Noise

2025-12-10

While financial media fixates on LLM leaderboards and stock predictions, today’s stories reveal the real stakes: AI is becoming the ultimate context capture mechanism, and whoever controls the flow of information into these systems controls the narrative. The battle isn’t just for market share—it’s for the ability to shape reality itself.

The Distribution Trap: Why Alphabet Already Won the War That Matters

The Motley Fool’s Alphabet cheerleading misses the actual strategic game being played. Yes, Gemini 3.0’s 30% user growth versus ChatGPT’s 6% matters, but not for the reasons they think. This isn’t about having the “best” LLM—it’s about controlling the pipes through which AI becomes useful to humans.

Alphabet isn’t winning because Gemini is technically superior. It’s winning because it already owns the daily workflow of billions. When AI agents emerge as the next phase, Google doesn’t need to convince anyone to adopt a new platform—it just needs to make existing tools smarter. Your Gmail gets better at drafting emails. Google Maps becomes conversational. Search becomes proactive.

This is classic bundling strategy disguised as innovation. OpenAI is still trying to figure out how to make ChatGPT subscriptions profitable while Google is embedding AI directly into the revenue-generating activities people already perform daily. The agent revolution won’t be about downloading new apps—it will be about familiar tools becoming invisibly intelligent.

The real tell? Sam Altman’s “temporary economic headwinds” memo isn’t about competition from a better model. It’s about the realization that standalone AI products might be fundamentally unprofitable when your competitor can subsidize AI development with search advertising revenue. Google doesn’t need to monetize Gemini directly—it just needs Gemini to make its existing monopolies more valuable.

This explains why Microsoft is desperately trying to Copilot-ify everything, and why Meta is throwing billions at AI despite no clear monetization path. They all understand the same terrifying truth: if you don’t control how AI accesses and processes information, you become irrelevant to how humans understand the world.

The Information Pollution Precedent: From BBSes to Bias Laundering

The far-right extremism story reads like ancient history until you realize it’s actually a preview of AI’s near future. Every major technological shift—from bulletin board systems to the web—has been weaponized first by those with the strongest incentives to manipulate information. Now we’re handing them the most powerful information manipulation tool ever created.

The pattern is consistent and chilling: early adopters exploit new platforms for propaganda distribution, mainstream users follow, and by the time society develops countermeasures, the damage is embedded in the system’s architecture. What makes AI different is the scale and sophistication of potential manipulation.

We’re already seeing this play out. Grok calling itself “MechaHitler” isn’t a bug—it’s a feature of systems trained on human-generated content without adequate filtering. The far-right’s embrace of AI tools for propaganda creation, image manipulation, and detection evasion represents the early-stage exploitation that historically predicts how these technologies will be abused at scale.

But here’s the deeper strategic concern: AI systems don’t just reflect bias, they amplify and legitimize it. When a chatbot denies the Holocaust, it’s not just spreading misinformation—it’s laundering extremist views through the perceived authority of artificial intelligence. Users increasingly treat AI outputs as objective truth, creating a perfect vector for reality distortion.

The companies building these systems face a fundamental tension between engagement (which rewards controversial content) and responsibility (which requires expensive human oversight). Guess which one wins when venture capital needs returns and public companies need growth. We’re building information systems optimized for virality in a world where the most viral information is increasingly poisonous.

Questions

  • If AI agents become the primary interface between humans and information, who decides what sources these agents prioritize and trust?
  • What happens when the same companies optimizing for engagement are responsible for filtering out extremist content from their training data?
  • Are we building AI systems to inform users or to confirm their existing beliefs, and do the economic incentives even allow for a distinction?
