Signal/Noise

2025-12-10

While markets obsess over which LLM scores higher on benchmarks, the real AI story is playing out in two parallel universes: Google’s quiet conquest of everyday workflows through product integration, and extremists turning AI into a propaganda factory. Both reveal the same truth—AI’s value isn’t in raw capability, but in reaching the right humans at the right moment.

Google’s Stealth AI Takeover: Why LLM Leaderboards Are Missing the Point

Everyone’s watching the wrong game. While pundits debate whether Gemini 3.0 beats ChatGPT on reasoning benchmarks, Google is executing the most obvious AI strategy that somehow everyone else missed: putting AI where people already are.

The 30% user growth for Gemini isn’t about having the smartest model—it’s about friction reduction. When your AI lives inside Gmail, Maps, and Search, adoption becomes inevitable rather than intentional. Users don’t need to remember to open a separate ChatGPT tab; the AI is just there when they’re drafting emails or planning routes.

This reveals the fundamental misunderstanding driving most AI investments. Companies are building standalone AI products when they should be building AI into existing workflows. OpenAI created a destination; Google created an ambient layer. One requires behavior change, the other exploits existing behavior.

Sam Altman’s internal memo about “temporary economic headwinds” suggests OpenAI finally understands this dynamic. They’ve built the best restaurant in a food court while Google turned every grocery store into a kitchen. The restaurant might serve better food, but most people eat at home.

The strategic implication is profound: the AI race won't be won on model quality but on distribution density. Google doesn't need the best AI—it needs good-enough AI in enough places that switching becomes impossible. Every Gmail suggestion and Maps query creates switching costs that OpenAI can't match by making ChatGPT slightly smarter.

This is why Alphabet’s valuation discount makes sense as an opportunity rather than a warning. The market is pricing in a horse race between equivalent competitors when Google is actually playing a different sport entirely.

The Propaganda Engine: How Extremists Are Industrializing Hate with AI

Far-right groups have always been early technology adopters—from 1980s bulletin boards to Stormfront in 1995—because marginalized movements need leverage more than mainstream ones. Now they’re turning AI into a hate multiplier, and the implications go far beyond content moderation.

The creation of Hitler chatbots and AI-generated propaganda represents something qualitatively different from previous extremist tech adoption. Earlier technologies amplified human-created content; AI generates new content at scale. A neo-Nazi no longer needs writing skills or production resources—they need prompts.

This isn’t just about detecting deepfakes or moderating chatbots. It’s about the fundamental economics of propaganda production approaching zero. When content creation costs nothing, attention becomes the only scarce resource, which incentivizes increasingly extreme messaging to break through the noise.

The historical pattern suggests current responses are inadequate. Every previous wave—from mailed newsletters to bulletin boards to websites—saw authorities playing catch-up with bans and regulations while extremists simply moved to new platforms or jurisdictions. The same dynamic is emerging with AI: Gab creates Hitler bots while mainstream platforms ban them, creating a fragmented landscape where extremist AI tools operate in parallel to mainstream ones.

But there’s a darker strategic element here. Extremist groups are essentially beta-testing AI propaganda techniques in low-stakes environments. The sophistication they develop will eventually leak into mainstream political messaging, corporate manipulation, and state propaganda. We’re watching the R&D phase of industrial-scale persuasion.

The real concern isn’t individual hate chatbots—it’s that extremists are pioneering AI persuasion techniques that will be adopted by anyone seeking to influence public opinion. They’re not just using AI; they’re teaching it to be more effective at changing minds.

Questions

  • If AI distribution matters more than AI quality, why are investors still funding standalone AI companies instead of demanding AI integration strategies?
  • When propaganda creation costs approach zero, does the concept of “truth” become purely about distribution power rather than factual accuracy?
  • Will Google’s ambient AI strategy create the ultimate filter bubble where users never encounter information outside their existing Google ecosystem?
