Signal/Noise
2025-12-10
While markets obsess over which LLM scores higher on benchmarks, the real AI story is playing out in two parallel universes: Google’s quiet conquest of everyday workflows through product integration, and extremists turning AI into a propaganda factory. Both reveal the same truth—AI’s value isn’t in raw capability, but in reaching the right humans at the right moment.
Google’s Stealth AI Takeover: Why LLM Leaderboards Are Missing the Point
Everyone’s watching the wrong game. While pundits debate whether Gemini 3.0 beats ChatGPT on reasoning benchmarks, Google is executing the most obvious AI strategy that somehow everyone else missed: putting AI where people already are.
The 30% user growth for Gemini isn’t about having the smartest model—it’s about friction reduction. When your AI lives inside Gmail, Maps, and Search, adoption becomes inevitable rather than intentional. Users don’t need to remember to open a separate ChatGPT tab; the AI is just there when they’re drafting emails or planning routes.
This reveals the fundamental misunderstanding driving most AI investments. Companies are building standalone AI products when they should be building AI into existing workflows. OpenAI created a destination; Google created an ambient layer. One requires behavior change, the other exploits existing behavior.
Sam Altman’s internal memo about “temporary economic headwinds” suggests OpenAI finally understands this dynamic. They’ve built the best restaurant in a food court while Google turned every grocery store into a kitchen. The restaurant might serve better food, but most people eat at home.
The strategic implication is profound: AI’s winner won’t be determined by model quality but by distribution density. Google doesn’t need the best AI—they need good enough AI in enough places that switching becomes impossible. Every Gmail suggestion and Maps query creates switching costs that OpenAI can’t match by making ChatGPT slightly smarter.
This is why Alphabet’s valuation discount makes sense as an opportunity rather than a warning. The market is pricing in a horse race between equivalent competitors when Google is actually playing a different sport entirely.
The Propaganda Engine: How Extremists Are Industrializing Hate with AI
Far-right groups have always been early technology adopters—from 1980s bulletin boards to Stormfront in 1995—because marginalized movements need leverage more than mainstream ones. Now they’re turning AI into a hate multiplier, and the implications go far beyond content moderation.
The creation of Hitler chatbots and AI-generated propaganda represents something qualitatively different from previous extremist tech adoption. Earlier technologies amplified human-created content; AI generates new content at scale. A neo-Nazi no longer needs writing skills or production resources—they need prompts.
This isn’t just about detecting deepfakes or moderating chatbots. It’s about the marginal cost of propaganda production approaching zero. When content creation costs nothing, attention becomes the only scarce resource, which incentivizes increasingly extreme messaging to break through the noise.
The historical pattern suggests current responses are inadequate. Every previous wave—from mailed newsletters to bulletin boards to websites—saw authorities playing catch-up with bans and regulations while extremists simply moved to new platforms or jurisdictions. The same dynamic is emerging with AI: Gab creates Hitler bots while mainstream platforms ban them, creating a fragmented landscape where extremist AI tools operate in parallel to mainstream ones.
But there’s a darker strategic element here. Extremist groups are essentially beta-testing AI propaganda techniques in low-stakes environments. The sophistication they develop will eventually leak into mainstream political messaging, corporate manipulation, and state propaganda. We’re watching the R&D phase of industrial-scale persuasion.
The real concern isn’t individual hate chatbots—it’s that extremists are pioneering AI persuasion techniques that will be adopted by anyone seeking to influence public opinion. They’re not just using AI; they’re teaching it to be more effective at changing minds.
Questions
- If AI distribution matters more than AI quality, why are investors still funding standalone AI companies instead of demanding AI integration strategies?
- When propaganda creation costs approach zero, does the concept of “truth” become purely about distribution power rather than factual accuracy?
- Will Google’s ambient AI strategy create the ultimate filter bubble where users never encounter information outside their existing Google ecosystem?
Past Briefings
OpenAI Deleted ‘Safely.’ NVIDIA Reports. Karpathy Is Still Learning
THE NUMBER: 6 — times OpenAI changed its mission in 9 years. The most recent edit deleted one word: safely. TL;DR Andrej Karpathy — the engineer who wrote the curriculum that trained a generation of developers, ran AI at Tesla, and helped found OpenAI — posted in December that he's never felt so behind as a programmer. Fourteen million people saw it. Tonight, NVIDIA reports Q4 fiscal 2026 earnings after market close: analysts expect $65.7 billion in revenue, up 67% year over year. The numbers will almost certainly land. What matters is what Jensen Huang says about the next two quarters to...
Feb 23, 2026
Altman lied about a handshake on camera. CrowdStrike fell 8%. Google just killed the $3,000 photo shoot.
Sam Altman told reporters he was "confused" when Narendra Modi grabbed his hand at the India AI Impact Summit. He said he "wasn't sure what was happening." The video, which has been watched by tens of millions of people, shows Altman looking directly at Dario Amodei before raising his fist. He knew exactly what was happening. He chose not to do it, and then he lied about it. On camera. In multiple interviews. With the footage playing on every screen behind him. That would be a minor character note in any other industry. In this one, it isn't. Because on...
Feb 20, 2026
We’re Building the Agentic Web Faster Than We’re Protecting It
Google's WebMCP gives agents structured access to every website. Anthropic's data shows autonomy doubling with oversight thinning. OpenAI's agent already drains crypto vaults. Google shipped working code Thursday that hands AI agents a structured key to every website on the internet. WebMCP, running in Chrome 146 Canary, lets sites expose machine-readable "Tool Contracts" so agents can book a flight, file a support ticket, or complete a checkout without parsing screenshots or scraping HTML. Early benchmarks show 67% less compute overhead than visual approaches. Microsoft co-authored the spec. The W3C is incubating it. This isn't a proposal. It's production software already...