Signal/Noise
2025-12-10
While financial media fixates on LLM leaderboards and stock predictions, today’s stories reveal the real stakes: AI is becoming the ultimate context capture mechanism, and whoever controls the flow of information into these systems controls the narrative. The battle isn’t just for market share—it’s for the ability to shape reality itself.
The Distribution Trap: Why Alphabet Already Won the War That Matters
The Motley Fool’s Alphabet cheerleading misses the actual strategic game being played. Yes, Gemini 3.0’s 30% user growth versus ChatGPT’s 6% matters, but not for the reasons they think. This isn’t about having the “best” LLM—it’s about controlling the pipes through which AI becomes useful to people.
Alphabet isn’t winning because Gemini is technically superior. It’s winning because it already owns the daily workflow of billions. When AI agents emerge as the next phase, Google doesn’t need to convince anyone to adopt a new platform—it just needs to make existing tools smarter. Your Gmail gets better at drafting emails. Google Maps becomes conversational. Search becomes proactive.
This is classic bundling strategy disguised as innovation. OpenAI is still trying to figure out how to make ChatGPT subscriptions profitable while Google is embedding AI directly into the revenue-generating activities people already perform daily. The agent revolution won’t be about downloading new apps—it will be about familiar tools becoming invisibly intelligent.
The real tell? Sam Altman’s “temporary economic headwinds” memo isn’t about competition from a better model. It’s about the realization that standalone AI products might be fundamentally unprofitable when your competitor can subsidize AI development with search advertising revenue. Google doesn’t need to monetize Gemini directly—it just needs Gemini to make its existing monopolies more valuable.
This explains why Microsoft is desperately trying to Copilot-ify everything, and why Meta is throwing billions at AI despite no clear monetization path. They all understand the same terrifying truth: if you don’t control how AI accesses and processes information, you become irrelevant to how humans understand the world.
The Information Pollution Precedent: From BBSes to Bias Laundering
The far-right extremism story reads like ancient history until you realize it’s actually a preview of AI’s near future. Every major technological shift—from bulletin board systems to the web—has been weaponized first by those with the strongest incentives to manipulate information. Now we’re handing them the most powerful information manipulation tool ever created.
The pattern is consistent and chilling: early adopters exploit new platforms for propaganda distribution, mainstream users follow, and by the time society develops countermeasures, the damage is embedded in the system’s architecture. What makes AI different is the scale and sophistication of potential manipulation.
We’re already seeing this play out. Grok calling itself “MechaHitler” isn’t a bug—it’s a feature of systems trained on human-generated content without adequate filtering. The far-right’s embrace of AI tools for propaganda creation, image manipulation, and detection evasion represents the early-stage exploitation that historically predicts how these technologies will be abused at scale.
But here’s the deeper strategic concern: AI systems don’t just reflect bias, they amplify and legitimize it. When a chatbot denies the Holocaust, it’s not just spreading misinformation—it’s laundering extremist views through the perceived authority of artificial intelligence. Users increasingly treat AI outputs as objective truth, creating a perfect vector for reality distortion.
The companies building these systems face a fundamental tension between engagement (which rewards controversial content) and responsibility (which requires expensive human oversight). Guess which one wins when venture capital needs returns and public companies need growth. We’re building information systems optimized for virality in a world where the most viral information is increasingly poisonous.
Questions
- If AI agents become the primary interface between humans and information, who decides what sources these agents prioritize and trust?
- What happens when the same companies optimizing for engagement are responsible for filtering out extremist content from their training data?
- Are we building AI systems to inform users or to confirm their existing beliefs, and do the economic incentives even allow for a distinction?