Signal/Noise
2025-12-01
While AI cheerleaders trumpet another milestone—ChatGPT’s 1,000-day anniversary—the real story is a fundamental shift in the nature of power itself. We’re witnessing the emergence of ‘agency warfare,’ where the ability to deploy autonomous agents at scale becomes the new determinant of competitive advantage, replacing traditional moats of capital, data, or even talent.
The Death of Human-Scale Competition
ChatGPT’s three-year milestone obscures a more profound transformation: we’re entering an era where competitive advantage flows not from what you know or own, but from how many autonomous agents you can deploy. The numbers tell the story—ChatGPT alone handles 50 million shopping queries daily, while companies like Allstate process 400,000 customer interactions through AI without human intervention. This isn’t just automation; it’s the emergence of what we might call ‘agency warfare.’
Traditional business strategy assumed human-scale decision-making and execution. But when Anthropic’s Claude Code can execute cyber-attacks ‘largely without human intervention at scale,’ or when AI agents can complete 30-hour coding marathons to build entire applications, we’re no longer competing on human terms. The math is simple: a company that can deploy 100,000 AI agents against your 50 human customer service reps isn’t just more efficient—it’s playing a different game entirely.
What’s fascinating is how this plays out across industries. Xiaomi’s CEO predicts humanoid robots will dominate factory floors within five years, while agricultural robotics grows at 24% annually as farms face structural labor shortages. The pattern repeats: wherever human bandwidth becomes the constraint, agents rush in to fill the void. This isn’t displacement—it’s scale transformation.
The uncomfortable truth is that most companies are still thinking in terms of ‘AI assistance’ when they should be thinking in terms of ‘AI multiplication.’ The winners won’t be those who make their humans slightly more productive, but those who realize that human bottlenecks—not just inefficiencies—are now optional.
The Authenticity Wars Begin
Behind the scenes of AI’s triumphant march, a shadow war is brewing over something far more fundamental than efficiency: truth itself. The same week that showcased AI’s growing capabilities, we saw the darker implications. AI toys meant for children delivered sexually explicit content. Jorja Smith’s record label battles an AI clone of its artist. Prison phone calls train AI models to predict future crimes. Each incident points to the same underlying crisis: as AI becomes capable of perfect mimicry, authenticity becomes both more valuable and more vulnerable.
Consider the paradox facing ElevenLabs, now valued at $6.6 billion for creating voices so convincing ‘they could fool your mother.’ Its success depends on eliminating the gap between real and synthetic, yet its survival depends on maintaining boundaries around what gets cloned and how. The company has created seven categories of ‘no-go’ voices and employs human moderators alongside AI to police misuse—essentially building a business on perfecting deception while policing it.
This authenticity crisis extends far beyond voice cloning. When 78% of IT job postings now require AI skills, we’re not just changing skill requirements—we’re changing what human expertise means. Business schools are overhauling curricula not just to teach AI tools, but to teach students how to validate AI output, question system logic, and maintain human judgment in an age of synthetic intelligence.
The real value isn’t in building better AI—it’s in building systems that can reliably distinguish between human and artificial, between authentic and synthetic, between trustworthy and manipulated. Companies that solve this authenticity problem won’t just have a product advantage; they’ll have a civilization-level moat.
The Infrastructure Reality Check
While AI evangelists obsess over model capabilities, the real constraint isn’t intelligence—it’s infrastructure. The numbers are staggering: $2.8 trillion in forecasted AI datacenter spending by 2030, equivalent to the entire GDP of Canada. Individual training runs now cost $100 billion, requiring the energy output of nine nuclear reactors. This isn’t sustainable innovation; it’s a bubble built on the assumption that throwing compute at the problem will eventually yield artificial general intelligence.
The cracks are already showing. Despite massive investment, OpenAI faces lawsuits over AI systems encouraging suicide, while ‘AI winter’ predictions multiply. Even OpenAI’s Sam Altman admits they need quantum leaps in efficiency to avoid hitting physical limits. Meanwhile, companies like DeepSeek are proving that smarter architectures can match or exceed larger models with a fraction of the resources—suggesting much of the current spending is fundamentally misdirected.
More tellingly, we’re seeing a geographic arbitrage emerge around infrastructure costs. European AI startups like Black Forest Labs raise $258 million precisely because they can access different cost structures and regulatory environments than Silicon Valley giants. ByteDance challenges Alibaba not through superior algorithms but through more efficient infrastructure deployment.
The real competition isn’t who can build the smartest AI—it’s who can build sustainable AI infrastructure that doesn’t require the GDP of small nations to operate. The companies solving the efficiency problem rather than the capability problem may find themselves inheriting the entire market when the current spending binge inevitably hits physical and financial limits.
Questions
- If competitive advantage flows from agent deployment rather than human capability, what happens to companies built around human expertise?
- When authenticity becomes both more valuable and more vulnerable, who becomes the arbiter of truth in human-AI interactions?
- Are we building the infrastructure for AI dominance or AI collapse?