Signal/Noise
2025-12-26
As 2025 closes, the AI narrative has shifted from raw model capability to a multi-front battle for control over the entire AI stack. While the proliferation of ‘open’ models attempts to commoditize the base layer, the real strategic plays are centered on owning proprietary user context and, increasingly, on nation-states asserting digital sovereignty over critical AI infrastructure, creating new moats and fragmenting the global landscape.
The ‘Open’ AI Trojan Horse: Commoditizing Models to Control the Stack
The drumbeat of ‘open source’ AI continues, with new, increasingly capable models landing in public repositories seemingly every other week, often backed by consortiums. The latest, ‘Atlas-7B,’ released by a well-funded coalition of startups and cloud providers, claims near-SOTA performance across several benchmarks. On the surface, this looks like a win for democratization, leveling the playing field for smaller players. Dig deeper, and it’s a strategic masterstroke designed to accelerate the commoditization of the model layer itself.

The true beneficiaries aren’t necessarily the developers of these models, but the infrastructure providers and application builders. By making powerful models cheap and accessible, the value shifts up and down the stack. Cloud giants, who often fund or host these ‘open’ efforts, gain by driving demand for their specialized compute and storage. They’re happy to give away the gold if they can sell all the picks and shovels at a premium. The real money isn’t in developing the general-purpose model anymore; it’s in running inference at scale and, more critically, in building proprietary applications that leverage unique, context-rich data. The ‘open’ model becomes a loss leader: a high-quality commodity that creates an illusion of choice while subtly funneling users toward specific ecosystems.

For startups, this means the barrier to entry for model development is lower, but the barrier to sustainable differentiation has moved. Their ‘secret sauce’ can no longer be the model itself; it has to be the unique data, workflow integration, or user experience they build on top. It’s a classic commodity trap for anyone whose core offering is just a foundational model, however impressive.
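The "give away the gold, sell the picks and shovels" dynamic can be made concrete with a back-of-envelope calculation. This is a minimal sketch with invented placeholder numbers, not any vendor's actual pricing or volume; it only illustrates why a free model layer can still anchor a large inference business.

```python
# Back-of-envelope sketch of the economics described above.
# All figures are hypothetical placeholders, not vendor pricing.

def yearly_revenue(tokens_per_day: float, price_per_million: float) -> float:
    """Annualized revenue from serving inference at a per-token price."""
    return tokens_per_day * 365 * price_per_million / 1_000_000

# A commoditized open-weights model sold near cost...
model_license_revenue = 0.0  # 'open' weights: given away

# ...versus the host selling the compute underneath it.
inference_revenue = yearly_revenue(
    tokens_per_day=5e9,      # hypothetical platform-wide daily volume
    price_per_million=0.50,  # hypothetical blended $ per 1M tokens
)

print(f"Model layer:     ${model_license_revenue:,.0f}/yr")
print(f"Inference layer: ${inference_revenue:,.0f}/yr")
```

Even at a hypothetical $0.50 per million tokens, the recurring inference line dwarfs the zero-revenue model line, which is the whole point of the loss-leader strategy described above.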
The Silent War for Your Digital Soul: Why AI’s True Battleground is Context, Not Compute
While the industry remains fixated on benchmark scores and new model architectures, a far more insidious and impactful battle is quietly raging: the war for human context. Today saw a raft of minor updates from the usual suspects – new ‘smart’ features in Google Workspace, deeper AI integration in Microsoft 365 Copilot, Apple’s rumored on-device AI for ‘proactive’ assistance, and Amazon’s continued push for AI-infused shopping and smart home experiences. None of these were headline-grabbing model releases, yet collectively, they represent the true strategic play of the decade.

These companies aren’t just building better chatbots; they’re embedding AI as an invisible, omnipresent layer that processes and leverages every scrap of your digital life. Your emails, calendar, browsing history, purchase patterns, health data, location, and even conversational nuances are being used to build an increasingly comprehensive ‘digital twin’ of you. This isn’t about general intelligence; it’s about personalized utility, creating lock-in mechanisms so subtle you barely perceive them. The AI isn’t just generating text; it’s anticipating your needs, optimizing your schedule, recommending purchases, and even drafting your communications based on historical context. This makes their platforms indispensable, not because of raw computational power, but because they understand you better than any competitor can.

The attention economy has evolved: it’s no longer just about capturing eyeballs, but about capturing the entire context of your digital existence. The company that owns your context owns your attention, your workflow, and ultimately, your choices. This is the Wall-E future arriving, not with robots, but with an invisible AI butler that knows you intimately and makes leaving its ecosystem feel like losing a limb.
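The ‘digital twin’ accumulation dynamic can be sketched as a toy model: each product surface contributes signals to one shared profile, and the switching cost grows with both the number of surfaces and the depth of signal on each. The surface names, signals, and cost formula below are invented for illustration; no vendor's actual system is implied.

```python
# Hypothetical sketch of context accumulation across product surfaces.
# All names, signals, and the cost heuristic are illustrative only.
from collections import defaultdict

def build_profile(events):
    """Fold raw events from many product surfaces into one context profile."""
    profile = defaultdict(list)
    for surface, signal in events:
        profile[surface].append(signal)
    return dict(profile)

def switching_cost(profile):
    """Toy proxy: lock-in scales with signal depth times surface breadth."""
    total_signals = sum(len(signals) for signals in profile.values())
    return total_signals * len(profile)

events = [
    ("calendar", "weekly 1:1 at 10am"),
    ("email", "drafts in a terse style"),
    ("shopping", "reorders coffee monthly"),
    ("location", "commutes Tue-Thu"),
    ("email", "ignores newsletters"),
]
profile = build_profile(events)
print(switching_cost(profile))  # grows as surfaces and signals accumulate
```

The design point is the multiplication: adding one more surface raises the cost of leaving more than adding one more signal to an existing surface, which is why the same assistant keeps appearing in mail, calendar, shopping, and maps rather than getting deeper in just one.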
Borders in the Cloud: The Geopolitical Scramble to Nationalize AI
Today’s news included announcements from several mid-tier nations (e.g., Brazil, Indonesia, South Korea) detailing new initiatives to fund ‘sovereign AI’ models and establish domestic compute infrastructure, often coupled with stricter data residency and auditing requirements. This isn’t just about economic development; it’s a clear signal of the accelerating geopolitical fragmentation of the AI landscape.

As AI transitions from a niche technology to critical national infrastructure – impacting everything from defense and cybersecurity to public services and economic competitiveness – nations are growing increasingly wary of relying on foreign-controlled models and data centers. The rhetoric often centers on ethics, privacy, and national security, but the underlying motivation is control. Governments want to ensure their data isn’t processed by foreign algorithms, that their ‘national values’ are embedded in their AI, and that they have a domestic industry capable of developing and maintaining these systems. This creates a complex web of regulatory arbitrage and protectionism. While it might slow the pace of global standardization and create inefficiencies for multinational corporations, it also opens up significant opportunities for local AI champions and for companies that can navigate these increasingly complex, localized compliance requirements.

The dream of a borderless digital commons is increasingly giving way to a reality where AI, like other strategic resources, is being nationalized. The ‘AI Act’ and similar regulatory frameworks are just the tip of the iceberg; the deeper trend is a global scramble to establish digital sovereignty, creating distinct AI ecosystems with their own rules, data, and even foundational models. This means the future of AI isn’t a single, unified trajectory, but a divergent path across different geopolitical blocs, each vying for technological independence.
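For a multinational deployment, data residency mandates reduce to a routing constraint: every request must be served from a region the user's jurisdiction permits. The sketch below illustrates that constraint with entirely invented country codes, region names, and rules; real residency law is far messier, and this is only the shape of the engineering problem, not a compliance implementation.

```python
# Hypothetical sketch of residency-constrained request routing.
# Country rules and region names are invented for illustration.

RESIDENCY_RULES = {
    "BR": {"allowed_regions": {"sa-east"}, "requires_audit_log": True},
    "ID": {"allowed_regions": {"ap-southeast"}, "requires_audit_log": True},
    "KR": {"allowed_regions": {"ap-northeast"}, "requires_audit_log": False},
}

def route_request(user_country: str, candidate_regions: set) -> str:
    """Pick a serving region that satisfies the user's residency rules."""
    rules = RESIDENCY_RULES.get(user_country)
    if rules is None:
        # No sovereign mandate on record: any candidate region will do.
        return sorted(candidate_regions)[0]
    legal = candidate_regions & rules["allowed_regions"]
    if not legal:
        # The fragmentation cost: capacity exists, but none of it is usable.
        raise ValueError(f"no compliant region for {user_country}")
    return sorted(legal)[0]

print(route_request("BR", {"us-east", "sa-east", "eu-west"}))  # sa-east
```

The failure branch is where the ‘tech iron curtain’ question bites: a provider with abundant global capacity can still be legally unable to serve a market it has no in-country region for.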
Questions
- If foundational models become a true commodity, where do startups find their defensible moats, and will this further centralize power with those who already own vast user context?
- As AI becomes an invisible, embedded layer, are we sleepwalking into a future where genuine user agency is subtly eroded by algorithms designed to anticipate and optimize our lives?
- Will the nationalization of AI infrastructure lead to a ‘tech iron curtain,’ stifling global innovation, or will it foster diverse, localized AI solutions better suited to specific cultural and regulatory contexts?
- Who ultimately bears the cost of this increasing geopolitical fragmentation in AI – consumers, innovators, or the incumbents who must navigate a labyrinth of regulations?
Past Briefings
OpenAI Deleted ‘Safely.’ NVIDIA Reports. Karpathy Is Still Learning
THE NUMBER: 6 — times OpenAI changed its mission in 9 years. The most recent edit deleted one word: safely. TL;DR Andrej Karpathy — the engineer who wrote the curriculum that trained a generation of developers, ran AI at Tesla, and helped found OpenAI — posted in December that he's never felt so behind as a programmer. Fourteen million people saw it. Tonight, NVIDIA reports Q4 fiscal 2026 earnings after market close: analysts expect $65.7 billion in revenue, up 67% year over year. The numbers will almost certainly land. What matters is what Jensen Huang says about the next two quarters to...
Feb 23, 2026: Altman lied about a handshake on camera. CrowdStrike fell 8%. Google just killed the $3,000 photo shoot.
Sam Altman told reporters he was "confused" when Narendra Modi grabbed his hand at the India AI Impact Summit. He said he "wasn't sure what was happening." The video, which has been watched by tens of millions of people, shows Altman looking directly at Dario Amodei before raising his fist. He knew exactly what was happening. He chose not to do it, and then he lied about it. On camera. In multiple interviews. With the footage playing on every screen behind him. That would be a minor character note in any other industry. In this one, it isn't. Because on...
Feb 20, 2026: We’re Building the Agentic Web Faster Than We’re Protecting It
Google's WebMCP gives agents structured access to every website. Anthropic's data shows autonomy doubling with oversight thinning. OpenAI's agent already drains crypto vaults. Google shipped working code Thursday that hands AI agents a structured key to every website on the internet. WebMCP, running in Chrome 146 Canary, lets sites expose machine-readable "Tool Contracts" so agents can book a flight, file a support ticket, or complete a checkout without parsing screenshots or scraping HTML. Early benchmarks show 67% less compute overhead than visual approaches. Microsoft co-authored the spec. The W3C is incubating it. This isn't a proposal. It's production software already...