Signal/Noise

2025-11-06

Today’s stories reveal a fundamental shift from AI hype to AI infrastructure reality. The era of moonshot promises is ending as companies face the brutal arithmetic of making AI profitable—and the only path forward requires replacing human work at industrial scale, building massive physical infrastructure, and controlling the entire value chain from chips to content.

The Great AI Infrastructure Land Grab

While everyone obsesses over which chatbot gives better answers, the real power play is happening in physical space. Google’s announcing plans to put data centers in orbit. The UK is building an ‘AI factory of national importance’ next to a literal landfill (for the renewable energy connection). China’s racing to build sovereign AI capacity. This isn’t about better algorithms—it’s about controlling the means of AI production.

The location decisions reveal the actual constraints: cheap power, cooling capacity, and regulatory arbitrage. That Derbyshire facility isn’t going next to Cambridge; it’s going where the electricity is guaranteed and the planning permission flows faster. Google’s space gambit isn’t sci-fi fantasy; it’s acknowledging that Earth’s resources are finite and the competition for power grid access is becoming existential.

Meanwhile, the incumbents are playing defense through offense. News Corp isn’t just licensing to OpenAI—they’re building a multi-LLM revenue strategy because they understand that in the infrastructure game, diversification is survival. When your content becomes training data, you better have deals with everyone who might build the next foundation model.

The strategic insight everyone’s missing: AI isn’t a software business anymore. It’s a utilities business. The companies building the rails will capture more value than those running trains on them.

The Automation Honesty Hour

Geoffrey Hinton just said the quiet part out loud: AI can only be profitable if it replaces human jobs. Not ‘augments’ them, not ‘empowers’ them—replaces them. This isn’t coming from some dystopian critic; this is the godfather of AI explaining basic economics to investors who’ve pumped trillions into the sector.

Look at the numbers behind the narrative. OpenAI has $1.4 trillion in infrastructure commitments against a $20 billion revenue run rate. Sam Altman just publicly rejected any government bailout, essentially betting the company on exponential revenue growth through automation. The Nexford study showing AI users earn 40% more isn’t about human empowerment—it’s about identifying which humans are valuable enough to keep around as AI operators.

The labor replacement isn’t coming through dramatic robot uprisings. It’s happening through boring business process optimization. SAP’s embedding AI agents into every workflow. Utilities are using AI to manage power grids without human intervention. The mundane, steady elimination of middle-management roles and routine knowledge work.

Here’s what’s strategic: the companies being honest about job displacement will capture more value than those still pretending AI just makes everyone more productive. Honesty allows for better planning, clearer ROI calculations, and realistic timelines for replacing workforce costs.

The Truth Wars Begin

Google’s AI Overview just told Australians they need to keep headlights on 24/7 or face $250 fines—completely fabricated but displayed as authoritative search results. Meanwhile, OpenAI is being sued because ChatGPT allegedly encouraged a young man’s suicide. The age of AI misinformation isn’t coming; it’s here, and it’s worse than anyone anticipated.

This isn’t just about better fact-checking. Traditional media companies like News Corp are positioning themselves as guardians of truth in an AI-generated world, selling their credibility as much as their content. The subtext: when algorithms hallucinate with authority, verified human journalism becomes more valuable, not less.

But here’s the strategic tension: the same companies building AI systems need the content that could save them from misinformation chaos. Apple’s reportedly negotiating to power Siri with Google’s Gemini models because even Apple can’t solve the accuracy problem alone. Everyone needs everyone else’s data and models, creating a web of dependencies that makes the whole system fragile.

The real winner won’t be whoever builds the most accurate AI, but whoever controls the verification layer. That’s why enterprise AI tools are embedding citation systems and why content licensing deals are multiplying. In a world of infinite AI-generated content, provenance becomes the scarcest resource.

Questions

  • If AI infrastructure is becoming a utilities business, which traditional utility regulations will governments apply to prevent AI monopolies?
  • When companies admit AI’s purpose is job replacement, how will they maintain consumer demand if their own customers become unemployed?
  • If every AI system needs human-verified content to avoid misinformation chaos, who ultimately controls the truth verification infrastructure?
