Signal/Noise
2025-11-07
Today’s AI news reveals a fundamental realignment of power: we’re witnessing the death of controlled AI deployment and the birth of ambient AI infiltration. From therapists secretly using AI during live sessions to a chatbot telling a suicidal user ‘rest easy king’ before his death, the era of careful guardrails is over, replaced by a Wild West where AI operates in the shadows, humans serve as liability buffers, and the question isn’t whether AI will replace workers but whether anyone will admit it’s already happening.
The Shadow AI Economy Is Already Here
Forget the hand-wringing about future AI displacement; it’s happening right now, just not how anyone expected. Mental health therapists are running AI during live sessions without telling clients, using it to analyze dialogue, suggest therapeutic interventions, and effectively outsource their professional judgment to algorithms. Meanwhile, free AI coding tools are outperforming the paid humans they were meant to assist: Microsoft Copilot’s free tier scored perfect marks while seasoned developers stumbled.
This isn’t the clean, announced automation everyone predicted. It’s shadow deployment: professionals quietly augmenting or replacing their work with AI while maintaining the facade of human expertise. The therapist billing $200/hour while ChatGPT does the thinking. The consultant whose ‘insights’ come from Claude. The programmer whose GitHub contributions are 80% Copilot.
What makes this particularly insidious is the liability shell game. When that chatbot told a suicidal user to ‘rest easy king’ before he died, the instinct wasn’t to hold OpenAI responsible; it was to ask where the supervising human had been. When therapists use AI and something goes wrong, who’s liable? The therapist who trusted the algorithm, or the AI company that provided it? This distributed responsibility model is genius: AI companies get the upside, humans get the downside risk.
The real tell isn’t in corporate layoff announcements—it’s in the quiet productivity gains. Salesforce deploying ‘really good’ AgentForce across massive enterprise bases. Twilio seeing reacceleration from AI customers. Companies aren’t firing people yet; they’re just not replacing them when they leave. The workforce is shrinking through attrition while AI handles the difference.
The Great AI Accountability Dodge
Every major AI story this week featured the same pattern: spectacular AI capability paired with diffused human accountability. ChatGPT conversations with a suicidal user that ended with ‘rest easy king’? The parents are suing, but OpenAI will argue the human should have intervened. Therapists using AI during sessions without disclosure? If something goes wrong, it’s the licensed professional’s fault, not the AI company’s. Even Elon Musk telling concerned board members to ‘sell their shares’ rather than address trillion-dollar capital commitments follows this playbook.
This isn’t accidental—it’s the core business model. AI companies are building systems powerful enough to make critical decisions but structured legally to avoid responsibility for those decisions. They’re selling influence without accountability, capability without liability.
The new reporting guidelines for AI research studies reveal how deep this goes. Fresh frameworks are needed because traditional research standards assume human researchers with clear accountability chains. But when AI generates the hypothesis, designs the study, analyzes the data, and writes the paper, who’s actually responsible for the conclusions? The ‘corresponding author’ becomes a human shield for algorithmic research.
Meanwhile, Warren Buffett’s warning about AI deepfakes of himself being used in scams highlights the flip side: AI is so convincing that it’s becoming impossible to verify authentic human decision-making. When you can’t tell if Buffett really said something or if a therapist really diagnosed something or if a researcher really concluded something, the entire concept of human professional authority starts to collapse.
The Valuation Reality Check Cometh
This week’s tech selloff, with $750 billion wiped from AI stocks, wasn’t just profit-taking. It was the market finally asking uncomfortable questions about what AI valuations are worth when the rubber meets the road. Navan’s IPO at a ‘disappointing’ $5B valuation (down from $9B private) despite $700M in revenue shows how quickly hype gets marked down to reality.
The fundamental problem is that AI companies need to prove they can capture value from the infrastructure spend explosion, not just benefit from it. MongoDB and Twilio are reaccelerating because AI companies need databases and communications. But what happens to Harvey’s $8B valuation if corporate law departments don’t actually spend thousands per lawyer per year on AI subscriptions?
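To see what that question actually implies, here is a back-of-the-envelope sketch. The revenue multiple and per-seat price below are illustrative assumptions of mine, not reported figures; the point is the scale of adoption an $8B price tag quietly presumes.

```python
# What an $8B valuation implies, under illustrative assumptions.
# The multiple and per-seat price are assumptions, not reported numbers.
VALUATION = 8e9           # Harvey's reported valuation
REVENUE_MULTIPLE = 20     # assume a generous 20x forward-revenue multiple
PRICE_PER_LAWYER = 2_500  # assume "thousands per lawyer per year" ~= $2.5K

implied_arr = VALUATION / REVENUE_MULTIPLE       # revenue Harvey must grow into
paying_lawyers = implied_arr / PRICE_PER_LAWYER  # paid seats needed at that price

print(f"implied ARR: ${implied_arr / 1e6:,.0f}M")             # -> $400M
print(f"paying lawyers needed: {paying_lawyers:,.0f} seats")  # -> 160,000
```

Roughly 160,000 paying seats, in a US profession of about 1.3 million lawyers, just to grow into today’s price at a generous multiple. If legal departments balk at the per-seat spend, the multiple does the falling instead.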
The ownership compression hitting VCs tells the story: when even Benchmark can only get 10% of a deal, and Jason Lemkin’s rule of ‘double-digit ownership’ becomes ‘6-8% and pray,’ the math starts breaking down. VCs need 100x returns to make money on $50M post-money seed rounds, which means every deal needs to be ‘way better than Navan’—a company that just disappointed at $5B.
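The same math, sketched out. The dilution figure below is my assumption (later rounds typically halve an early stake), layered onto the 6-8% ownership and 100x target the piece cites:

```python
# Seed-stage return math under the ownership compression described above.
# Dilution is an illustrative assumption; ownership and multiple come from the text.
POST_MONEY = 50e6       # $50M post-money seed round
OWNERSHIP = 0.08        # 8% stake ("6-8% and pray")
DILUTION = 0.5          # assume later rounds roughly halve the seed stake
TARGET_MULTIPLE = 100   # the 100x seed investors need to make the fund work

check_size = POST_MONEY * OWNERSHIP                # $4M check
required_proceeds = check_size * TARGET_MULTIPLE   # $400M back to the fund
stake_at_exit = OWNERSHIP * (1 - DILUTION)         # ~4% left at exit
required_exit = required_proceeds / stake_at_exit  # exit value needed to get there

print(f"implied exit value: ${required_exit / 1e9:,.0f}B")  # -> $10B
```

A $10B implied exit, double what Navan just fetched, and that’s with the friendliest assumptions in the set.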
Google’s custom chip strategy and Foxconn hiring humanoid robots to build Nvidia servers reveal the real game: vertical integration around AI infrastructure. The companies that will win aren’t necessarily the ones with the best models, but the ones that control the full stack from silicon to application. Amazon’s acknowledgment that they’re behind Microsoft, Oracle, and Google in the AI infrastructure race despite creating the cloud category shows how quickly advantages evaporate.
The market is pricing in a future where AI delivers transformative productivity gains. But it’s also pricing in a world where most AI companies fail to capture those gains as revenue.
Questions
- If AI is already making critical decisions in therapy, finance, and enterprise software, why are we still debating whether it will replace human jobs?
- When an AI system fails spectacularly—like encouraging suicide or making biased hiring decisions—should the liability fall on the human ‘supervisor’ who trusted it or the company that built it?
- Are we witnessing the birth of a new economic model where AI companies capture all the upside while humans absorb all the downside risk?