Signal/Noise
2025-11-20
While everyone debates whether AI is a bubble, the real story is a massive power consolidation happening beneath the surface. The federal government is moving to crush state AI regulation, enterprises are building fortress-like local systems, and a new class of AI grifters is emerging to profit from regulatory confusion—all while the technology’s actual adoption remains stubbornly slow.
The Federal Steamroller Comes for State AI Laws
Trump’s leaked executive order to weaponize the DOJ against state AI laws isn’t just regulatory maneuvering; it’s the opening move in a winner-take-all battle over who controls the future of American technology. The draft order would create a DOJ task force specifically to challenge state laws as unconstitutional, withhold federal funding from non-compliant states, and preempt the patchwork of regulations that Silicon Valley claims is stifling innovation.
But look past the ‘innovation versus regulation’ framing. This is really about cementing the dominance of a handful of mega-tech companies before alternatives can emerge. States like California and Colorado have been the only entities with enough power to actually impose meaningful constraints on AI development—requiring transparency, safety protocols, and algorithmic accountability. Crush that, and you’re left with purely voluntary industry self-regulation.
The timing isn’t coincidental. We’re seeing simultaneous AI bubble concerns, slowing enterprise adoption, and growing skepticism about returns on massive AI investments. The last thing Big Tech needs is state-level regulation creating compliance costs that could tip already-marginal AI projects into the red. Better to use federal preemption to lock in the current oligopoly structure while the window remains open.
The real tell? Even GOP senators like Josh Hawley are objecting on federalism grounds, suggesting this isn’t about conservative principles but about protecting specific corporate interests. When you lose traditional Republicans on a states’ rights issue, you’ve revealed who the real beneficiaries are.
The Great Enterprise AI Fortress-Building
While the headlines obsess over ChatGPT subscriptions and public AI tools, enterprises are quietly building something entirely different: local AI fortresses designed to keep their data completely isolated from the cloud giants. The story emerging from education and corporate IT leaders reveals the real AI adoption strategy for organizations that actually handle sensitive information.
Take the Merced County Office of Education, which argues that running local LLMs with tools like Ollama and contracting with Cisco or Nvidia for enterprise support is more cost-effective and secure than paying for cloud subscriptions. They’re willing to accept 90% of the capabilities to keep 100% control of their data. This isn’t technological Luddism; it’s rational risk management.
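For a concrete sense of what this fortress pattern looks like, here is a minimal sketch of querying a locally hosted model through Ollama’s HTTP API, which listens on localhost by default so prompts and responses never leave the machine. The model name and the example prompt are illustrative assumptions, not details of the Merced deployment.

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default;
# nothing in this request touches a cloud provider.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama model and return its reply.

    The model name is an illustrative assumption; any model pulled with
    `ollama pull <name>` would work the same way.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON reply instead of a stream
    }).encode("utf-8")

    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["response"]

if __name__ == "__main__":
    # Hypothetical prompt over data an education office would not want
    # to hand to a cloud provider.
    print(ask_local_model("Summarize this draft student-records retention policy."))
```

The specific model matters less than the shape of the loop: the entire request and response stays inside the building, which is exactly the trade the district is making, roughly 90% of the capability for 100% of the control.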
The broader pattern is telling: enterprises are moving fast on AI infrastructure but slowly on actual deployment. Half of CFOs expect AI to create new roles while nearly as many expect job cuts, yet only 12% feel prepared for these shifts. Companies are buying the picks and shovels without knowing what they’re mining for.
This creates a fascinating dynamic where supplier revenue looks robust (hello, Nvidia’s $57 billion quarter) while actual workflow transformation remains limited. Enterprises are essentially stockpiling AI capability in local fortresses, waiting to see how the regulatory and competitive landscape shakes out before committing to specific use cases.
The winners here aren’t the flashy AI chatbot companies but the infrastructure players enabling this fortress-building: hardware providers, enterprise AI platforms, and cybersecurity companies that can guarantee data never leaves the building.
The AI Grift Economy Goes Mainstream
Rob Braxman’s elaborate privacy phone scam reveals something bigger than one bad actor—it shows how AI anxiety is creating a new class of sophisticated grifters who exploit the gap between technological fear and understanding. Braxman built an entire ecosystem around fake privacy solutions, selling phones that can’t make calls and offering encrypted communications that send keys in plain text, all while positioning himself as the antidote to Big Tech surveillance.
But the real innovation here isn’t technical—it’s psychological. Braxman understood that people’s AI fears aren’t really about specific technical capabilities but about loss of control and agency. So he sold them the illusion of control: ‘private’ phones, ‘secure’ messaging, and explanations for why mainstream solutions couldn’t be trusted. The products didn’t have to work; they just had to feel like resistance to an overwhelming technological tide.
This grift economy extends far beyond individual scammers. Look at Xania Monet, the AI-generated pop star that signed a $3 million deal after hitting 17 million streams. The entire project exists because someone realized they could capture the novelty value of AI-generated content while actual human creativity becomes more valuable by contrast. It’s arbitraging the temporary fascination with AI against the enduring appeal of authentic human expression.
Meanwhile, legitimate AI safety concerns get drowned out by both the grifters selling fake solutions and the companies overselling AI capabilities. When judges complain about becoming ‘human filters’ for AI-generated legal arguments, and chatbots prove dangerous for teen mental health, the real issues get lost in the noise between scammers and boosters.
The through-line: AI’s greatest impact so far might be creating new opportunities for sophisticated deception rather than genuine productivity improvements.
Questions
- If enterprises are stockpiling local AI infrastructure but not deploying it, what happens when the bubble finally pops and they’re stuck with expensive hardware they never actually used?
- Will the federal preemption of state AI laws backfire by making AI development less trustworthy in the eyes of consumers and enterprises who relied on state oversight?
- When the AI grift economy inevitably collapses, will it take legitimate AI safety research and development down with it?