The Species That Wasn’t Ready
Last Tuesday, Matt Shumer — an AI startup founder and investor — published a viral 4,000-word post on X comparing the current moment to February 2020. Back then, a few people were talking about a virus emerging from Wuhan, China. Most of us weren’t listening. Three weeks later, the world rearranged itself.
His argument: we’re in the “this seems overblown” phase of something much bigger than Covid.
The same morning, my wife told me she was sick of AI commercials. Too much hype. Reminded her of crypto. Nothing good would come of it. Twenty dollars a month? For what?
She’s not wrong about the hype. A researcher named Cam Pedersen recently fit hyperbolic models to five metrics of AI progress and found something uncomfortable. The capability metrics — benchmark scores, cost per token, release intervals — are improving linearly. Steadily. No hockey stick. The only curve going vertical? The number of researchers writing about AI. Human attention is the variable approaching infinity, not machine intelligence. The social singularity is front-running the technical one. My wife can smell the bullshit, and her instinct is to hold her nose.
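To make the shape of Pedersen’s test concrete, here is a toy version of the comparison. This is my own illustration, not his code: the data points are invented, and the hyperbolic form y(t) = a / (t_c − t), the standard finite-time-singularity model, stands in for whatever functional family he actually fit. The question the fit answers is simple: does a straight line explain the series about as well as a curve that blows up?

```python
# Toy linearity-vs-singularity test. Illustrative only: the data is
# invented, not one of Pedersen's five metrics.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(8, dtype=float)                     # years since some start date
y = np.array([10, 14, 19, 23, 28, 33, 37, 42.0])  # a steadily improving metric

def linear(t, m, b):
    return m * t + b

def hyperbolic(t, a, t_c):
    # Finite-time singularity: the curve blows up as t approaches t_c.
    return a / (t_c - t)

lin_p, _ = curve_fit(linear, t, y)
# t_c is bounded past the observed window so the fit stays finite.
hyp_p, _ = curve_fit(hyperbolic, t, y, p0=[50.0, 12.0],
                     bounds=([1e-6, t.max() + 0.5], [np.inf, 100.0]))

lin_sse = float(np.sum((y - linear(t, *lin_p)) ** 2))
hyp_sse = float(np.sum((y - hyperbolic(t, *hyp_p)) ** 2))

# A series that is "going vertical" makes the hyperbolic fit win decisively,
# with t_c landing in the near future. A steady series like this one is
# explained better by the straight line, and the fitted singularity never
# actually arrives.
print(f"linear SSE: {lin_sse:.2f}  hyperbolic SSE: {hyp_sse:.2f}")
print(f"fitted critical time t_c: {hyp_p[1]:.1f} years from the start")
```

Run on a steady series like the one above, the line wins; only a series that is genuinely accelerating toward a wall hands the victory to the hyperbola. By this test, attention is hyperbolic and capability is not.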
The problem is that the signal underneath keeps getting louder.
Gradually, Then Suddenly
Ernest Hemingway wrote the best description of how AI will actually hit: “How did you go bankrupt?” “Two ways. Gradually, then suddenly.” That’s what’s happening. The hype is loud and fast, but the actual effects of AI are accumulating slowly — and then one day they’ll be everywhere.
But how slowly? Newsletter writer Akash Gupta offered the sharpest rebuttal to Shumer’s post: 90% of American businesses still don’t use AI in production. Anthropic’s own research with Census Bureau data shows enterprise AI adoption crawled from 3.7% to 9.7% over two years — two years of the fastest capability improvement in computing history, and fewer than one in ten businesses actually deployed it. The capability curve keeps climbing. The deployment curve keeps crawling. The distance between those two lines is where we actually live.
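One way to feel the size of that gap is to run the two reported numbers forward. The sketch below is back-of-the-envelope arithmetic of my own, not anything from Gupta or Anthropic: it assumes adoption follows a logistic curve that eventually saturates at 100%, pins that curve to the two data points (3.7% at year zero, 9.7% at year two), and uses a placeholder capability index that doubles once a year.

```python
# Back-of-the-envelope sketch of the deployment gap. The logistic-adoption
# and doubling-capability assumptions are illustrative, not reported figures.
import math

p_start, p_later, years_between = 0.037, 0.097, 2.0

def logit(p):
    return math.log(p / (1.0 - p))

# A logistic curve p(t) = 1 / (1 + e^{-k (t - t0)}) is a straight line in
# logit space, so two points determine it completely.
k = (logit(p_later) - logit(p_start)) / years_between  # ~0.51 per year
t50 = -logit(p_start) / k                              # year adoption hits 50%

print(f"adoption crosses 50% around year {t50:.1f}")   # ~6.3 years out

# A capability index doubling annually has grown 2**t50-fold by that point.
print(f"capability multiple by then: {2 ** t50:.0f}x")
```

Even under these friendly assumptions, majority adoption sits more than six years out while the index keeps doubling underneath it. That widening wedge is the distance described above.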
History Repeats: The Deployment Gap
This shouldn’t surprise anyone who knows their history. ATMs began spreading widely in the 1970s. The number of US bank tellers increased until 2007 — three full decades later — because ATMs made branches cheaper to operate, which expanded the total branch count. Electricity took thirty years to reshape manufacturing after the first power plants fired up. Factories had to be physically redesigned around electric motors instead of steam-driven belt systems. The resistance wasn’t technological. It was architectural. It was human.
The bottleneck has moved. It’s no longer “can AI do this task?” It’s “can our organization deploy it?” And that second bottleneck runs on procurement cycles, compliance reviews, infrastructure buildouts, and institutional trust — none of which compress the way model capabilities do.
The CISO Problem: Prohibition in the Age of AI
Rick Grinnell confirmed this in CIO this week, after spending months interviewing more than fifty enterprise CISOs. Almost none of them had deployed a single AI security solution. Their strategy? Prohibition. No AI allowed. Legacy firewall rules. Policies written for a world that no longer exists.
Meanwhile, their employees are building anyway. Rogue AI agents. Unauthorized automation wired into CRMs and customer data. Not because they’re malicious — because the official tools are slow, the official channels said “maybe Q3,” and they’d rather risk a security incident than fall behind the person in the next cubicle.
The Innovator’s Dilemma in Real Time
This is the innovator’s dilemma playing out in real time. Move too fast and you break your business. Move too slow and someone eats your lunch. Elon Musk understood this when he started blowing up rockets on purpose — iterating at a pace NASA’s procurement process couldn’t comprehend. Palmer Luckey understood it when he built Anduril to outrun defense incumbents who were still filling out paperwork for last decade’s threats.
My two daughters embody both sides of this dilemma perfectly. My younger daughter works at an interior design firm. She had thirty custom GPTs before her boss knew what GPT stood for. AutoCAD, Claude, ChatGPT, Gemini — she could do the work of five people because she treated the tools as what they are: cognitive leverage. No systems to protect. No CIO to report to. Nothing but work to be done. She is SpaceX — fast, agile, unburdened.
My older daughter — a rocket scientist, literally, with a master’s in aerospace engineering — is skeptical. She designs satellites at a major aerospace company. If she’s wrong, hardware melts in orbit, years of work evaporate, and nobody builds another one. Six nines of reliability. Triple redundancy. She has nothing but risk. Her skepticism isn’t ignorance. It’s discipline calibrated for a world where the iteration cycle is measured in years and the cost of failure is measured in millions.
These two reactions aren’t a sibling disagreement. They’re a species-wide phenomenon.
From Steel Mills to Knowledge Work
The post-war industrial era built the American middle class on manufacturing. Then globalization arrived and those jobs shipped overseas. The consequences were real and lasting — economic, political, cultural. There’s a reason you’ll find Pittsburgh Steelers fans in droves at every away game. They left when the mills closed, scattering across the country with their Terrible Towels and their identities intact but their livelihoods gone. The information age that followed ushered in a golden era for knowledge workers — the lawyers, analysts, consultants, and engineers who traded in ideas instead of steel.
Now AI is coming for them. But here’s what makes this different: we aren’t shipping these jobs overseas. There are no mills to reopen. There’s no convenient adjacent industry absorbing displaced workers. The very nature of cognitive work — reading, writing, analyzing, deciding — is what AI does, and it does more of it every month.
Institutions Built for a Slower World
And yet we legislate for what happened two years ago. We update school curricula at a glacial pace. A kid entering college this fall is being prepared for a job market that may not exist by graduation. Anyone with curiosity and twenty dollars a month now has access to the most patient, knowledgeable tutor in history. But the institutions built to deliver education have decided it’s easier to ban the tools than to rethink the model.
This is the pattern everywhere. Institutions built for a slower world are being broken apart from the inside by the people stuck within them, while the bureaucracy holds on for dear life. The CISOs ban AI. The employees use it anyway. The schools prohibit ChatGPT. The students use it anyway. The regulators draft frameworks for last year’s models. The labs ship next year’s models anyway.
Everyone Is Right. Nobody Is Right.
Shumer’s right that the technology is real. My wife is right that the hype is insufferable. Gupta’s right that deployment is painfully slow. My older daughter is right that caution has value. My younger daughter is right that the tools work if you just use them. The CISOs are right that there are risks. Their employees are right that waiting isn’t an option.
Everyone is right about their piece of it. Nobody is right about all of it. And that’s the problem — because “all of it” is what’s coming, whether we’re ready or not. The technology always moves faster than humans can understand it, and the humans always move faster than the institutions that govern them.
The only question is how long “gradually” lasts before “suddenly” arrives.
Harry DeMott is co-founder of CoAi, a modern intelligence desk for the AI era.