The Developer Productivity Paradox
AI coding tools are working. That's the problem.
Here’s what nobody’s telling you about AI coding assistants: they work. And that’s exactly what should worry you.
Two studies published this month punch a hole in the “AI makes developers 10x faster” story. The data points
somewhere darker: AI coding tools deliver speed while eroding the skills developers need to use that speed well.
The Numbers Don’t Lie (But They Do Surprise)
Anthropic ran a randomized controlled trial, published January 29, 2026. They put 52 professional developers through
a new programming library. Half used AI assistants. Half coded by hand. The results weren’t close.
Developers using AI scored 17 percentage points lower on comprehension tests: manual coders averaged 67%, AI users 50%. The gap was worst around debugging, figuring out when code breaks and why.
The kicker: AI didn’t make them faster. Not in any statistically meaningful way. Just less skilled.
Participants using AI said they felt “lazy” and admitted to “gaps in understanding.” They moved faster through the
motions. They learned less.
The METR Bombshell
A second study, from METR (Model Evaluation & Threat Research), landed an even stranger result.
Between February and June 2025, researchers gave 16 experienced open-source developers real tasks from their own
repositories. Projects they’d worked on for five years on average. Developers using AI took 19%
longer to finish.
Read that again. Not juniors struggling with new code. Experts. Working in their own backyards. AI made them slower.
The strangest part: after the study, developers guessed they’d been 20% faster with AI. They were off by nearly 40
points. The tools felt faster while doing the opposite.
The Scale of the Shift
This matters because AI-written code isn’t a novelty anymore. Research from the Complexity Science Hub shows
AI-generated code grew sixfold in two years—from 5% in 2022 to nearly 30% by late 2024.
U.S. companies spend over $600 billion a year on programming labor. A 4% productivity bump (the study’s overall
estimate) sounds decent. Then you notice: the gains show up almost entirely among senior developers.
Less-experienced programmers use AI more often (37% adoption vs. lower rates for seniors). But productivity gains?
Almost none. Juniors use the tools more and get less from them.
The Junior Developer Problem
For early-career developers, the picture gets rough.
A Harvard study of 62 million workers found junior developer hiring drops 9-10% within six quarters after companies
adopt AI coding tools. Juniors see the biggest raw productivity boost from AI—and the biggest hit to skill-building.
They accept more suggestions, ask fewer questions, build less foundation.
The result, researchers say: new developers “unable to explain how or why their code works.”
Tim Kellogg, a developer who builds autonomous agents and talks about AI constantly, didn’t sugarcoat it: “Yes,
massively so. Today it’s writing code, then it’ll be architecture, then product management. Those who can’t operate
at a higher level won’t keep their jobs.”
The Experience Dividend
Not everyone’s drowning. Roland Dreier, a longtime Linux kernel contributor, described a “step-change” in the past
six months, especially after Anthropic released Claude Opus 4.5. He used to rely on AI for autocomplete. Now he
tells an agent “this test is failing, fix it” and it works.
He estimated 10x speed gains for complex tasks—building a Rust backend with Terraform deployment and a Svelte
frontend. But he worries about newcomers: “We’re going to need changes in education and training to give juniors the
experience and judgment they need.”
The pattern holds across studies and interviews: AI coding tools multiply whatever skill you already have. Twenty
years of pattern recognition? You spot bad AI output instantly. Still building that intuition? You accept the
hallucination and move on.
The Uncomfortable Question
Darren Mart, a senior engineer at Microsoft since 2006, put the tension plainly. He recently used Claude to build a
Next.js app with Azure Functions. The AI “successfully built roughly 95% of it to my spec.”
But he stays cautious: “I’m only comfortable using them for tasks I already fully understand. Otherwise there’s no
way to know if I’m heading down a bad path and setting myself up for a mountain of technical debt.”
This is the paradox. AI works best for people who need it least. Experts use it to speed up what they already know.
Novices use it to skip what they don’t. The skipping costs them.
What Changes
Organizations won’t stop using AI coding tools. The productivity gains for experienced devs are real. The pressure to
ship faster isn’t going anywhere.
But the evidence says something has to shift. Managers should deploy AI deliberately, making sure engineers keep
learning as they work. Some providers now offer learning modes—Anthropic’s Claude Code Learning, OpenAI’s ChatGPT
Study Mode—built to explain, not just produce.
The skill of 2026 isn’t writing a quicksort. It’s looking at an AI-generated quicksort and spotting instantly that its pivot choice degrades to quadratic time on already-sorted input. That takes more expertise, not less.
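To make that review judgment concrete, here’s a minimal Python sketch. The code and function names are illustrative, not drawn from any study cited above: the first version is the kind of plausible-looking quicksort an assistant might emit, and the second shows the one-line fix an experienced reviewer would suggest.

```python
import random

# Hypothetical AI-generated quicksort. It is correct, but the
# first-element pivot degrades to O(n^2) comparisons (and recursion
# depth n) when the input is already sorted -- the flaw a seasoned
# reviewer flags on sight.
def quicksort_naive(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[0]  # worst-case pivot choice for sorted input
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return quicksort_naive(smaller) + [pivot] + quicksort_naive(larger)

# The review fix: a random pivot makes the quadratic worst case
# vanishingly unlikely, keeping expected time at O(n log n).
def quicksort_fixed(xs):
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return quicksort_fixed(smaller) + equal + quicksort_fixed(larger)
```

Both functions return the same sorted list; the difference only shows up in how they behave on adversarial inputs, which is exactly the kind of property a comprehension test probes.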
Eric Cheng, CEO at Jobright, put it this way: the developers who thrive “will treat AI like a junior engineer on the
team—helpful, fast, but needing oversight. Knowing how to prompt, review, and improve AI output will be as essential
as writing clean code.”
Here’s the thing: the tools built to make coding easier are making the job of being a good developer harder. Speed is
a byproduct. Judgment is still the product.
Sources: Ars Technica, ZDNET, CIO, METR, Anthropic Study