A new study by METR, a Berkeley-based AI benchmarking nonprofit, found that experienced developers who used AI tools to complete coding tasks actually took 19% longer than those working without AI assistance. The finding challenges widespread assumptions about AI's productivity benefits and suggests organizations may be overestimating the efficiency gains from AI tools in skilled professional work.
The big picture: Developers predicted AI would speed up their work by 24% before starting, and even after finishing still estimated a 20% speedup, yet objective timing data showed they were actually slower.
Key study details: METR's research focused on experienced open-source developers working on large, complex codebases they had helped build.
- The study was motivated by understanding “how close we might be to automating the role of an AI lab research engineer.”
- Participants were monitored throughout the coding process to measure actual completion times.
- The 19% slowdown represents a significant efficiency hit with direct business costs.
Why this matters: AI expert Gary Marcus warns this could indicate a broader pattern across industries where AI tools are becoming commonplace.
- “People might be imagining productivity gains that they are not getting, and ignoring real-world costs to boot,” Marcus explained.
- The finding represents “a serious blow to generative AI’s flagship use case” if the results prove replicable beyond coding.
Important caveats: The study’s scope and timing suggest the results may not apply universally to all AI tool usage.
- Research was conducted in early 2025 with AI tools that are “evolving and improving every day.”
- METR expects that "AI tools provide greater productivity benefits in other settings (e.g. on smaller projects, with less experienced developers, or with different quality standards)."
- The test group was highly specialized, focusing on experienced developers working on complex projects.
Broader industry context: The coding profession remains divided on AI’s impact, with competing perspectives emerging from recent studies.
- A Microsoft study raised concerns that young coders rely so heavily on AI that they don't understand how their own code works.
- OpenAI CFO Sarah Friar announced the company is developing agent-based AI tools capable of automatically “building an app for you.”
- Salvatore Sanfilippo, the creator of Redis, argued in May that human coders still outperform AI.
What organizations should consider: The study suggests companies should test their AI implementations rather than assuming productivity gains.
- For smaller businesses lacking coding teams, AI tools may still provide value by offering software development capabilities they would otherwise need to outsource.
- Companies using AI for various business tasks should run their own tests to determine whether tools are truly saving time or creating distractions.