Experts say the high failure rate in AI adoption isn't a bug, but a feature: 'Has anybody ever started to ride a bike on the first try?'

Three executives from Microsoft, Bloomberg Beta, and AI startup Sola defended high AI failure rates at Fortune’s Most Powerful Women conference, arguing that the widely cited MIT finding that 95% of enterprise AI pilots fail reflects normal learning curves rather than fundamental technology problems.
What they’re saying: The panelists used vivid analogies to normalize AI experimentation failures and encourage continued investment.
- “Has anybody ever started to ride a bike on the first try? No. We get up, we dust ourselves off, we keep experimenting, and somehow we figure it out. And it’s the same thing with AI,” said Karin Klein, founding partner at Bloomberg Beta.
- “We’re on that jagged frontier, which is we’re going to have some wins, and then we’re going to see that trough, and then we’re going to have some more wins,” added Amy Coleman, Microsoft’s executive vice president and chief people officer.
Key context: Jessica Wu, co-founder and CEO of Sola, put the MIT study’s findings in perspective by comparing AI adoption to earlier waves of enterprise technology deployment.
- The study shows that only 5% of AI tools being tested make it into production, but Wu noted that success rates for large enterprise technology deployments historically hovered around 10% or lower.
- “At the same time, AI is very new. It’s going to hallucinate. You’re going to have to work with experimentation in ways that previous [generations] wouldn’t have,” Wu explained.
In plain English: When AI systems “hallucinate,” they generate information that sounds plausible but is actually incorrect or made up—like a confident student giving a wrong answer that seems right.
What successful implementation requires: The executives outlined specific organizational conditions necessary for effective AI adoption.
- Coleman emphasized building “AI fluency” across workforces through collaborative approaches where technical experts work alongside business users.
- Wu highlighted the need for both top-down leadership support and bottom-up engagement from employees who understand daily workflows.
- “Leadership really enabling employees to test and build things safely obviously, but giving people the flexibility to experiment, try new tools, encourage them to use and build AI and help them build fluency,” Wu said.
Cultural transformation needed: Coleman stressed that organizational culture matters more than the technology itself for successful AI implementation.
- Companies must embrace being “okay with failure” and “okay with messy” as they navigate the “entry point of this transformation.”
- Managers need to “stop assessing tasks and start teaching learning” to create what she called “a learning organization.”
- The key conditions include “vulnerability and courage” as organizations navigate technology that moves faster than previous transformations.
The human element: Coleman pushed back against concerns that AI enthusiasm diminishes the value of human work.
- “The more we talk about AI, the more people think that we don’t trust humans,” she said. “It’s really important that we’re talking about the criticality of humans in all these workflows.”
- The focus should be on freeing up time for uniquely human capabilities rather than replacing human workers.
Broader implications: Klein encouraged widespread experimentation beyond formal enterprise deployments, suggesting people become “vibe coders” who use accessible AI tools to build applications without traditional programming backgrounds.