AI-generated code causes 1 in 5 security breaches amid accountability gaps

AI-generated code now accounts for 24% of all production code globally and is responsible for one in five security breaches, according to new research from cybersecurity firm Aikido Security. The findings highlight a growing accountability gap: 69% of organizations have discovered vulnerabilities in AI-written code, yet no one clearly owns the problem when those flaws cause actual security incidents.

The big picture: AI coding tools are creating a dangerous blind spot in cybersecurity, where traditional responsibility structures break down and leave organizations vulnerable to prolonged remediation times.

Key vulnerabilities: The research reveals significant regional differences in AI-related security incidents.

  • European companies experienced serious incidents at a 20% rate, while US organizations saw more than double that frequency at 43%.
  • Aikido attributes this disparity to US developers being more likely to bypass security controls (72% vs 61% in Europe) and Europe’s stricter compliance requirements.
  • Even in Europe, 53% of companies admitted to having near misses with AI-generated code.

The accountability problem: Organizations struggle to assign responsibility when AI code causes breaches, creating what experts call “a real nightmare of risk.”

  • Security teams (53%), developers (45%), and managers (42%) still receive blame when AI-written code fails.
  • “Developers didn’t write the code, infosec didn’t get to review it and legal is unable to determine liability should something go wrong,” noted Mike Wilkes, Aikido’s CISO.
  • “No one knows who’s accountable when AI-generated code causes a breach.”

Tool complexity amplifies risk: The number of AI development tools organizations use directly correlates with security incidents and slower response times.

  • Companies using six to eight tools experienced incidents at a 90% rate, compared to 64% for those using just one or two tools.
  • Remediation time increases dramatically with tool complexity: 3.3 days for organizations using one or two tools versus 7.8 days for those using five or more.

Future outlook: Despite current challenges, industry professionals remain optimistic about AI’s security potential.

  • 96% of respondents believe AI will eventually write secure, reliable code within the next five years.
  • 90% expect AI to handle penetration testing within 5.5 years.
  • Only 21% think this advancement will happen without human oversight, emphasizing the continued importance of human workers in the development process.
