
Nearly half of AI-generated code contains security vulnerabilities despite appearing production-ready, according to new research from cybersecurity company Veracode that examined more than 100 large language models across 80 coding tasks. The findings reveal that even advanced AI coding tools create significant security risks for companies increasingly relying on artificial intelligence to supplement or replace human developers, and that newer and larger models show no improvement in security performance.

What you should know: The security flaws affect all major programming languages, with Java experiencing the highest failure rate at over 70%.

  • Python, C#, and JavaScript also showed concerning failure rates of 38% to 45%.
  • Large language models chose insecure coding methods 45% of the time across all tested scenarios.
  • The research found no correlation between model size or recency and security performance.

The vulnerabilities in detail: AI-generated code consistently fails to defend against common attack vectors that have plagued software development for years.

  • Cross-site scripting vulnerabilities appeared in 86% of cases where LLMs should have implemented proper defenses.
  • Log injection attacks succeeded 88% of the time against AI-generated code.
  • These failure rates occur even when the generated code appears functional and ready for production use.

In plain English: Cross-site scripting is like leaving your front door unlocked—it allows malicious actors to inject harmful code into websites that then runs on visitors’ computers. Log injection is similar to someone tampering with a building’s security logbook to hide their tracks or plant false information.
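
To make these two failure modes concrete, here is a minimal, hypothetical Python sketch (not code from the Veracode study): each vulnerable function mirrors the kind of output an LLM might produce, and each safe variant shows the defense the research found missing.

    import html
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("auth")

    # --- Cross-site scripting: untrusted input rendered as HTML ---

    def greeting_vulnerable(username: str) -> str:
        # Interpolating raw input into markup lets a value such as
        # "<script>...</script>" execute in the visitor's browser.
        return f"<p>Welcome, {username}!</p>"

    def greeting_safe(username: str) -> str:
        # Escaping converts markup characters to inert entities first.
        return f"<p>Welcome, {html.escape(username)}!</p>"

    # --- Log injection: untrusted input written verbatim to logs ---

    def record_login_vulnerable(username: str) -> None:
        # A username containing "\n" forges extra log lines, which can
        # hide an attacker's tracks or plant false entries.
        log.info("login attempt by %s", username)

    def record_login_safe(username: str) -> None:
        # Escaping newlines keeps each event on exactly one line.
        cleaned = username.replace("\r", "\\r").replace("\n", "\\n")
        log.info("login attempt by %s", cleaned)

    if __name__ == "__main__":
        payload = "<script>alert('xss')</script>\nINFO forged admin login"
        print(greeting_vulnerable(payload))   # script tag survives intact
        print(greeting_safe(payload))         # script tag is neutralized
        record_login_vulnerable(payload)      # writes a forged second line
        record_login_safe(payload)            # newline is escaped away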

Why this matters: The security gaps coincide with AI’s growing role in software development, creating a potentially dangerous combination.

  • As much as one-third of new code at Google and Microsoft is now AI-generated, according to the research.
  • “The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” explained Veracode CTO Jens Wessling.
  • AI also enables attackers to exploit vulnerabilities faster and at scale, amplifying the impact of insecure code.

What they’re saying: Security experts warn that the current trajectory could create massive technical debt if left unaddressed.

  • “Our research shows models are getting better at coding accurately but are not improving at security,” Wessling noted.
  • “AI coding assistants and agentic workflows represent the future of software development… Security cannot be an afterthought if we want to prevent the accumulation of massive security debt,” he concluded.

Recommended solutions: Veracode suggests several measures to mitigate the security risks while still leveraging AI development tools.

  • Enable security checks in AI-driven workflows to enforce compliance and security standards (a minimal sketch follows this list).
  • Adopt AI remediation guidance to train developers on secure coding practices.
  • Deploy firewalls and detection tools that can identify flaws earlier in the development process.
  • Implement systematic security reviews for AI-generated code before production deployment.
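
As one illustration of the first and last items above, the sketch below wires a pre-merge security gate using Bandit, an open-source Python security linter. Veracode does not prescribe this tool, and the source directory and severity threshold are assumptions for the example.

    import subprocess
    import sys

    def security_gate(source_dir: str = "src") -> int:
        # Bandit must be installed separately ("pip install bandit").
        # "-r" scans recursively; "-ll" reports only medium-or-higher
        # severity findings. Bandit exits nonzero when issues are found.
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-ll"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(result.stdout)
            print("Security gate failed: resolve findings before merging.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(security_gate())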
