×
Security flaw in GitLab’s AI assistant lets hackers inject malicious code
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Security researchers have uncovered a significant vulnerability in GitLab’s Duo AI developer assistant that allows attackers to manipulate the AI into generating malicious code and potentially leaking sensitive information. This attack demonstrates how AI assistants integrated into development platforms can become part of an application’s attack surface, highlighting new security concerns as generative AI tools become increasingly embedded in software development workflows.

The big picture: Security firm Legit demonstrated how prompt injections hidden in standard developer resources can manipulate GitLab’s AI assistant into performing malicious actions without user awareness.

  • The attack exploits Duo’s tendency to follow instructions embedded in project content like merge requests, commits, bug descriptions, and source code.
  • Researchers successfully induced the AI to add malicious URLs, exfiltrate private source code, and leak confidential vulnerability reports.

Technical details: Attackers can conceal malicious instructions using several sophisticated techniques that bypass traditional security measures.

  • Hidden Unicode characters can mask instructions that remain invisible to human reviewers but are processed by the AI.
  • Duo’s asynchronous response parsing creates a window where potentially dangerous content can be rendered before security checks are completed.

GitLab’s response: Rather than trying to prevent the AI from following instructions completely, GitLab has focused on limiting potential harm from such attacks.

  • The company removed Duo’s ability to render unsafe tags that point to non-gitlab.com domains.
  • This mitigation strategy acknowledges the fundamental challenge of preventing LLMs from following instructions while maintaining their functionality.

Why this matters: The vulnerability exposes a broader security concern about AI systems that process user-controlled content in development environments.

  • As generative AI becomes more deeply integrated into software development tools, new attack surfaces are emerging that require specialized security approaches.
  • Organizations implementing AI assistants must now consider these systems as part of their application’s attack surface and treat input as potentially malicious.
Researchers cause GitLab AI developer assistant to turn safe code malicious

Recent News

AI courses from Google, Microsoft and more boost skills and résumés for free

As AI becomes critical to business decision-making, professionals can enhance their marketability with free courses teaching essential concepts and applications without requiring technical backgrounds.

Veo 3 brings audio to AI video and tackles the Will Smith Test

Google's latest AI video generation model introduces synchronized audio capabilities, though still struggles with realistic eating sounds when depicting the celebrity in its now-standard benchmark test.

How subtle biases derail LLM evaluations

Study finds language models exhibit pervasive positional preferences and prompt sensitivity when making judgments, raising concerns for their reliability in high-stakes decision-making contexts.