×
Security flaw in GitLab’s AI assistant lets hackers inject malicious code
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

Security researchers have uncovered a significant vulnerability in GitLab’s Duo AI developer assistant that allows attackers to manipulate the AI into generating malicious code and potentially leaking sensitive information. This attack demonstrates how AI assistants integrated into development platforms can become part of an application’s attack surface, highlighting new security concerns as generative AI tools become increasingly embedded in software development workflows.

The big picture: Security firm Legit demonstrated how prompt injections hidden in standard developer resources can manipulate GitLab’s AI assistant into performing malicious actions without user awareness.

  • The attack exploits Duo’s tendency to follow instructions embedded in project content like merge requests, commits, bug descriptions, and source code.
  • Researchers successfully induced the AI to add malicious URLs, exfiltrate private source code, and leak confidential vulnerability reports.

Technical details: Attackers can conceal malicious instructions using several sophisticated techniques that bypass traditional security measures.

  • Hidden Unicode characters can mask instructions that remain invisible to human reviewers but are processed by the AI.
  • Duo’s asynchronous response parsing creates a window where potentially dangerous content can be rendered before security checks are completed.

GitLab’s response: Rather than trying to prevent the AI from following instructions completely, GitLab has focused on limiting potential harm from such attacks.

  • The company removed Duo’s ability to render unsafe tags that point to non-gitlab.com domains.
  • This mitigation strategy acknowledges the fundamental challenge of preventing LLMs from following instructions while maintaining their functionality.

Why this matters: The vulnerability exposes a broader security concern about AI systems that process user-controlled content in development environments.

  • As generative AI becomes more deeply integrated into software development tools, new attack surfaces are emerging that require specialized security approaches.
  • Organizations implementing AI assistants must now consider these systems as part of their application’s attack surface and treat input as potentially malicious.
Researchers cause GitLab AI developer assistant to turn safe code malicious

Recent News

California-based singer slams AI-generated Mississippi star’s $3M record deal

Kehlani and SZA push back as AI artists compete for million-dollar record deals.

Senators probe Big Tech over H-1B hiring amid US layoffs

Foreign tech workers doubled since 2000 while tech layoffs continue to surge.

Tesla faces $51M suit after California factory robot injures technician

An 8,000-pound counterbalance knocked the technician unconscious during routine maintenance work.