×
Forrester on AI security: How to prevent jailbreaks, data poisoning and more
Written by
Published on
Join our daily newsletter for breaking news, product launches and deals, research breakdowns, and other industry-leading AI coverage
Join Now

AI security is evolving rapidly, with recent incidents involving DeepSeek, Google, and Microsoft highlighting critical vulnerabilities and security challenges in generative AI systems.

Recent developments; Major players in the tech industry have released significant findings about AI security threats and defensive measures.

  • DeepSeek’s app store success was quickly followed by Wiz’s discovery of basic developer errors in their system
  • Google published research on adversarial misuse of generative AI
  • Microsoft released findings from red teaming 100 generative AI products, emphasizing how AI amplifies existing security risks

Priority security areas; Organizations must focus on three key areas to effectively secure their AI systems and protect against emerging threats.

  • Securing user interactions with AI systems, including both employee and customer usage
  • Protecting applications that serve as gateways to AI systems
  • Safeguarding the underlying AI models themselves, though model-specific attacks remain primarily academic for now

Implementation strategy; Security leaders should follow a practical, prioritized approach to AI security implementation.

  • Begin by securing user-facing prompts to protect against immediate risks like prompt injection and data leakage
  • Conduct comprehensive discovery of AI implementations across the organization’s technology infrastructure
  • Address model security as a longer-term priority, particularly for industries outside of technology, financial services, healthcare, and government

Technical considerations; AI security requires a multi-layered approach to protect against various attack vectors.

  • Bidirectional security controls must examine both user inputs and system outputs
  • Application security becomes more complex due to the increased volume of code and apps resulting from AI integration
  • Data protection underlies all security measures, requiring both traditional and novel approaches to data governance

Industry implications; The rapid evolution of AI security threats requires organizations to balance immediate defensive measures with long-term security planning.

  • Customer and employee-facing AI systems often exist within organizations before security teams become aware
  • “Bring Your Own AI” trends, exemplified by DeepSeek’s popularity, create additional security challenges
  • Security measures must adapt to both existing and emerging threats in the AI landscape

Security landscape assessment: While immediate focus should be on securing user interactions and applications, organizations must remain vigilant about emerging threats to AI models while building comprehensive security frameworks that can evolve with the technology.

AI and ML Security: Preventing Jailbreaks, Drop Tables, and Data Poisoning

Recent News

College-educated Americans earn up to $1,000 weekly fixing AI responses

College graduates find lucrative opportunities in Silicon Valley's latest niche: fixing chatbots' grammar and tone to sound more natural.

Insta-pop: New open source AI DiffRhythm creates complete songs in just 10 seconds

Chinese researchers unveil an AI model that generates fully synchronized songs with vocals from just lyrics and style prompts in seconds.

New open-source math AI model delivers high performance for just $1,000

An open-source AI model matches commercial rivals at solving complex math problems while slashing typical training costs to just $1,000.