OpenAI and Andreessen Horowitz’s $43 million investment in Adaptive Security marks a significant pivot in cybersecurity strategy, toward human-centered security testing rather than purely technical defenses. The deal, OpenAI Startup Fund’s first in cybersecurity, signals growing concern about AI-powered social engineering: Adaptive Security uses artificial intelligence to test employees’ ability to recognize scams through simulated phishing attempts across multiple communication channels, an approach built on the recognition that most security breaches stem from human error rather than technical vulnerabilities.
The big picture: OpenAI and Andreessen Horowitz are leading a $43 million Series A funding round for Adaptive Security, an AI cybersecurity startup that tests and trains employees to identify suspicious behavior.
How it works: Unlike traditional cybersecurity tools that harden systems against intrusion, Adaptive Security actively attempts to trick employees with AI-generated scams across multiple channels to surface human vulnerabilities.
Why this matters: The investment acknowledges that human error represents the primary vulnerability in most enterprise security systems, especially as generative AI enables more sophisticated and frequent scam attempts.
What’s next: According to CEO Brian Long, the company will use the new funding to expand its engineering team to stay ahead of emerging threats and scamming techniques.