Pentagon seeks AI to automate overseas propaganda campaigns for real-time flexibility

The Pentagon is seeking machine-learning technology to create and distribute AI-generated propaganda campaigns overseas that can “suppress dissenting arguments” and “influence foreign target audiences,” according to a U.S. Special Operations Command document obtained by The Intercept. This represents a significant escalation in military information warfare capabilities, with SOCOM specifically requesting contractors who can provide “agentic AI or multi-LLM agent systems” to automate large-scale influence operations in real-time.

What they’re seeking: SOCOM wants automated systems that can scrape internet content, analyze situations, and respond with propaganda messages aligned with military objectives.

  • The document calls for technology that can “respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”
  • The system should also “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages” to create more targeted messaging.
  • SOCOM anticipates using these systems to simulate how propaganda will be received by creating “comprehensive models of entire societies” for testing campaigns.

The broader context: This capability request emerges amid growing concerns about foreign adversaries using similar AI-powered influence operations.

  • OpenAI reported in May 2024 that Iranian, Chinese, and Russian actors attempted to use the company’s tools for covert influence campaigns, though none proved particularly successful.
  • Recent New York Times reporting highlighted GoLaxy, Chinese software that can scan social media and produce bespoke propaganda campaigns, which has allegedly undertaken influence campaigns in Hong Kong and Taiwan.
  • A 2024 academic study found that language models can generate text “nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”

Why this matters: The Pentagon’s pursuit of AI propaganda tools reflects a significant shift toward automated information warfare at unprecedented scale.

  • Current information environments “move too fast for military members to adequately engage and influence an audience on the internet,” according to the document.
  • SOCOM believes having such programs will “enable us to control narratives and influence audiences in real time.”
  • While Pentagon policy prohibits targeting U.S. audiences, the “porous nature of the internet makes that difficult to ensure,” the document acknowledges.

What experts are saying: Security researchers and former officials express mixed views on the effectiveness and risks of AI-powered propaganda.

  • “AI tends to make these campaigns stupider, not more effective,” warned Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab.
  • Heidy Khlaaf from the AI Now Institute, a research organization focused on AI’s social implications, cautioned that “offensive and defensive uses are really two sides of the same coin and would allow them to use it precisely in the same way that adversaries do.”
  • William Marcellino from RAND Corporation, a policy research organization, argued that “countering those campaigns likely requires AI at-scale responses” given similar efforts by China and Russia.

Historical precedent: The military has previously conducted clandestine information operations with mixed results.

  • In 2024, Reuters revealed the Pentagon operated an anti-vaccination social media campaign targeting Chinese COVID vaccines in Asia, describing WHO-approved shots as “fake” and untrustworthy.
  • A 2022 investigation uncovered a network of Twitter and Facebook accounts secretly operated by U.S. Central Command pushing anti-Russia and anti-Iran content, which failed to gain traction and became an embarrassment for the Pentagon.

The technical challenge: SOCOM’s document acknowledges the inherent risks of deploying autonomous AI systems for propaganda.

  • So-called “agentic” systems are marketed as operating with minimal human oversight but remain “error-prone and unpredictable,” according to Khlaaf.
  • The tendency of large language models to fabricate information could prove “a major liability” when tasked with understanding complex foreign populations.
  • Security vulnerabilities in these systems could allow adversaries to compromise or redirect propaganda campaigns in unintended ways.
