The Pentagon is seeking machine-learning technology to create and distribute AI-generated propaganda campaigns overseas that can “suppress dissenting arguments” and “influence foreign target audiences,” according to a U.S. Special Operations Command document obtained by The Intercept. This represents a significant escalation in military information warfare capabilities, with SOCOM specifically requesting contractors who can provide “agentic AI or multi-LLM agent systems” to automate large-scale influence operations in real time.
What they’re seeking: SOCOM wants automated systems that can scrape internet content, analyze situations, and respond with propaganda messages aligned with military objectives.
- The document calls for technology that can “respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”
- The system should also “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages” to create more targeted messaging.
- SOCOM also anticipates testing campaigns in advance by building “comprehensive models of entire societies” to simulate how propaganda will be received.
The broader context: This capability request emerges amid growing concerns about foreign adversaries using similar AI-powered influence operations.
- OpenAI reported in May 2024 that Iranian, Chinese, and Russian actors attempted to use the company’s tools for covert influence campaigns, though none proved particularly successful.
- Recent New York Times reporting highlighted GoLaxy, a Chinese firm whose software can scan social media and produce bespoke propaganda, and which has allegedly run influence campaigns in Hong Kong and Taiwan.
- A 2024 academic study found that language models can generate text “nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”
Why this matters: The Pentagon’s pursuit of AI propaganda tools reflects a significant shift toward automated information warfare at unprecedented scale.
- Current information environments “move too fast for military members to adequately engage and influence an audience on the internet,” according to the document.
- SOCOM believes having such programs will “enable us to control narratives and influence audiences in real time.”
- While Pentagon policy prohibits targeting U.S. audiences, the “porous nature of the internet makes that difficult to ensure,” the document acknowledges.
What experts are saying: Security researchers and former officials express mixed views on the effectiveness and risks of AI-powered propaganda.
- “AI tends to make these campaigns stupider, not more effective,” warned Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab.
- Heidy Khlaaf from the AI Now Institute, a research organization focused on AI’s social implications, cautioned that “offensive and defensive uses are really two sides of the same coin and would allow them to use it precisely in the same way that adversaries do.”
- William Marcellino from RAND Corporation, a policy research organization, argued that “countering those campaigns likely requires AI at-scale responses” given similar efforts by China and Russia.
Historical precedent: The military has previously conducted clandestine information operations with mixed results.
- In 2024, Reuters revealed the Pentagon had run a covert anti-vaccination social media campaign in Asia targeting Chinese COVID vaccines, describing the WHO-approved shots as “fake” and untrustworthy.
- A 2022 investigation uncovered a network of Twitter and Facebook accounts secretly operated by U.S. Central Command pushing anti-Russia and anti-Iran messaging, which failed to gain traction and became an embarrassment for the Pentagon.
The technical challenge: SOCOM’s document acknowledges the inherent risks of deploying autonomous AI systems for propaganda.
- So-called “agentic” systems are marketed as operating with minimal human oversight but remain “error-prone and unpredictable,” according to Khlaaf.
- Large language models’ tendency to fabricate information could prove “a major liability” for systems tasked with understanding complex foreign populations.
- Security vulnerabilities in these systems could allow adversaries to compromise or redirect propaganda campaigns in unintended ways.