Microsoft has uncovered alarming evidence of Iranian state-sponsored groups using artificial intelligence to create fake news sites aimed at influencing US voters ahead of the 2024 election. This revelation comes amid growing concerns about foreign interference in democratic processes and the potential misuse of AI technologies.

The AI-powered disinformation campaign: Microsoft alleges that Iranian actors are leveraging generative AI to create convincing fake news articles on professional-looking websites targeting both liberal and conservative audiences in the United States.

  • Two examples of these fake news sites are “Nio Thinker,” which appears to cater to left-leaning readers, and “Savannah Time,” which purports to be a conservative news source.
  • These sites feature polished designs but lack legitimate contact information and attribute articles to generic “staff” rather than named journalists, raising red flags about their authenticity.
  • The content on these sites is believed to be AI-generated, often plagiarizing and rewriting articles from legitimate US publications to create a veneer of credibility.

The suspected actors: Microsoft has identified specific groups and tactics involved in this sophisticated disinformation effort, pointing to a coordinated attempt to manipulate US public opinion.

  • An Iranian group known as “Storm-2035” is suspected of being behind the creation and dissemination of these fake news sites, potentially using social media to amplify their reach.
  • Another Iranian entity is allegedly impersonating activist groups to “stoke chaos, undermine trust in authorities, and sow doubt about election integrity.”
  • Microsoft has also uncovered evidence of hackers, believed to be associated with Iran’s Islamic Revolutionary Guard Corps, targeting a high-ranking official in a US presidential campaign with phishing emails.

Broader international interference: The Iranian efforts are part of a larger landscape of foreign attempts to influence US elections, with other major state actors also implicated.

  • Microsoft’s report highlights ongoing activities by Russian and Chinese groups using social media propaganda to sway US voters and shape public discourse.
  • These revelations underscore the complex and multi-faceted nature of foreign interference in democratic processes, with various countries employing diverse tactics to achieve their goals.

Microsoft’s role and motivations: By sharing this intelligence, Microsoft aims to raise awareness and bolster defenses against foreign influence campaigns.

  • The tech giant’s decision to publicize these findings demonstrates the private sector’s increasingly proactive role in identifying and combating cyber threats and disinformation.
  • Microsoft’s report serves as a warning to voters, policymakers, and tech companies about the evolving nature of election interference and the need for heightened vigilance.

The intersection of AI and disinformation: The use of generative AI in creating fake news sites represents a concerning evolution in the spread of misinformation and propaganda.

  • AI-generated content can be produced rapidly and at scale, potentially overwhelming fact-checkers and content moderators.
  • The ability of AI to mimic authentic writing styles and create convincing fake articles poses new challenges for identifying and countering disinformation.

Implications for election integrity: These revelations raise significant concerns about the potential impact on the upcoming US elections and the broader democratic process.

  • The targeting of both liberal and conservative audiences suggests a sophisticated attempt to exploit existing political divisions and polarization within the United States.
  • The use of AI to create seemingly credible news sources could further erode trust in media and make it more difficult for voters to discern reliable information.

The evolving landscape of cyber threats: Microsoft’s findings highlight the need for continued vigilance and adaptation in the face of increasingly sophisticated cyber and information warfare tactics.

  • As state actors and other malicious entities adopt new technologies, the methods for detecting and countering these threats must also evolve.
  • Collaboration between tech companies, government agencies, and cybersecurity experts will be crucial in developing effective countermeasures against AI-powered disinformation campaigns.

Looking ahead: As AI technologies continue to advance, safeguarding election security and protecting democratic processes from foreign interference is likely to become increasingly complex.

  • Policymakers and tech leaders will need to grapple with the ethical and practical implications of AI-generated content in political discourse.
  • Educating voters about the existence and nature of these sophisticated disinformation tactics will be critical in building resilience against foreign influence campaigns.
