Opinion: Ex-NSA Chief Joining OpenAI Board Will Lead to Further Weaponization of AI

The appointment of retired General Paul Nakasone to OpenAI’s board marks a significant shift in the company’s alignment with national security interests, raising concerns about the potential weaponization of AI.

Blurring lines between Big Tech and government: OpenAI’s move follows a trend of tech giants like Amazon, Google, and Microsoft increasingly aligning themselves with governmental and military agendas under the guise of security:

  • Advanced AI systems intended for defensive purposes could evolve into tools for mass surveillance, monitoring citizens’ online activities and communications.
  • OpenAI may leverage its data analytics capabilities to shape public discourse, and some observers argue this is already occurring.

Financial ties fueling expansion: U.S. military and intelligence contracts awarded to major tech firms from 2019 to 2022 totaled at least $53 billion, a funding stream that could fuel OpenAI’s expansion into defense and surveillance technologies.

  • In April 2024, OpenAI CEO Sam Altman and other AI leaders were recruited to join a new federal Artificial Intelligence Safety and Security Board.
  • This board brings together fierce competitors to ensure AI works in the national interest, but the national interest often diverges from the interests of individual citizens.

The revolving door between tech and government: There have been several high-profile instances of individuals moving between positions at Big Tech companies and the U.S. government:

  • Executives from Google, Microsoft, and Amazon have taken on roles in the White House and federal agencies, while former government officials have gone on to work for these tech giants.
  • Notable examples include Jay Carney (Amazon/Obama administration) and Eric Schmidt (Google/Department of Defense advisor).

OpenAI’s shifting policies: OpenAI’s usage policies originally prohibited military applications outright, but the company quietly revised them to permit military use cases it deems acceptable.

  • This shift is significant, comparable to Google quietly dropping “don’t be evil” from the preamble of its code of conduct.
  • The weaponization of AI is just beginning, with AI set to rewrite society in every possible way, from healthcare and education to law enforcement and national defense.

Broader implications: The embrace between Big Tech and Big Government raises important questions about the alignment of corporate interests with the public good and the potential erosion of individual privacy and freedoms in the name of national security. As OpenAI and other tech giants deepen their ties with the government and military, it is crucial to critically examine the implications and ensure proper oversight and accountability to prevent the misuse of AI for surveillance and control.
