Opinion: Ex-NSA Chief Joining OpenAI Board Will Lead to Further Weaponization of AI

The appointment of retired General Paul Nakasone to OpenAI’s board marks a significant shift in the company’s alignment with national security interests, raising concerns about the potential weaponization of AI.

Blurring lines between Big Tech and government: OpenAI’s move follows a trend of tech giants like Amazon, Google, and Microsoft increasingly aligning themselves with governmental and military agendas under the guise of security:

  • Advanced AI systems intended for defensive purposes could evolve into tools for mass surveillance, monitoring citizens’ online activities and communications.
  • OpenAI’s data analytics capabilities could be leveraged to shape public discourse; some argue this is already happening.

Financial ties fueling expansion: U.S. military and intelligence contracts awarded to major tech firms from 2019-2022 totaled at least $53 billion, potentially fueling OpenAI’s expansion into defense and surveillance technologies.

  • In April 2024, OpenAI CEO Sam Altman and other AI leaders were recruited to join a new federal Artificial Intelligence Safety and Security Board.
  • This board brings together fierce competitors to ensure AI works in the national interest, but the national interest often diverges from the interests of individual citizens.

The revolving door between tech and government: There have been several high-profile instances of individuals moving between positions at Big Tech companies and the U.S. government:

  • Executives from Google, Microsoft, and Amazon have taken on roles in the White House and federal agencies, while former government officials have gone on to work for these tech giants.
  • Notable examples include Jay Carney (Amazon/Obama administration) and Eric Schmidt (Google/Department of Defense advisor).

OpenAI’s shifting policies: OpenAI’s usage policies originally prohibited military use but have been quietly changed to allow military use cases the company deems acceptable.

  • This shift is significant, comparable to Google quietly demoting the “don’t be evil” clause in its code of conduct.
  • The weaponization of AI is just beginning, with AI set to rewrite society in every possible way, from healthcare and education to law enforcement and national defense.

Broader implications: The embrace between Big Tech and Big Government raises important questions about the alignment of corporate interests with the public good and the potential erosion of individual privacy and freedoms in the name of national security. As OpenAI and other tech giants deepen their ties with the government and military, it is crucial to critically examine the implications and ensure proper oversight and accountability to prevent the misuse of AI for surveillance and control.

