Opinion: Ex-NSA Chief Joining OpenAI Board Will Lead to Further Weaponization of AI

The appointment of retired General Paul Nakasone to OpenAI’s board marks a significant shift in the company’s alignment with national security interests, raising concerns about the potential weaponization of AI.

Blurring lines between Big Tech and government: OpenAI’s move follows a trend of tech giants like Amazon, Google, and Microsoft increasingly aligning themselves with governmental and military agendas under the guise of security:

  • Advanced AI systems intended for defensive purposes could evolve into tools for mass surveillance, monitoring citizens’ online activities and communications.
  • OpenAI could leverage its data analytics capabilities to shape public discourse; some critics argue this is already happening.

Financial ties fueling expansion: U.S. military and intelligence contracts awarded to major tech firms from 2019-2022 totaled at least $53 billion, potentially fueling OpenAI’s expansion into defense and surveillance technologies.

  • In April 2024, OpenAI CEO Sam Altman and other AI leaders were recruited to join a new federal Artificial Intelligence Safety and Security Board.
  • This board brings together fierce competitors to ensure AI works in the national interest, but the national interest often diverges from the interests of individual citizens.

The revolving door between tech and government: There have been several high-profile instances of individuals moving between positions at Big Tech companies and the U.S. government:

  • Executives from Google, Microsoft, and Amazon have taken on roles in the White House and federal agencies, while former government officials have gone on to work for these tech giants.
  • Notable examples include Jay Carney (Amazon/Obama administration) and Eric Schmidt (Google/Department of Defense advisor).

OpenAI’s shifting policies: OpenAI’s usage policies originally prohibited military applications outright, but the language was quietly revised to permit military use cases the company deems acceptable.

  • This shift is significant, comparable to when Google quietly dropped “don’t be evil” from the preamble of its code of conduct.
  • The weaponization of AI is only beginning; the technology is poised to reshape every corner of society, from healthcare and education to law enforcement and national defense.

Broader implications: The embrace between Big Tech and Big Government raises important questions about the alignment of corporate interests with the public good and the potential erosion of individual privacy and freedoms in the name of national security. As OpenAI and other tech giants deepen their ties with the government and military, it is crucial to critically examine the implications and ensure proper oversight and accountability to prevent the misuse of AI for surveillance and control.

Are We Witnessing the Weaponization of AI?
