Opinion: Ex-NSA Chief Joining OpenAI Board Will Lead to Further Weaponization of AI

The appointment of retired General Paul Nakasone to OpenAI’s board marks a significant shift in the company’s alignment with national security interests, raising concerns about the potential weaponization of AI.

Blurring the lines between Big Tech and government: OpenAI’s move follows a trend of tech giants like Amazon, Google, and Microsoft increasingly aligning themselves with governmental and military agendas under the banner of security:

  • Advanced AI systems intended for defensive purposes could evolve into tools for mass surveillance, monitoring citizens’ online activities and communications.
  • OpenAI may capitalize on its data analytics capabilities to shape public discourse, with some suggesting this is already occurring.

Financial ties fueling expansion: U.S. military and intelligence contracts awarded to major tech firms between 2019 and 2022 totaled at least $53 billion, a funding stream that could draw OpenAI deeper into defense and surveillance technologies.

  • In April 2024, OpenAI CEO Sam Altman and other AI leaders were recruited to join a new federal Artificial Intelligence Safety and Security Board.
  • This board brings together fierce competitors to ensure AI works in the national interest, but the national interest often diverges from the interests of individual citizens.

The revolving door between tech and government: There have been several high-profile instances of individuals moving between positions at Big Tech companies and the U.S. government:

  • Executives from Google, Microsoft, and Amazon have taken on roles in the White House and federal agencies, while former government officials have gone on to work for these tech giants.
  • Notable examples include Jay Carney (Amazon/Obama administration) and Eric Schmidt (Google/Department of Defense advisor).

OpenAI’s shifting policies: OpenAI’s usage policies originally prohibited military use of its models, but the language was quietly changed to permit military use cases the company deems acceptable.

  • This shift is significant, echoing Google’s quiet removal of its “don’t be evil” motto from the preface of its code of conduct.
  • The weaponization of AI is just beginning, with AI set to rewrite society in every possible way, from healthcare and education to law enforcement and national defense.

Broader implications: The embrace between Big Tech and Big Government raises important questions about the alignment of corporate interests with the public good and the potential erosion of individual privacy and freedoms in the name of national security. As OpenAI and other tech giants deepen their ties with the government and military, it is crucial to critically examine the implications and ensure proper oversight and accountability to prevent the misuse of AI for surveillance and control.

