Opinion: Ex-NSA Chief Joining OpenAI Board Will Lead to Further Weaponization of AI

The appointment of retired General Paul Nakasone to OpenAI’s board marks a significant shift in the company’s alignment with national security interests, raising concerns about the potential weaponization of AI.

Blurring lines between Big Tech and government: OpenAI’s move follows a trend of tech giants like Amazon, Google, and Microsoft increasingly aligning themselves with governmental and military agendas under the guise of security:

  • Advanced AI systems intended for defensive purposes could evolve into tools for mass surveillance, monitoring citizens’ online activities and communications.
  • OpenAI could leverage its data analytics capabilities to shape public discourse, and some critics argue this is already happening.

Financial ties fueling expansion: U.S. military and intelligence contracts awarded to major tech firms from 2019-2022 totaled at least $53 billion, potentially fueling OpenAI’s expansion into defense and surveillance technologies.

  • In April 2024, OpenAI CEO Sam Altman and other AI leaders were recruited to join a new federal Artificial Intelligence Safety and Security Board.
  • This board brings together fierce competitors to ensure AI works in the national interest, but the national interest often diverges from the interests of individual citizens.

The revolving door between tech and government: There have been several high-profile instances of individuals moving between positions at Big Tech companies and the U.S. government:

  • Executives from Google, Microsoft, and Amazon have taken on roles in the White House and federal agencies, while former government officials have gone on to work for these tech giants.
  • Notable examples include Jay Carney (Amazon/Obama administration) and Eric Schmidt (Google/Department of Defense advisor).

OpenAI’s shifting policies: OpenAI’s usage policies originally prohibited military applications of its models, but the language was quietly revised to permit military use cases the company deems acceptable.

  • This shift is significant, comparable to Google quietly demoting its “don’t be evil” motto within its code of conduct.
  • The weaponization of AI is just beginning, with AI set to rewrite society in every possible way, from healthcare and education to law enforcement and national defense.

Broader implications: The embrace between Big Tech and Big Government raises important questions about the alignment of corporate interests with the public good and the potential erosion of individual privacy and freedoms in the name of national security. As OpenAI and other tech giants deepen their ties with the government and military, it is crucial to critically examine the implications and ensure proper oversight and accountability to prevent the misuse of AI for surveillance and control.

