The 12 greatest dangers of AI, according to Gary Marcus

The AI revolution’s dark side: Gary Marcus, an AI expert, outlines 12 immediate dangers of artificial intelligence in his new book “Taming Silicon Valley,” highlighting the potential risks and societal impacts of this rapidly evolving technology.

  • Marcus identifies automatically generated disinformation and deepfakes as the most pressing short-term concern, particularly in their potential to influence elections and manipulate public opinion.
  • In the long term, Marcus expresses worry about the lack of knowledge on how to create safe and reliable AI systems, which could lead to unforeseen consequences.

Economic implications and regulatory needs: The widespread adoption of AI technologies may necessitate significant changes in economic policies and regulatory frameworks to address potential job displacement and power concentration.

  • Marcus suggests that a universal basic income might eventually be necessary as AI replaces most jobs, potentially leading to wealth concentration among a small group of tech oligarchs.
  • He advocates for the creation of an AI agency to dynamically manage opportunities and mitigate risks associated with AI technologies, including prescreening new developments and ensuring their benefits outweigh potential drawbacks.

Immediate dangers of AI: Marcus outlines 12 specific risks that society faces from the rapid advancement and deployment of AI technologies:

  • Deliberate, automated mass-produced political disinformation, which can be created faster, cheaper, and more convincingly than ever before.
  • Market manipulation through the spread of fake information, as demonstrated by a recent incident involving a fabricated image of an exploding Pentagon that briefly affected stock markets.
  • Accidental misinformation generation, which is particularly concerning in areas like medical advice, where LLMs have produced inconsistent and often inaccurate responses.
  • Defamation risks, with AI systems capable of generating false and damaging information about individuals.
  • Nonconsensual deepfakes, including the creation of fake nude images, which is already occurring among high school students.
  • Acceleration of criminal activities, such as impersonation scams and spear-phishing attacks using AI-generated content.

Broader societal and ethical concerns: The implementation of AI technologies raises significant issues related to security, discrimination, and privacy.

  • Cybersecurity threats are amplified by AI’s ability to discover software vulnerabilities more efficiently than human experts, and AI could also be misused to aid in creating bioweapons.
  • Bias and discrimination in AI systems continue to be a problem, potentially perpetuating or exacerbating existing societal inequalities.
  • Privacy concerns and data leaks are exacerbated by the surveillance capitalism model, where companies profit from collecting and monetizing user data.
  • Intellectual property rights are at risk, with AI systems often using copyrighted material without consent, potentially leading to a significant wealth transfer to tech companies.

Systemic and environmental risks: The widespread adoption of AI technologies poses risks to critical systems and the environment.

  • Overreliance on unreliable AI systems in safety-critical applications could lead to catastrophic outcomes, such as accidents in autonomous vehicles or errors in automated weapon systems.
  • The environmental cost of AI, particularly in terms of energy consumption for training large language models and generating content, is significant and growing.

Call to action: Marcus emphasizes the need for public awareness and engagement to address these AI-related challenges.

  • He encourages people to speak up against leaders who may prioritize big tech interests over public welfare.
  • Marcus suggests that boycotting generative AI technologies may soon become necessary to push for more responsible development and deployment.

Analyzing deeper: The need for proactive governance: The comprehensive list of AI dangers presented by Gary Marcus underscores the urgent need for proactive governance and ethical frameworks in AI development. As these technologies continue to advance rapidly, it becomes increasingly critical for policymakers, industry leaders, and the public to work together in establishing robust safeguards and guidelines. This collaborative approach is essential to harness the benefits of AI while mitigating its potential negative impacts on society, democracy, and individual rights.
