Generative AI’s rapid rise has sparked concerns about its societal impact and the ethical implications of Silicon Valley’s push for artificial general intelligence (AGI).

The big picture: Gary Marcus, NYU professor emeritus and longtime AI critic, argues that Silicon Valley's moral decline and focus on short-term gains have produced flawed generative AI systems with potentially dire consequences.

  • Marcus’s new book, “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” highlights the immediate threats posed by current generative AI technology, including political disinformation, market manipulation, and cybersecurity risks.
  • The author traces this shift in Silicon Valley’s priorities back to the 2008 financial crisis, which he claims led to a focus on value extraction and startup valuations over sustainable business models.

Key concerns and criticisms: Marcus expresses skepticism about the current state of AI and its potential for achieving artificial general intelligence in the near future.

  • He argues that major AI companies are overpromising on AGI capabilities while their current models still struggle with basic tasks like tic-tac-toe and chess.
  • Marcus criticizes the lack of regulation in the AI industry, drawing a comparison to the airline industry, where he argues oversight has enabled both safety and innovation.
  • The author questions whether current generative AI technologies are a net positive for humanity, citing concerns about energy consumption, environmental impact, and potential misuse.

Industry reactions and competitive landscape: Marcus offers insights into the different approaches taken by major tech companies in the AI space.

  • While critical of OpenAI, Google, and Meta, Marcus takes a more favorable view of Apple, suggesting that the company’s business model is less reliant on exploiting personal information.
  • He notes that some big tech firms have faced backlash for their AI products, citing examples like Meta’s AI-generated images and Microsoft’s Recall feature.
  • Marcus questions why these companies continue to push generative AI features on users, often without opt-out options, despite the technology’s limitations and potential drawbacks.

Navigating the AI-dominated web: As search engines increasingly incorporate AI-generated content, Marcus offers advice for everyday users.

  • He suggests that people should “just say no” to unwanted AI systems and consider boycotting AI if tech companies don’t address concerns about climate impact and copyright violations.
  • Marcus emphasizes the importance of user choice and the need for companies to be more responsible in their AI deployments.

Regulation and innovation: The author challenges the notion that regulation inherently hinders innovation in the tech industry.

  • Marcus dismisses arguments against regulation as self-serving rhetoric from those prioritizing profit over societal well-being.
  • He advocates for holding companies accountable for the downsides of their technology, including misinformation, environmental harm, and potential discrimination in areas like job hiring.

Future outlook and potential solutions: Marcus calls for a shift in focus away from generative AI towards more reliable and beneficial AI technologies.

  • He cites examples like AlphaFold, Google Search, and GPS navigation as positive AI applications that offer tangible benefits to society.
  • The author suggests that holding companies responsible for the negative impacts of their AI systems could incentivize them to develop better approaches and technologies.

Broader implications: As AI continues to evolve and integrate into various aspects of our lives, Marcus’s warnings serve as a crucial reminder of the need for responsible development and deployment of these technologies.

  • The debate surrounding AI regulation and its impact on innovation is likely to intensify as policymakers grapple with the rapid advancements in the field.
  • Marcus’s call for citizen involvement and corporate accountability highlights the importance of public engagement in shaping the future of AI and its role in society.
