
AI regulation urgency: Gary Marcus, a prominent AI researcher and critic, calls for increased public pressure to regulate the rapidly advancing field of generative AI, highlighting concerns about its potential impact on democracy and creative professions.

  • Marcus, a professor emeritus at New York University and serial entrepreneur, argues that Silicon Valley has strayed far from its “Don’t be evil” ethos and is becoming increasingly powerful with minimal constraints.
  • He draws parallels between the need for AI regulation and successful public health campaigns against smoking, suggesting that similar pressure is required to protect citizens from invasive and problematic AI technologies.

Key concerns and threats: The proliferation of automatically generated misinformation and deepfakes poses a significant threat to democracy, according to Marcus, who identifies this as the most troubling issue surrounding AI development.

  • Marcus distinguishes between individual free speech and mass-produced misinformation, arguing that the latter should be treated differently due to its potential for large-scale manipulation of public opinion.
  • The ease of disseminating information without accountability or transparency exacerbates concerns about impersonation, fraud, and bias in AI-generated content.

Generative AI market saturation: Marcus expresses skepticism about the widespread integration of generative AI into tech platforms, noting that while summarization has its uses, the technology is not entirely reliable and has become a commodity.

  • He observes a shift from the hype surrounding generative AI in 2023 to growing disillusionment in 2024 as companies struggle to recoup their substantial investments in the technology.
  • Marcus points out that while traditional AI applications like web search and GPS navigation have proven useful, many generative AI applications have been overhyped and face limitations in reliability.

Creative work and AI: Marcus warns that AI increasingly lets wealth access skill while making it harder for skilled individuals to access wealth, a concern that is especially acute in creative industries.

  • Marcus expresses deep concern about the large-scale appropriation of creative work by generative AI companies, warning that this trend could extend to other professions if left unchecked.
  • He anticipates that generative AI companies will likely be forced to license their raw materials, similar to streaming services, which he considers a positive outcome.

Transparency and regulation: Marcus advocates for increased transparency in AI development, including the disclosure of training data for models that affect the public.

  • He emphasizes the importance of understanding the contents of AI models to mitigate potential harms and address issues such as bias.
  • Marcus expresses pessimism about the prospects for meaningful AI regulation in the United States, noting that U.S. citizens have far less protection around privacy and AI compared to their European counterparts.

Call to action: To address these concerns, Marcus proposes a potential boycott of generative AI technologies to push for better regulation and responsible development.

  • He urges citizens to speak up more loudly and take action to ensure that AI technologies are developed and deployed in a manner that serves the public interest.
  • Marcus’s book, “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” aims to encourage citizen engagement and promote a more responsible approach to AI development.

Looking ahead: As the debate over AI regulation and its societal impact intensifies, Marcus’s call for increased public awareness and action highlights the growing need for a balanced approach to technological innovation and ethical considerations in the rapidly evolving field of artificial intelligence.

Recent Stories

Oct 17, 2025

DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment

The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...

Oct 17, 2025

Tying it all together: Credo’s purple cables power the $4B AI data center boom

Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300 and $500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...

Oct 17, 2025

Vatican launches Latin American AI network for human development

The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...