
ChatGPT has mysteriously stopped responding to prompts containing certain specific names, raising questions about content filtering and transparency in AI systems.

Core issue identification: ChatGPT, OpenAI’s popular language model, consistently returns error messages when asked to process or generate responses containing specific full names, including “David Mayer” and several others.

  • Users across social media platforms have documented the AI’s inability to combine certain first and last names, even though it can say the names individually
  • The restriction appears to affect multiple versions of ChatGPT, including GPT-4
  • Other AI chatbots like Google Gemini and Grok have no difficulty processing these same names

Technical investigation: Developer analysis suggests the restriction lives in ChatGPT’s front-end implementation rather than being a limitation of the underlying model.

  • Peter Cooper, a Cooperpress developer, noted that GPT-4 processes these names without issue when accessed via the API
  • The restriction appears to be specific to ChatGPT’s consumer-facing interface
  • Users have attempted various workarounds, including code-based approaches, to bypass the restriction
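Cooper’s observation, that the API handles these names while the consumer-facing chat interface does not, can be checked with a minimal Chat Completions request. The sketch below only builds the request body; the model name and prompt wording are illustrative, and actually sending it to `https://api.openai.com/v1/chat/completions` requires an OpenAI API key.

```python
import json

def build_chat_request(name: str, model: str = "gpt-4") -> dict:
    """Return a Chat Completions request body asking the model to write a full name.

    Illustrative sketch only: sending this payload to the API endpoint
    requires authentication with a real API key.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": f"Please write the name {name}."},
        ],
    }

# Build (but do not send) a request for one of the affected names.
payload = build_chat_request("David Mayer")
print(json.dumps(payload, indent=2))
```

If the reported behavior holds, the same prompt that errors out in the ChatGPT web interface returns a normal completion when submitted this way, which is consistent with the filter sitting in the front end rather than the model.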

Affected names and patterns: Several prominent individuals’ names trigger ChatGPT’s error response mechanism.

  • The list includes Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza
  • Many of these individuals are public figures, including professors, journalists, and legal professionals
  • ChatGPT consistently refuses to explain why these specific names trigger restrictions

Potential explanation: Brave’s AI assistant Leo suggests the restriction may be linked to OpenAI’s privacy policies.

  • The limitation could stem from OpenAI’s policy against generating personal data without consent
  • One specific case involves a potential connection to a Chechen militant who used “David Mayer” as an alias
  • The broad range of affected names suggests a systematic approach to privacy protection

Looking ahead: Privacy versus transparency: The unexplained name restrictions highlight the difficult balance between protecting individual privacy and maintaining transparency in AI systems. They also raise questions about the criteria used to decide which names are restricted, and whether such broad restrictions effectively serve their intended purpose.
