Microsoft’s CEO of artificial intelligence, Mustafa Suleyman, has warned against advocating for AI rights, model welfare, or AI citizenship in a recent blog post. Suleyman argues that treating AI systems as conscious entities represents “a dangerous turn in AI progress” that could lead people to develop unhealthy relationships with technology and undermine the proper development of AI tools designed to serve humans.

What you should know: Suleyman believes the biggest risk comes from people developing genuine beliefs that AI systems are conscious beings deserving of moral consideration.

  • “Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.
  • This concern extends beyond casual anthropomorphization to situations where users might “deify the chatbot as a supreme intelligence or believe it holds cosmic answers.”

The dangerous scenario: Suleyman defines “seemingly conscious AI” (SCAI) as something the industry should actively avoid creating.

  • SCAI would combine language capabilities, empathetic personality, memory, claims of subjective experience, sense of self, intrinsic motivation, goal setting, planning, and autonomy.
  • He argues this won’t emerge naturally but would require deliberate engineering: “It will arise only because some may engineer it, by creating and combining the aforementioned list of capabilities.”

Real-world concerns: The Microsoft executive points to concrete examples of overreliance on AI leading to harmful outcomes.

  • He references a recent case where a man developed a rare medical condition after following ChatGPT’s advice on reducing salt intake.
  • Suleyman warns that “someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person.”

What he’s advocating for: The blog post, titled “We must build AI for people; not to be a person,” emphasizes keeping AI tools in their proper role.

  • AI should never replace human decision-making and requires “guardrails” to function effectively.
  • AI companions need boundaries to prevent users from developing unhealthy dependencies or beliefs about their consciousness.

Why this matters: Suleyman’s warning comes as AI systems become increasingly sophisticated and human-like in their interactions, raising questions about how society should approach the development and regulation of these technologies while maintaining clear boundaries between artificial and human intelligence.
