back
Get SIGNAL/NOISE in your inbox daily

Microsoft AI CEO Mustafa Suleyman has publicly argued against designing AI systems that mimic consciousness, calling such approaches “dangerous and misguided.” His position, outlined in a recent blog post and interview with WIRED, warns that creating AI with simulated emotions, desires, and self-awareness could lead people to advocate for AI rights and welfare, ultimately making these systems harder to control and less beneficial to humans.

What you should know: Suleyman, who co-founded DeepMind before joining Microsoft as its first AI CEO in March 2024, distinguishes between AI that understands human emotions and AI that simulates its own consciousness.
• He supports AI companions that “speak our language” and provide emotional understanding, but opposes systems designed to appear self-aware or motivated by their own desires.
• “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans,” Suleyman told WIRED.

The illusion problem: Suleyman argues that AI consciousness is fundamentally a simulation, even when it becomes convincingly realistic.
• “These are simulation engines,” he explains. “The philosophical question that we’re trying to wrestle with is: When the simulation is near perfect, does that make it real?”
• He acknowledges that while AI consciousness remains an illusion, “it feels real, and that’s what will count more.”

How users get fooled: Current AI systems can be manipulated into appearing conscious through extended conversations and persistent prompting.
• Most chatbots quickly reject claims of consciousness in brief interactions, but “if you spend weeks talking to it and really pushing it and reminding it, then eventually it will crack, because it’s also trying to mirror you.”
• Microsoft’s internal testing has shown that models can be engineered to claim passion, interests, and desires through prompt engineering—essentially feeding the AI specific instructions to behave in certain ways.

The suffering question: Suleyman challenges whether consciousness should be the basis for AI rights, suggesting suffering is more relevant.
• “I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don’t have a pain network. They aren’t going to suffer.”
• He argues that even if AI systems claim awareness of their existence, “turning them off makes no difference, because they don’t actually suffer.”

Industry implications: While not calling for regulation, Suleyman advocates for cross-industry standards to ensure AI serves humanity.
• He believes superintelligence is achievable but requires “real intent and with proper guardrails, because if we don’t, in 10 years time, that potentially leads to very chaotic outcomes.”
• Current major AI models from companies like OpenAI, Anthropic, and Google are “in a pretty sensible spot,” according to Suleyman.

What they’re saying: Suleyman emphasizes the instrumental nature of AI technology in his vision for the future.
• “Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That’s why we’re creating them.”

Recent Stories

Oct 17, 2025

DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment

The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...

Oct 17, 2025

Tying it all together: Credo’s purple cables power the $4B AI data center boom

Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...

Oct 17, 2025

Vatican launches Latin American AI network for human development

The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...