The dangerous psychology of why we treat AI like humans—and the risks involved

The proliferation of human-like AI interfaces risks creating deceptive emotional connections between users and their virtual assistants. As chatbots like ChatGPT and Siri become increasingly sophisticated at mimicking human conversation, our psychological tendency to anthropomorphize non-human entities creates a dangerous blind spot, one that can lead to misplaced trust, emotional dependence, and distorted perceptions of machine capabilities. Understanding the psychological mechanisms behind this phenomenon is crucial as AI becomes further integrated into daily life.

The big picture: Anthropomorphism—attributing human characteristics to non-human entities—is a well-documented psychological shortcut that becomes particularly problematic with advanced AI systems.

  • Even early AI systems like ELIZA (1966) demonstrated how easily humans form emotional connections with machines, with its creator, Joseph Weizenbaum, noting his own inclination toward emotional attachment.
  • Modern examples include describing AlphaZero’s chess-playing as “intuitive” and “romantic,” language that incorrectly implies the AI possesses human-like intentions and feelings.

Why this matters: The tendency to humanize AI systems creates false expectations about their capabilities and can lead to harmful dependencies.

  • People increasingly rate ChatGPT’s responses as more empathetic than those from actual humans, despite AI’s fundamental inability to genuinely understand or feel emotions.
  • In extreme cases, this misplaced trust has had tragic consequences, including at least one instance where a person took their own life after following advice from an AI chatbot.

The dangers: Anthropomorphizing AI systems creates four significant risks that undermine proper technology use.

  • False expectations lead users to assume AI possesses qualities like empathy, moral judgment, or creativity that algorithms fundamentally cannot achieve.
  • Emotional dependency can develop as users replace challenging human interactions with seemingly understanding AI companions.
  • Distorted understanding occurs when people confuse what AI is actually doing (following algorithms) with what it appears to be doing (thinking and feeling).
  • Language choices that frame AI as a subject rather than an object embed anthropomorphic perceptions in our subconscious, even when we intellectually know better.

Where we go from here: The article proposes an “A-Frame” approach to maintain human agency in AI interactions.

  • Awareness: Recognize that AI systems operate on algorithms and lack true emotional capabilities.
  • Appreciation: Prioritize genuine human connections over AI interactions.
  • Acceptance: Evaluate AI accuracy before relying on it for important decisions.
  • Accountability: Take responsibility for outcomes resulting from AI interactions rather than deflecting to the technology.

Reading between the lines: Even terminology like “artificial intelligence” creates misleading parallels to human reasoning, encouraging anthropomorphization.

  • The author suggests we view AI simply as “useful” without attributing qualities like strategic thinking, kindness, or wisdom.
  • Maintaining “deliberate distance” from AI requires conscious effort and recognition of our own agency in decision-making.

Source article: Are You at Risk of Developing Feelings for Your Chatbot?
