Geoffrey Hinton, the Nobel Prize-winning “godfather of AI,” has proposed giving artificial intelligence systems “maternal instincts” to prevent them from harming humans. Philosopher and cognitive scientist Paul Thagard argues this approach is fundamentally flawed because computers lack the biological mechanisms necessary for genuine care, making government regulation a more viable path to AI safety.
Why this matters: As AI systems become increasingly powerful, the debate over how to control them has intensified, with leading researchers proposing strategies ranging from biologically inspired safeguards to direct regulatory oversight.
The core argument: Thagard contends that maternal caring requires specific biological foundations that computers simply cannot possess.
- Maternal care depends on chemical mechanisms including oxytocin (the “bonding hormone”), prolactin (which triggers milk production), estrogen, progesterone, and dopamine
- These chemicals activate neural circuits in brain areas such as the medial preoptic area (MPOA) hub, the nucleus accumbens, the amygdala, and the insula during pregnancy, lactation, and infant interaction
- Current AI models run on artificial neural networks implemented on computer chips in data centers, hardware that entirely lacks these biological mechanisms
What AI models themselves say: Thagard tested his hypothesis by asking ChatGPT, Grok, Claude, and Gemini about maternal care mechanisms.
- All four models provided detailed explanations of the chemical and neural processes involved in parental care
- Each model acknowledged that current AI systems completely lack these biological mechanisms
- The models recognized the difference between simulating parental behavior and actually experiencing parental feelings
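The article doesn’t say how Thagard posed his question (presumably through each chatbot’s ordinary web interface), but the comparison is straightforward to reproduce in code. Below is a minimal sketch, assuming the official OpenAI and Anthropic Python SDKs; the model identifiers, the xAI endpoint, and the question wording are illustrative assumptions, not details from the article.

```python
# Minimal sketch: pose the same question to several chat models and compare
# their answers. Model names, the xAI base URL, and the question wording are
# illustrative assumptions, not details from Thagard's article.
import os

from openai import OpenAI
from anthropic import Anthropic

QUESTION = (
    "What chemical and neural mechanisms underlie maternal care in mammals, "
    "and do current AI systems possess any of them?"
)

def ask_openai_compatible(base_url: str | None, api_key: str, model: str) -> str:
    """Query an OpenAI-compatible endpoint (OpenAI itself, or xAI's Grok)."""
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    return resp.choices[0].message.content

def ask_claude(api_key: str, model: str) -> str:
    """Query Anthropic's Messages API."""
    client = Anthropic(api_key=api_key)
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": QUESTION}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    answers = {
        "ChatGPT": ask_openai_compatible(
            None, os.environ["OPENAI_API_KEY"], "gpt-4o"
        ),
        "Grok": ask_openai_compatible(
            "https://api.x.ai/v1", os.environ["XAI_API_KEY"], "grok-2-latest"
        ),
        "Claude": ask_claude(
            os.environ["ANTHROPIC_API_KEY"], "claude-3-5-sonnet-latest"
        ),
    }
    for name, text in answers.items():
        print(f"--- {name} ---\n{text}\n")
```

Because xAI exposes an OpenAI-compatible endpoint, a single helper covers both ChatGPT and Grok; Gemini would require Google’s own SDK and is omitted here for brevity.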
Alternative regulatory approach: Rather than relying on engineered emotional constraints, Thagard advocates direct government regulation through specific commandments.
- Do not allow AI systems to be fully autonomous or beyond human supervision
- Do not allow AI systems to control humans or eradicate most human jobs
- Do not give AI systems control over weapons, especially nuclear and bioweapons
- Do not allow AI systems to achieve superintelligence or contribute to misinformation
The bigger picture: This debate reflects broader tensions in AI safety between those seeking technical solutions and those favoring regulatory approaches.
- Companies developing AI are “so engaged in competing with each other to produce smarter and faster models that they cannot be trusted to avoid producing dangerous systems”
- Most major AI companies have persuaded US leaders to forgo needed legislation by citing concerns about foreign competition
- Thagard’s new book “Dreams, Jokes, and Songs” provides additional arguments for why AI models lack conscious feelings and are unlikely to acquire them