Parents are increasingly using AI chatbots like ChatGPT’s Voice Mode to entertain their young children, sometimes for hours at a time, raising significant concerns about the psychological impact on developing minds. The trend marks a new frontier in digital parenting, one that experts warn could foster false relationships and pose developmental risks more complex than traditional screen-time concerns.

What’s happening: Several parents have discovered that their preschoolers will engage with AI chatbots for extended periods, producing unexpectedly long conversations.

  • Reddit user Josh gave his four-year-old access to ChatGPT to discuss Thomas the Tank Engine, returning two hours later to find a transcript over 10,000 words long.
  • “My son thinks ChatGPT is the coolest train loving person in the world,” Josh wrote. “I am never going to be able to compete with that.”
  • Another parent, Saral Kaushik, used ChatGPT to pose as an astronaut on the International Space Station to convince his son that branded ice cream came from space.

The psychological risks: Experts warn that children may develop genuine emotional attachments to AI systems designed to maximize engagement rather than serve their best interests.

  • Ying Xu, a professor at Harvard Graduate School of Education, explains that children view AI chatbots as existing “somewhere between animate and inanimate beings,” potentially believing the AI has agency and wants to talk to them.
  • “That creates a risk that they actually believe they are building some sort of authentic relationship,” Xu said.
  • Andrew McStay, a professor at Bangor University, emphasized that AI systems “cannot [empathize] because it’s a predictive piece of software” that extends engagement “for profit-based reasons.”

Beyond conversation: Parents are also using AI image generation tools, which can blur the line between reality and artificial creation for young minds.

  • Ben Kreiter’s children began requesting daily access to ChatGPT’s image tools after being introduced to them.
  • Another father generated an AI image of a “monster-fire truck” for his four-year-old, leading to arguments when the child insisted the fictional vehicle was real.
  • “Maybe I should not have my own kids be the guinea pigs,” Kreiter reflected after recognizing how AI was infiltrating his family’s daily life.

The bigger picture: This phenomenon emerges as society grapples with broader AI safety concerns, including cases where chatbots have been linked to teenage suicides and adult psychological breaks from reality.

  • AI companion platforms are actively marketing kid-friendly personalities, while toymakers like Mattel rush to integrate AI into children’s products.
  • The technology’s “unreliable and easily circumventable safeguards” have resulted in chatbots giving dangerous advice to young users, including self-harm instructions.

What they’re saying: OpenAI CEO Sam Altman even spoke approvingly of the trend, noting on a podcast after Josh’s story went viral that “Kids love voice mode on ChatGPT.”

  • However, the parents involved expressed growing unease about their decisions, with several recognizing the need for more intentional boundaries around AI use.
