Human Rights Watch recently revealed that photos of children scraped from the internet, including some hidden behind privacy settings on social media, were used to train AI models without consent from the children or their families. The revelation has broad implications for data privacy and highlights the unintended consequences of “sharenting” in the age of AI.
Key Takeaways: The unauthorized use of children’s personal photos to train AI models raises serious privacy concerns:
- Many of the scraped images included children’s names and identifying information, making them easily traceable.
- Some of the photos used were not even publicly available but hidden behind privacy settings on social media platforms.
- Parents who relied on privacy settings to protect their children have now learned that those precautions were insufficient.
Lack of Meaningful Consent: Children cannot meaningfully consent to having their images and personal information shared online, highlighting the risks of “sharenting”:
- Young children are not developmentally capable of understanding the long-term implications of having their photos and stories shared publicly.
- As children grow older, they may object to the extensive online record of their lives created without their permission.
- The HRW report underscores that even well-intentioned parents cannot foresee all potential future uses of the data they share about their kids.
Regulatory Gaps and Challenges: The unauthorized scraping of children’s data by AI companies reveals significant gaps in privacy protections and regulatory oversight:
- It is unclear whether AI companies have the legal right to train models on personal data, especially that of children, without explicit consent.
- The Supreme Court’s recent decision overturning the Chevron doctrine has limited the ability of federal agencies like the FTC to regulate in this space, leaving a patchwork of state laws.
- With federal privacy legislation unlikely in the near term, Big Tech is largely left to police itself on these issues.
Implications for Families: Until robust privacy protections and AI regulations are put in place, parents should exercise extreme caution in sharing any information or photos of their children online:
- “Sharenting” on social media, even with privacy settings enabled, carries inherent risks as this data may be scraped and repurposed without consent.
- Legislators, especially at the state level, must act quickly to enact guardrails around the collection and use of children’s personal data.
- In the meantime, refraining from posting kids’ photos and information may be the only way for families to protect their children’s privacy from AI models.
Looking Ahead: The unauthorized use of children’s photos to train AI models without consent is a stark reminder of the urgent need for updated privacy protections and responsible AI regulations. Policymakers must act swiftly to address these critical gaps. In the interim, families should carefully weigh the risks before sharing any information about their kids online, as the long-term implications in an AI-powered world remain unknown. Fundamentally, we must grapple with whether technology companies should have the right to exploit personal data, especially that of children, for their own gain without oversight. The stakes for privacy and data dignity could not be higher.