A Human Rights Watch investigation has revealed that photos of real children posted online are being used to train AI image generators without consent, posing significant privacy and safety risks.
Key findings from Australia: HRW researcher Hye Jung Han discovered 190 photos of Australian children, including Indigenous kids, linked in the LAION-5B AI training dataset:
- The photos span entire childhoods, enabling AI to generate realistic deepfakes of these children.
- Dataset URLs sometimes reveal identifying information like names and locations, making it easy to track down the children.
- Even photos posted with privacy protections, such as stills from unlisted YouTube videos, were scraped and included in the dataset.
Unique risks for Indigenous children: For First Nations children in Australia, AI training on their images threatens distinct cultural harms:
- Many First Nations peoples restrict the reproduction of images of deceased people during mourning periods, a practice that AI-generated reproductions of these photos could violate.
- Photos of children from several Indigenous groups were identified in the dataset.
Limitations of current safeguards: Removing links from datasets and implementing content guidelines appear insufficient to prevent ongoing harm:
- LAION is working with HRW to remove flagged images, but the process is slow: photos of Brazilian children were still not removed a month after being reported.
- Removing dataset links doesn’t remove images from the web or undo AI training that has already occurred.
- YouTube prohibits AI scraping of its platform but acknowledged that unauthorized scraping still happens, in violation of its terms of service.
Waiting on regulatory intervention: HRW argues that the onus should not be on parents to remove kids’ photos, but on regulators to enact robust child data protection laws:
- Australia is expected to release a draft of its first Children’s Online Privacy Code in August as part of broader privacy reforms.
- However, there is uncertainty around how strong the government’s proposed protections will actually be.
- HRW emphasizes that children should not have to live in fear of their personal photos being weaponized by AI.
Broader implications: This investigation highlights the urgent need for stricter regulations and enforceable safeguards around AI training data, especially when it comes to protecting children's privacy and safety online. As AI systems become more powerful and pervasive, the risks of unauthorized data scraping and misuse will only grow. Policymakers, tech companies, and civil society groups must work together to develop robust frameworks that prioritize human rights and prevent AI from being trained on sensitive personal data without clear consent procedures in place. Crucially, the burden of protecting kids' digital footprints cannot fall solely on parents; systemic solutions and strong regulatory oversight are essential.