AI trains on kids’ photos even when parents use strict privacy settings

A Human Rights Watch investigation has revealed that photos of real children posted online are being used to train AI image generators without consent, posing significant privacy and safety risks.

Key findings from Australia: HRW researcher Hye Jung Han discovered 190 photos of Australian children, including Indigenous kids, linked in the LAION-5B AI training dataset:

  • The photos span entire childhoods, enabling AI models trained on them to generate realistic deepfakes of these children.
  • Dataset URLs sometimes reveal identifying information like names and locations, making it easy to track down the children.
  • Even photos posted with strict privacy settings, such as unlisted YouTube videos, were scraped and included in the dataset.

Unique risks for Indigenous children: For First Nations children in Australia, AI training on their images poses distinct cultural harms:

  • Many First Nations peoples restrict the reproduction of images of deceased people during mourning periods, a protocol that AI systems trained on these photos could violate indefinitely.
  • Photos of children from several Indigenous groups were identified in the dataset.

Limitations of current safeguards: Removing links from datasets and implementing content guidelines appear insufficient to prevent ongoing harm:

  • LAION is working with HRW to remove flagged images, but the process is slow: photos of Brazilian kids had still not been removed a month after HRW reported them.
  • Removing dataset links doesn’t remove images from the web or undo AI training that has already occurred.
  • YouTube prohibits AI scraping but acknowledged that unauthorized scraping still happens, in violation of its terms of service.

Waiting on regulatory intervention: HRW argues that the onus should not be on parents to remove kids’ photos, but on regulators to enact robust child data protection laws:

  • Australia is expected to release a draft of its first Children’s Online Privacy Code in August as part of broader privacy reforms.
  • However, there is uncertainty around how strong the government’s proposed protections will actually be.
  • HRW emphasizes that children should not have to live in fear of their personal photos being weaponized by AI.

Broader implications: This investigation highlights the urgent need for stricter regulations and enforceable safeguards around AI training data, especially when it comes to protecting children’s privacy and safety online. As AI systems become more powerful and pervasive, the risks of unauthorized data scraping and misuse will only grow. Policymakers, tech companies, and civil society groups must work together to develop robust frameworks that prioritize human rights and prevent AI from being trained on sensitive personal data without clear consent. Crucially, the burden of protecting kids’ digital footprints cannot fall solely on parents; systemic solutions and strong regulatory oversight are essential.
