AI Trains on Kids’ Photos Without Consent, Enabling Realistic Deepfakes and Tracking
A Human Rights Watch investigation has revealed that photos of real children posted online are being used to train AI image generators without consent, posing significant privacy and safety risks.

Key findings from Australia: HRW researcher Hye Jung Han discovered photos of 190 Australian children, including indigenous kids, linked in LAION-5B, a dataset widely used to train AI image generators:

  • The photos span entire childhoods, enabling AI to generate realistic deepfakes of these children.
  • Dataset URLs sometimes reveal identifying information like names and locations, making it easy to track down the children.
  • Even photos posted with strict privacy settings, such as unlisted YouTube videos, were scraped and included in the dataset.

Unique risks for indigenous children: For First Nations children in Australia, AI training on their images threatens distinct cultural harms:

  • First Nations peoples restrict reproduction of photos of the deceased during mourning periods, a restriction that AI models trained on these images could violate by reproducing their likenesses.
  • Photos of children from several indigenous groups were identified in the dataset.

Limitations of current safeguards: Removing links from datasets and implementing content guidelines appear insufficient to prevent ongoing harm:

  • LAION is working with HRW to remove flagged images, but the process is slow; photos of Brazilian kids remained in the dataset a month after being reported.
  • Removing dataset links doesn’t remove images from the web or undo AI training that has already occurred.
  • YouTube prohibits AI scraping of its platform but acknowledged that unauthorized scraping still happens in violation of its terms of service.

Waiting on regulatory intervention: HRW argues that the onus should not be on parents to remove kids’ photos, but on regulators to enact robust child data protection laws:

  • Australia is expected to release a draft of its first Children’s Online Privacy Code in August as part of broader privacy reforms.
  • However, there is uncertainty around how strong the government’s proposed protections will actually be.
  • HRW emphasizes that children should not have to live in fear of their personal photos being weaponized by AI.

Broader implications: This investigation highlights the urgent need for stricter regulations and enforceable safeguards around AI training data, especially when it comes to protecting children’s privacy and safety online. As AI systems become more powerful and pervasive, the risks of unauthorized data scraping and misuse will only grow. Policymakers, tech companies, and civil society groups must work together to develop robust frameworks that prioritize human rights and prevent AI from being trained on sensitive personal data without clear consent procedures in place. Crucially, the burden of protecting kids’ digital footprints cannot fall solely on parents; systemic solutions and strong regulatory oversight are essential.