
Human Rights Watch recently revealed that photos of children scraped from the internet, including some hidden behind privacy settings on social media, were used to train AI models without consent from the children or their families. This concerning revelation has broad implications for data privacy and the unintended consequences of “sharenting” in the age of AI.

Key Takeaways: The unauthorized use of children’s personal photos to train AI models raises serious privacy concerns:

  • Many of the scraped images included children’s names and identifying information, making them easily traceable.
  • Some of the photos used were not even publicly available but hidden behind privacy settings on social media platforms.
  • Parents who thought they were protecting their children’s privacy by using privacy settings have now learned that their precautions were insufficient.

Lack of Meaningful Consent: Children cannot meaningfully consent to having their images and personal information shared online, highlighting the risks of “sharenting”:

  • Young children are not developmentally capable of understanding the long-term implications of having their photos and stories shared publicly.
  • As children grow older, they may object to the extensive online record of their lives created without their permission.
  • The HRW report underscores that even well-intentioned parents cannot foresee all potential future uses of the data they share about their kids.

Regulatory Gaps and Challenges: The unauthorized scraping of children’s data by AI companies reveals significant gaps in privacy protections and regulatory oversight:

  • It is unclear whether AI companies have the legal right to train models on personal data, especially that of children, without explicit consent.
  • The Supreme Court’s recent decision overturning the Chevron doctrine has limited the power of federal agencies like the FTC to regulate in this space, leaving a patchwork of state laws.
  • With federal privacy legislation unlikely in the near term, Big Tech is largely left to police itself on these issues.

Implications for Families: Until robust privacy protections and AI regulations are put in place, parents should exercise extreme caution in sharing any information or photos of their children online:

  • “Sharenting” on social media, even with privacy settings enabled, carries inherent risks as this data may be scraped and repurposed without consent.
  • Legislators, especially at the state level, must act quickly to enact guardrails around the collection and use of children’s personal data.
  • In the meantime, refraining from posting kids’ photos and information may be the only way for families to protect their children’s privacy from AI models.

Looking Ahead: The unauthorized use of children’s photos to train AI models without consent is a stark reminder of the urgent need for updated privacy protections and responsible AI regulations. Policymakers must act swiftly to address these critical gaps. In the interim, families should carefully weigh the risks before sharing any information about their kids online, as the long-term implications in an AI-powered world remain unknown. Fundamentally, we must grapple with the question of whether technology companies should have the right to exploit personal data, especially that of children, for their own gain without oversight. The stakes for privacy and data dignity could not be higher.

Photos of your children are being used to train AI without your permission, and there’s nothing you can do about it
