Facebook’s massive data collection for AI training: The social media giant has confirmed it is scraping public data from all Australian adult users on its platform to train AI models, without offering an opt-out option.

  • Facebook is collecting public photos, posts, and other data from Australian adult users’ accounts dating back to 2007 for AI training purposes.
  • The company initially denied this practice but later confirmed it when pressed during an inquiry.
  • Data from users under 18 is not scraped, but public photos of children posted on adult accounts are included in the collection.

Discrepancy in user privacy options: Facebook provides different levels of control over data usage for AI training to users in different regions, highlighting inconsistencies in global privacy protections.

  • European Union and United States users were notified in June about the use of their data for AI training and were given an opt-out option.
  • Australian users are not offered the same opt-out choice, with Facebook citing the lack of specific privacy laws in Australia as the reason.
  • The company said the opt-out offered in Europe is a direct result of that region’s regulatory landscape and the ongoing legal questions surrounding its privacy laws.

Implications for user privacy and data control: The revelation raises concerns about the extent of data collection and user control over personal information on social media platforms.

  • Facebook’s global privacy director, Melinda Claybaugh, acknowledged that public photos of children on adult accounts would be scraped for AI training.
  • The company could not confirm whether it also scrapes data posted in earlier years by users who were under 18 when they created their accounts but are now adults.
  • Australian users can set their data to private, but this is not equivalent to the opt-out option offered to European users.

Facebook’s justification for data collection: The company defends its practices by emphasizing the need for extensive data to develop effective AI tools.

  • Meta, Facebook’s parent company, claims that large amounts of data are necessary to create “flexible and powerful” AI tools.
  • The company also argues that more data helps deliver safer products with fewer biases.
  • However, this stance raises questions about the balance between technological advancement and user privacy rights.

Australian government’s response and future regulations: The revelation comes amid increased scrutiny of social media platforms and their impact on users, particularly young people.

  • The federal government has recently vowed to introduce a ban on social media for children due to concerns about potential harm.
  • Greens senator David Shoebridge emphasized the need for stronger privacy laws in Australia, similar to those in Europe, to protect users’ data.
  • The government is expected to announce reforms to the Privacy Act in response to a 2020 review that found current laws to be outdated.

Broader implications for data privacy and AI development: Facebook’s practices highlight the complex relationship between technological innovation, user privacy, and regulatory frameworks.

  • The discrepancy in privacy options offered to users in different regions underscores the impact of varying privacy laws on global tech companies’ practices.
  • The situation raises questions about the ethical implications of using vast amounts of user data for AI training without explicit consent.
  • As AI technology continues to advance, the need for clear, consistent, and robust privacy regulations becomes increasingly apparent to protect user rights and data.
