A growing debate has emerged over whether to preserve pre-AI internet content before it becomes “contaminated” with AI-generated material. Since ChatGPT’s launch in late 2022, it has become increasingly difficult to distinguish human-created content from machine-generated text, which could create problems for future AI training and for historical research.
What you should know: Two competing approaches have taken shape for handling the AI content divide, one archiving pre-AI data and the other documenting how AI output evolves.
- John Graham-Cumming of the cybersecurity firm Cloudflare has created lowbackgroundsteel.ai to archive “uncontaminated” data sources, such as a full Wikipedia download from August 2022. He compares pre-AI content to low-background steel, which is prized for sensitive scientific instruments because it was produced before atmospheric nuclear testing and therefore lacks radioactive contamination.
- Mark Graham from the Internet Archive’s Wayback Machine wants to focus on creating archives of AI output instead, planning to ask 1,000 topical questions daily to chatbots and store their responses for future researchers.
The big picture: This digital archaeology problem extends far beyond academic curiosity, affecting journalism, legal discovery, and scientific research at a time when reliable content verification has become nearly impossible at scale.
Why this matters: The contamination issue could lead to “model collapse,” in which models trained on low-quality output from earlier AI systems progressively degrade, harming AI development.
- Studies already show that today’s Wikipedia contains significant amounts of AI-generated content that its pre-2022 versions did not.
- Future AI models may benefit from access to purely human-created training data to avoid degradation in quality.
What they’re saying: Industry experts emphasize the urgency and complexity of the challenge.
- “I’ve been thinking about this ‘digital archaeology’ problem since ChatGPT launched, and it’s becoming more urgent every month,” says Rajiv Pant, an entrepreneur and former CTO of The New York Times and The Wall Street Journal. “Right now, there’s no reliable way to distinguish human-authored content from AI-generated material at scale.”
- Graham-Cumming notes the philosophical dimension: “There’s a point at which we did everything ourselves, and then at some point we started to get augmented significantly by these chat systems.”
The challenge ahead: The Internet Archive already processes up to 160 terabytes of new information daily, making comprehensive archiving efforts technically daunting.
- Graham’s proposed monitoring system would itself use AI to track how chatbot output changes over time, making the documentation process recursive.
- The approach recognizes that AI responses to identical questions vary from day to day, so temporal tracking is essential for understanding how these systems evolve; a minimal sketch of such a pipeline follows this list.
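To make the ask-and-archive idea concrete, here is a minimal sketch of what such a pipeline could look like. Everything specific in it is an assumption for illustration: the `ask_chatbot` stub stands in for whichever chatbot API an archivist would actually call, and the question list, dated JSON layout, and SHA-256 change check are illustrative choices, not a description of the Internet Archive’s actual system.

```python
"""A minimal sketch of a recurring ask-and-archive pipeline.

Assumptions (not from the article): ask_chatbot() is a hypothetical
placeholder for a real chatbot API; the question list, file layout,
and hash-based change check are illustrative only.
"""
import hashlib
import json
from datetime import date
from pathlib import Path

ARCHIVE_DIR = Path("ai_responses")   # one JSON snapshot per day
QUESTIONS = [                        # stand-in for ~1,000 topical questions
    "What caused the 2008 financial crisis?",
    "Is coffee good for you?",
]


def ask_chatbot(question: str) -> str:
    """Hypothetical placeholder for a real chatbot API call."""
    return f"(response to: {question})"


def snapshot_today() -> Path:
    """Ask every question once and store the dated answers as JSON."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    records = []
    for q in QUESTIONS:
        answer = ask_chatbot(q)
        records.append({
            "question": q,
            "answer": answer,
            # A digest lets researchers spot day-to-day drift without diffing full text.
            "answer_sha256": hashlib.sha256(answer.encode()).hexdigest(),
        })
    out = ARCHIVE_DIR / f"{date.today().isoformat()}.json"
    out.write_text(json.dumps(records, indent=2))
    return out


def changed_questions(day_a: str, day_b: str) -> list[str]:
    """List questions whose archived answers differ between two snapshot dates."""
    def digests(day: str) -> dict[str, str]:
        records = json.loads((ARCHIVE_DIR / f"{day}.json").read_text())
        return {r["question"]: r["answer_sha256"] for r in records}

    a, b = digests(day_a), digests(day_b)
    return [q for q in b if a.get(q) != b[q]]


if __name__ == "__main__":
    print(f"Archived today's answers to {snapshot_today()}")
```

Running `snapshot_today()` once a day (for example, from a scheduler such as cron) would build up the dated record of AI answers Graham describes, and `changed_questions()` shows one simple way researchers could later measure day-to-day drift.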
The open question: Should we preserve the pre-AI internet before it is contaminated?