The debate over AI art's future hinges on whether the increasing presence of AI-generated images in training data will lead to model deterioration or improvement. While some fear a feedback loop that amplifies flaws, others see a natural-selection process in which only the most successful AI images proliferate online, potentially leading to evolutionary improvements rather than collapse.
Why fears of model collapse may be unfounded: The selection bias in what AI art gets published online suggests a natural filtering process that could improve rather than degrade future models.
- Images commonly shared online tend to be higher-quality outputs, creating a positive feedback loop in which models learn from the best examples.
- This process mirrors natural selection: AI-generated images that receive the most engagement and shares become more represented in training data (a toy simulation of this dynamic is sketched below).
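To make the filtering argument concrete, here is a minimal sketch of a recursive-training loop under heavy simplifying assumptions: each image is reduced to a single scalar "quality" score, the "model" is just the mean and spread of the images it trained on, and curation keeps only the top-shared fraction each generation. The `simulate` function, its parameters (`pop`, `keep`, `fitness`), and the Gaussian model are illustrative assumptions, not anyone's actual pipeline.

```python
import random
import statistics

def simulate(generations=10, pop=5000, keep=0.2, fitness=None, seed=0):
    """Toy recursive-training loop: the 'model' is just the mean/std of the
    image qualities it trained on; each generation it samples new images,
    and only the kept fraction gets published and becomes the next
    generation's training data."""
    rng = random.Random(seed)
    mean, std = 0.0, 1.0                      # generation 0: trained on human-made images
    history = [round(mean, 2)]
    for _ in range(generations):
        samples = [rng.gauss(mean, std) for _ in range(pop)]
        if fitness is None:                   # no curation: a random subset survives
            kept = rng.sample(samples, int(pop * keep))
        else:                                 # curation: the highest-fitness images survive
            kept = sorted(samples, key=fitness, reverse=True)[: int(pop * keep)]
        mean, std = statistics.mean(kept), statistics.stdev(kept)
        history.append(round(mean, 2))
    return history

print("no curation:        ", simulate())                     # mean quality stays flat
print("curation by quality:", simulate(fitness=lambda q: q))  # mean quality climbs
```

In this toy, curation raises average quality generation over generation, but it also narrows the distribution, which is the diversity loss that collapse arguments worry about; whether real models improve or degrade depends on which effect dominates and on what the curation signal actually rewards.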
The counterargument: The visibility of AI art online may not always favor aesthetic quality.
- Content that provokes strong reactions, particularly anger from anti-AI communities, could spread more widely than beautiful but unremarkable images.
- AI models might inadvertently optimize for creating recognizably “AI-looking” art that generates controversy and engagement rather than technical excellence (the variation of the sketch below illustrates this).
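Under the same toy assumptions, swapping the curation signal shows the counterargument: if engagement rewards a recognizably "AI-looking" trait rather than quality, selection drifts the population along that trait while quality goes nowhere. The two-trait setup and the `simulate_engagement` name are again purely illustrative.

```python
import random
import statistics

def simulate_engagement(generations=10, pop=5000, keep=0.2, seed=0):
    """Same loop as above, but each image carries two traits
    (quality, ai_look) and curation rewards ai_look, not quality."""
    rng = random.Random(seed)
    means = [0.0, 0.0]                        # [quality, ai_look]; spread fixed at 1 for brevity
    for _ in range(generations):
        samples = [(rng.gauss(means[0], 1.0), rng.gauss(means[1], 1.0))
                   for _ in range(pop)]
        kept = sorted(samples, key=lambda s: s[1], reverse=True)[: int(pop * keep)]
        means = [statistics.mean(s[0] for s in kept),
                 statistics.mean(s[1] for s in kept)]
    return [round(m, 2) for m in means]

print("after 10 generations [quality, ai_look]:", simulate_engagement())
# quality stays near 0 while ai_look keeps climbing
```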
The evolutionary perspective: Regardless of whether optimization favors beauty or controversy, AI-generated images are adapting to maximize their ability to spread online.
- This evolutionary pressure suggests that rather than collapsing, AI art models may simply adapt to whatever characteristics most effectively propagate across the internet.
- The selection mechanism ultimately depends on what human curators choose to share, save, and engage with online.
On balance, I doubt model collapse will happen.