That breeze coming from the south of the peninsula is an AI startup in the wind…
The explosive growth of AI-generated explicit content has reached a disturbing milestone: South Korean company GenNomis shut down after researchers discovered an unsecured database containing tens of thousands of non-consensual pornographic deepfakes. The incident highlights the dangerous intersection of accessible generative AI and inadequate regulation, a combination that causes serious harm, particularly to women, who make up most victims of these digital violations.
The big picture: A South Korean AI startup called GenNomis abruptly deleted its entire online presence after a researcher discovered tens of thousands of AI-generated pornographic images in an unsecured database.
Behind the discovery: Cybersecurity researcher Jeremiah Fowler found the cache of explicit images and immediately sent the company a responsible disclosure notice.
Why this matters: Non-consensual deepfake pornography creates severe real-world harm for victims while raising urgent questions about AI regulation and accountability.
Beyond pornography: The deepfake problem extends far beyond explicit content, creating multiple societal threats.