University of Cambridge researchers have developed LightShed, a proof-of-concept tool that can strip anti-AI protections from digital artwork, neutralizing defenses like Glaze and Nightshade that artists use to keep their work from being scraped for AI training. The technology marks a significant escalation in the ongoing battle between artists seeking to protect their intellectual property and AI companies hungry for training data, potentially undermining defenses that some 7.5 million people have downloaded to safeguard their work.

The big picture: LightShed demonstrates that current artist protection tools may provide only temporary security, as AI researchers can develop countermeasures that learn to identify and remove the digital “poison” these tools apply to artwork.

How it works: The tool learns to identify the subtle pixel changes that protection tools make to artwork, then effectively “washes” them away; a rough sketch of the idea follows the list below.

  • Researchers trained LightShed using art samples with and without Nightshade, Glaze, and similar protections applied.
  • The system learns to reconstruct “just the poison on poisoned images,” identifying where digital defenses have been applied.
  • LightShed can even apply knowledge from one protection tool to defeat others it has never encountered before.
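The article doesn't reproduce the researchers' code, but the pipeline it describes (learn “just the poison” from paired examples, then subtract it) resembles residual denoising. Below is a minimal PyTorch sketch of that idea; PerturbationNet, the (poisoned, clean) pairs, and the loss are hypothetical stand-ins, not LightShed's actual implementation.

```python
# Hypothetical sketch of the described pipeline: train a network on
# (poisoned, clean) image pairs to predict "just the poison," then
# subtract that prediction to wash protected images. Not LightShed's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationNet(nn.Module):
    """Small conv net that estimates the protection perturbation (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # outputs estimated poison
        )

    def forward(self, x):
        return self.body(x)

def train_step(model, optimizer, poisoned, clean):
    """One step of regressing the residual (poisoned - clean)."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(poisoned), poisoned - clean)
    loss.backward()
    optimizer.step()
    return loss.item()

def wash(model, poisoned):
    """Subtract the estimated perturbation to recover a near-clean image."""
    with torch.no_grad():
        return (poisoned - model(poisoned)).clamp(0.0, 1.0)
```

A model trained this way on one protection scheme could plausibly transfer to others, since different tools leave structurally similar low-amplitude residuals, which is consistent with the cross-tool generalization the researchers report.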

What these tools protect against: Current artist defenses work by making imperceptible changes to artwork that confuse AI models during training.

  • Glaze makes AI models misunderstand artistic style, causing them to interpret a photorealistic painting as a cartoon.
  • Nightshade makes models see subjects incorrectly, such as interpreting a cat in a drawing as a dog.
  • These “perturbations” push artwork across the decision boundaries that AI models use to sort images into categories.

In plain English: Think of AI models as having invisible filing cabinets where they sort images into categories like “realistic painting” or “cartoon.” Artist protection tools slightly alter pixels to trick the AI into filing artwork in the wrong drawer, making it learn incorrect information about the art’s style or content.
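To make the “wrong drawer” idea concrete, here is a minimal gradient-based perturbation sketch in the spirit of these tools. Glaze and Nightshade use far more sophisticated optimization; this hypothetical FGSM-style example only illustrates the core mechanic of a small, bounded pixel change steering a classifier toward the wrong category.

```python
# Hypothetical FGSM-style illustration of the "wrong drawer" trick.
# Glaze and Nightshade use far more sophisticated optimization; this only
# shows the core mechanic: a tiny, bounded pixel change that nudges an
# image toward a category the model associates with something else.
import torch
import torch.nn.functional as F

def perturb_toward_wrong_drawer(classifier, image, wrong_label, epsilon=4 / 255):
    """Nudge `image` so `classifier` files it under `wrong_label` (e.g. cat -> dog)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), wrong_label)
    loss.backward()
    # Step *down* the loss for the wrong label, moving toward that category,
    # while keeping every pixel change imperceptibly small (at most epsilon).
    perturbed = image - epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The small epsilon bound is what keeps the change invisible to humans while still shifting how the model categorizes the image, which is exactly the property LightShed learns to detect and undo.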

Why this matters: The research exposes fundamental vulnerabilities in tools that millions of artists rely on for protection against unauthorized AI training.

  • Around 7.5 million people, many of them artists with small and medium-size followings, have downloaded Glaze to protect their work.
  • Artists worry that AI models will learn their style, mimic their work, and ultimately undercut their livelihoods.
  • The legal and regulatory landscape around AI training and copyright remains uncertain, making technical defenses particularly important.

What the researchers are saying: The LightShed team emphasizes that they’re not trying to steal artists’ work but to warn against a false sense of security.

  • “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, the study’s lead author and a PhD student at the University of Cambridge.
  • “It might need a few more rounds of trying to come up with better ideas for protection,” Foerster added about the need for improved defenses.

The creators’ perspective: The developers of the original protection tools acknowledge that their solutions are temporary while defending their value.

  • Shawn Shan, who created both Glaze and Nightshade and was named MIT Technology Review’s Innovator of the Year, views these tools as deterrents rather than permanent solutions.
  • “It’s a deterrent,” says Shan, explaining the goal is creating enough roadblocks that AI companies find it easier to work directly with artists.
  • The Nightshade website warned that the tool wasn’t future-proof even before development of LightShed began.

What’s next: Researchers plan to use insights from LightShed to develop new artist defenses, including watermarks that could persist even after artwork passes through AI models.

  • The findings will be presented at the USENIX Security Symposium, a leading global cybersecurity conference, in August.
  • Foerster hopes to build defenses that could “tip the scales back in the artist’s favor once again.”
  • The research continues the cat-and-mouse game between artists and AI proponents, one playing out across technology, law, and culture.
