OpenAI faces mounting pressure to restrict how its Sora 2 video generation tool handles celebrity likenesses, following complaints from high-profile figures such as Breaking Bad star Bryan Cranston and from the performers’ union SAG-AFTRA. The controversy highlights broader tensions in the AI industry over digital rights and the boundaries of AI-generated content.
When OpenAI launched Sora 2, its latest AI video generation model, the company made a controversial decision that set it apart from competitors: it allowed users to create videos featuring real people’s likenesses. Unlike other AI video tools that block celebrity content entirely, OpenAI initially implemented an “opt-out” system, meaning celebrities had to actively request removal from the platform rather than granting consent before their likenesses could be used.
This approach quickly backfired. The platform became flooded with inappropriate content featuring public figures, prompting CEO Sam Altman to announce a shift toward an “opt-in” model requiring explicit celebrity consent. However, even this change hasn’t resolved the underlying issues.
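The practical difference between the two consent models comes down to the default answer when no preference is on record. Here is a minimal, purely illustrative sketch of that distinction; the registry and function names are invented for the example and do not reflect OpenAI’s actual implementation, which has not been published.

```python
from enum import Enum

class ConsentModel(Enum):
    OPT_OUT = "opt_out"  # allowed by default; an explicit objection blocks it
    OPT_IN = "opt_in"    # blocked by default; explicit consent permits it

# Hypothetical registry of explicit choices: True = consented, False = objected.
# Anyone absent from the registry has expressed no preference at all.
consent_registry: dict[str, bool] = {
    "example_performer": False,
}

def may_generate_likeness(person_id: str, model: ConsentModel) -> bool:
    """Return True if a likeness request should be allowed under the given model."""
    recorded = consent_registry.get(person_id)  # None if no preference on record
    if model is ConsentModel.OPT_OUT:
        return recorded is not False  # default allow; only an objection blocks
    return recorded is True           # default deny; only consent permits
```

Under the opt-out default, anyone with no record in the registry can be generated, which is why Sora 2’s launch put the burden on celebrities to object; switching to opt-in flips that default to refusal.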
The Martin Luther King controversy
OpenAI faced significant backlash after users generated inappropriate videos of civil rights leader Martin Luther King Jr. using Sora 2. The company was forced to issue a public apology to King’s family and implement specific restrictions on content featuring the historical figure.
The incident wasn’t isolated. Users also created videos featuring other deceased public figures, including President John F. Kennedy and physicist Stephen Hawking, raising questions about consent and digital dignity for historical figures who, by definition, cannot opt out of modern AI systems.
Bryan Cranston and SAG-AFTRA push back
The situation escalated when Bryan Cranston, the Emmy-winning actor known for Breaking Bad and Malcolm in the Middle, discovered his likeness being used in Sora 2 videos despite having opted out of the system. In a joint statement with SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists), the union representing film and television performers, Cranston highlighted the broader implications for all entertainers.
“I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston said. “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness.”
The statement represents a significant moment in the ongoing battle between AI companies and content creators over digital rights. SAG-AFTRA has been particularly active in this space, having included AI protections in recent contract negotiations with major studios.
OpenAI’s response and policy changes
In response to the mounting pressure, OpenAI committed to strengthening what the industry calls “guardrails”: technical and policy measures designed to prevent misuse of AI systems. While the company hasn’t detailed exactly how these new protections will work, it says they will make it significantly harder for users to recreate the likenesses of individuals who have opted out.
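Since OpenAI hasn’t published details, any concrete description of these guardrails is speculative. Conceptually, though, likeness protections tend to be layered: a prompt-side filter that refuses requests naming a protected person, plus output-side checks on the generated frames. A minimal sketch of the prompt-side layer, with hypothetical names throughout, might look like this:

```python
import re

# Hypothetical opt-out registry; in practice this would be a maintained
# database of names, aliases, and face embeddings, not a hard-coded set.
PROTECTED_NAMES = {
    "bryan cranston",
    "martin luther king",
}

def normalize(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace so simple variations
    ('Bryan  Cranston!', 'bryan-cranston') still match."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def prompt_allowed(prompt: str) -> bool:
    """First guardrail layer: refuse prompts that name a protected person."""
    cleaned = normalize(prompt)
    return not any(name in cleaned for name in PROTECTED_NAMES)
```

Plain string matching is the weakest possible version of this layer: it misses indirect descriptions (“the actor who played Walter White”) and creative misspellings, which is one reason an output-side check on the generated video itself is generally considered a necessary second line of defense.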
“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness,” said Sam Altman in the joint statement with SAG-AFTRA. “We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”
The NO FAKES Act, formally known as the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, is federal legislation introduced in 2024 that would create legal protections against unauthorized AI-generated recreations of people’s voices and likenesses. The bill represents one of the first comprehensive attempts to address these issues at the federal level.
Why OpenAI stands alone in this controversy
OpenAI’s troubles with celebrity deepfakes highlight a key strategic difference in how AI companies approach content restrictions. While Sora 2 allows celebrity content with opt-out protections, competitors like Google’s Veo 3 video generator were designed from the ground up to avoid creating celebrity likenesses entirely.
This divergence in approach reflects broader philosophical differences in the AI industry. Companies like Google, Anthropic, and others have generally adopted more restrictive approaches to content generation, building extensive safeguards into their models during development. These systems typically refuse requests to generate content featuring recognizable public figures, historical personalities, or copyrighted characters.
Even OpenAI’s original Sora model, released earlier with limited access, didn’t face these same issues because it included stricter content restrictions. The company’s decision to loosen these constraints for Sora 2’s broader release appears to have been a calculated risk to differentiate the product in an increasingly crowded market.
Industry implications and competitive dynamics
The controversy illustrates the complex balance AI companies must strike between creative freedom and responsible deployment. OpenAI’s more permissive approach has generated significant user engagement and media attention—both positive and negative—but has also created ongoing policy challenges and potential legal exposure.
For businesses considering AI video tools, this situation highlights important due diligence questions about content policies, liability protections, and compliance with evolving regulations. Companies using AI-generated content for marketing, training, or other purposes need to understand not just the technical capabilities of these tools, but also their policy frameworks and potential legal implications.
The entertainment industry’s response, led by organizations like SAG-AFTRA, also signals that content creators are becoming increasingly organized in their approach to AI regulation. This could influence future legislation and industry standards, potentially affecting how all AI video tools operate.
Looking ahead
As AI video generation technology becomes more sophisticated and accessible, the questions raised by OpenAI’s Sora 2 controversy will likely intensify. The company’s experience serves as a cautionary tale for other AI developers about the importance of proactive content policies and stakeholder engagement.
The outcome of ongoing legislative efforts like the NO FAKES Act, combined with industry self-regulation initiatives, will likely shape the next generation of AI content creation tools. For now, OpenAI’s struggles demonstrate that in the rapidly evolving AI landscape, technical capability alone isn’t sufficient—companies must also navigate complex ethical, legal, and social considerations to achieve sustainable success.