Human models outraged to discover their faces being used in AI propaganda

AI-generated propaganda sparks controversy: The use of real human models' likenesses in AI-generated political propaganda videos has raised serious ethical concerns and sparked outrage among the affected individuals.

The Synthesia dilemma: Synthesia, a billion-dollar text-to-video AI company, has come under scrutiny for its AI avatar technology being used to create propaganda clips linked to authoritarian regimes.

  • Synthesia’s clientele ranges from reputable organizations like Reuters and Ernst & Young to groups associated with authoritarian states such as China, Russia, and Venezuela.
  • The company claims its technology allows users to create “studio-quality videos with AI avatars” with ease.
  • Human models who posed for Synthesia were shocked to discover their likenesses being used in propaganda videos without their knowledge or consent.

Model reactions and concerns: The affected models have expressed deep distress and worry about the potential consequences of their involuntary involvement in politically charged content.

  • Mark Torres, a London-based creative director, described feeling “violated and vulnerable” upon viewing a propaganda clip featuring his likeness.
  • Torres expressed concern about being perceived as promoting military rule in countries unfamiliar to him.
  • Actor Dan Dewhirst, whose image was used in Venezuelan propaganda, worried about the potential impact on his career and mental health.

Legal and ethical implications: The use of AI-generated likenesses without explicit consent has become a contentious issue in the entertainment industry and beyond.

  • California recently passed two bills making it illegal to use AI-generated digital replicas of actors’ likenesses or voices without their explicit consent.
  • The use of generative AI was a major point of contention in the 2023 Screen Actors Guild and Writers Guild of America strikes, resulting in new rules surrounding AI use in the industry.

Synthesia’s defense: The company maintains that it is protected by the terms of service agreed to by the models.

  • Synthesia claims to explain its terms of service and technology to actors and models at the start of their collaboration.
  • The company acknowledges that its processes may not be perfect but states that its founders are committed to continual improvement.

Broader implications: This controversy highlights the growing ethical challenges surrounding AI-generated content and the potential for misuse in political propaganda.

  • The incident raises questions about the responsibility of AI companies in preventing the misuse of their technology.
  • It also underscores the need for clearer regulations and guidelines governing the use of AI-generated likenesses, particularly in politically sensitive contexts.
  • The situation serves as a cautionary tale for models and actors, emphasizing the importance of understanding the full implications of their agreements with AI companies.

Looking ahead: As AI technology continues to advance, the industry faces mounting pressure to address ethical concerns and establish robust safeguards to protect individuals’ rights and reputations.
