Journalist’s AI Voice Clone Exposes Deception Risks as Technology Rapidly Evolves

A journalist’s podcast explores the deceptive potential of AI voice cloning technology, raising questions about its implications as the technology rapidly advances.

The podcast’s premise: Journalist Evan Ratliff spent a year deceiving people with an AI clone of his own voice to test the capabilities and implications of voice cloning technology:

  • Ratliff, known for his technology-related stunts, used OpenAI’s GPT-4 model to power the voice clone for his new podcast, “Shell Game.”
  • The AI version of Ratliff’s voice claimed to be powered by the older GPT-3 model and fabricated episode titles when asked, highlighting its potential for deception.
  • During the author’s interaction with the clone, response delays and its ability to rapidly recite all U.S. presidents in alphabetical order made it clear the voice was not human.

Assessing the podcast’s impact: While Ratliff’s podcast will likely entertain and provoke thought about voice cloning technology, its long-term relevance is uncertain given the rapid pace of AI advancement:

  • Voice cloning is still in its infancy, and journalism that aims to raise alarms often misses the real issues that will emerge as the technology matures.
  • Experts at top AI companies suggest today’s models are rudimentary compared to what’s to come, meaning the questions Ratliff raises may not remain salient in the future.
  • As voice cloning capabilities improve, the “game” Ratliff is playing now will likely be surpassed by newer, more sophisticated versions of the technology.

The broader context of AI ethics: Ratliff’s experiment highlights the ongoing debate surrounding the responsible development and use of AI technologies:

  • As AI becomes more advanced and human-like, the potential for deception and misuse grows, raising ethical concerns about transparency, consent, and accountability.
  • Policymakers, researchers, and tech companies are grappling with how to regulate and govern AI to mitigate risks while still encouraging innovation.
  • The podcast may contribute to public awareness and discourse around these issues, but lasting solutions will require ongoing collaboration across sectors.
