Journalist’s AI Voice Clone Exposes Deception Risks as Technology Rapidly Evolves

A journalist’s podcast explores the deceptive potential of AI voice cloning technology, raising questions about its implications as the technology rapidly advances.

The podcast’s premise: Journalist Evan Ratliff spent a year deceiving people with an AI clone of his own voice to test the capabilities and implications of voice cloning technology:

  • Ratliff, known for his technology-related stunts, powered the voice clone with OpenAI’s GPT-4 model for his new podcast, “Shell Game.”
  • The AI version of Ratliff’s voice claimed to be powered by the older GPT-3 model and fabricated episode titles when asked, highlighting its potential for deception.
  • During the author’s interaction with the clone, response delays and its ability to rapidly recite all U.S. presidents in alphabetical order made it clear the voice was not human.

Assessing the podcast’s impact: While Ratliff’s podcast will likely entertain and provoke thought about voice cloning technology, its long-term relevance is uncertain given the rapid pace of AI advancement:

  • Voice cloning is still in its infancy, and journalism that aims to raise alarms often misses the real issues that will emerge as the technology matures.
  • Experts at top AI companies suggest today’s models are rudimentary compared to what’s to come, meaning the questions Ratliff raises may not remain salient in the future.
  • As AI voice cloning capabilities improve, the “game” Ratliff is playing now will likely be surpassed by new, more sophisticated versions of the technology.

The broader context of AI ethics: Ratliff’s experiment highlights the ongoing debate surrounding the responsible development and use of AI technologies:

  • As AI becomes more advanced and human-like, the potential for deception and misuse grows, raising ethical concerns about transparency, consent, and accountability.
  • Policymakers, researchers, and tech companies are grappling with how to regulate and govern AI to mitigate risks while still encouraging innovation.
  • The podcast may contribute to public awareness and discourse around these issues, but lasting solutions will require ongoing collaboration across sectors.
