Journalist’s AI Voice Clone Exposes Deception Risks as Technology Rapidly Evolves

A journalist’s podcast explores the deceptive potential of AI voice cloning technology, raising questions about its implications as the technology rapidly advances.

The podcast’s premise: Journalist Evan Ratliff spent a year deceiving people with an AI clone of his own voice to test the capabilities and implications of voice cloning technology:

  • Ratliff, known for his technology-related stunts, used OpenAI’s GPT-4 model to power the AI voice agent for his new podcast, “Shell Game.”
  • The AI version of Ratliff’s voice claimed to be powered by the older GPT-3 model and fabricated episode titles when asked, highlighting its potential for deception.
  • During the author’s interaction with it, response delays and the AI’s ability to rapidly recite all U.S. presidents in alphabetical order made it clear the voice was not human.

Assessing the podcast’s impact: While Ratliff’s podcast will likely entertain and provoke thought about voice cloning technology, its long-term relevance is uncertain given the rapid pace of AI advancement:

  • Voice cloning is still an emerging technology, and journalism aiming to raise alarms often misses the real issues that will arise as the technology matures.
  • Experts at top AI companies suggest today’s models are rudimentary compared to what’s to come, meaning the questions Ratliff raises may not remain salient in the future.
  • As AI voice cloning capabilities improve, the “game” Ratliff is playing now will likely be surpassed by new, more sophisticated versions of the technology.

The broader context of AI ethics: Ratliff’s experiment highlights the ongoing debate surrounding the responsible development and use of AI technologies:

  • As AI becomes more advanced and human-like, the potential for deception and misuse grows, raising ethical concerns about transparency, consent, and accountability.
  • Policymakers, researchers, and tech companies are grappling with how to regulate and govern AI to mitigate risks while still encouraging innovation.
  • The podcast may contribute to public awareness and discourse around these issues, but lasting solutions will require ongoing collaboration across sectors.
