Voice cloning technology has advanced to a concerning level of realism, requiring only seconds of audio to create a convincing replica of someone’s voice. While the technology enables legitimate applications such as audiobooks and marketing, it also opens the door to fraud and scams. A new Consumer Reports investigation reveals significant gaps in the safeguards of leading voice cloning platforms, underscoring the need for stronger protections against malicious use of this increasingly accessible technology.

The big picture: Consumer Reports evaluated six major voice cloning tools and found most lack adequate technical safeguards to prevent unauthorized voice cloning.

  • Only two of the six platforms tested—Descript and Resemble AI—implemented meaningful technical barriers against non-consensual voice cloning.
  • The remaining four platforms (ElevenLabs, Speechify, PlayHT, and Lovo) relied primarily on simple checkbox confirmations that users had the legal right to clone voices, without technical verification.

How the safeguards work: The two companies with stronger protections take distinctly different approaches to verification.

  • Descript requires users to read and record a consent statement, using that specific audio to generate the voice clone; a simplified sketch of this kind of check appears after this list.
  • Resemble AI requires that the first voice clone a user creates be based on audio recorded in real time, making it harder to use pre-recorded samples without permission.
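
To make the consent-reading model concrete, here is a minimal Python sketch of the text side of such a check: an upstream speech-to-text step (not shown) is assumed to have transcribed the user’s recording, and the transcript is compared against the required statement. The statement wording, the `consent_matches` helper, and the 0.85 threshold are illustrative assumptions rather than Descript’s published implementation; a production system would also verify that the consent recording and the cloning samples come from the same speaker.

```python
# Minimal sketch of a consent-statement check, in the spirit of the
# consent-reading approach described above. Hypothetical: the statement
# wording, helper names, and threshold are assumptions for illustration.
from difflib import SequenceMatcher

CONSENT_STATEMENT = (
    "I consent to having my voice cloned and understand how it will be used."
)  # hypothetical wording; the platform's actual statement is not public


def _normalize(text: str) -> str:
    """Lowercase, drop basic punctuation, and collapse whitespace so minor
    speech-to-text differences do not cause false rejections."""
    return " ".join(text.lower().replace(".", "").replace(",", "").split())


def consent_matches(transcript: str,
                    expected: str = CONSENT_STATEMENT,
                    threshold: float = 0.85) -> bool:
    """Return True if the transcribed recording closely matches the required
    consent statement, tolerating small recognition errors."""
    ratio = SequenceMatcher(None, _normalize(transcript), _normalize(expected)).ratio()
    return ratio >= threshold


if __name__ == "__main__":
    # A transcript with a minor recognition slip should still pass.
    print(consent_matches(
        "I consent to having my voice cloned and understand how it'll be used"
    ))
```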

Why this matters: Voice cloning scams have become increasingly common, with criminals impersonating loved ones to extract money from victims.

  • A typical attack involves cloning a family member’s voice, then contacting relatives while claiming to be in an emergency that requires immediate financial help.
  • The emotional manipulation and audio authenticity make these scams particularly effective at bypassing normal skepticism.

Recommended protections: Consumer Reports outlined several measures voice cloning companies should implement to prevent misuse.

  • Collecting customers’ payment information to enable tracing of fraudulent content
  • Developing robust mechanisms to verify voice ownership before cloning
  • Creating better detection tools to identify AI-generated audio
  • Preventing cloning of public figures and influential individuals
  • Prohibiting audio containing common scam phrases (see the sketch after this list)
  • Moving from self-service models to supervised voice cloning processes
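
As a rough illustration of the phrase-screening recommendation, here is a minimal Python sketch that checks the transcript of requested audio against a denylist of scam language. The phrase list, the `flag_scam_phrases` helper, and the simple substring matching are assumptions for illustration only; Consumer Reports does not prescribe an implementation, and real screening would require far more sophisticated detection.

```python
# Minimal sketch of a scam-phrase screen for text a user asks a voice
# cloning service to synthesize. Hypothetical example, not any platform's
# actual policy or phrase list.
import re

# Hypothetical examples of commonly reported scam language; a real platform
# would maintain and tune its own list.
SCAM_PHRASES = [
    "wire the money",
    "buy gift cards",
    "don't tell anyone",
    "i'm in trouble and need money",
]


def flag_scam_phrases(transcript: str) -> list[str]:
    """Return any listed phrases found in the transcript of audio a user
    is asking the service to generate with a cloned voice."""
    text = " ".join(transcript.lower().split())
    return [phrase for phrase in SCAM_PHRASES if re.search(re.escape(phrase), text)]


if __name__ == "__main__":
    request = "Grandma, I'm in trouble and need money, please buy gift cards today"
    matches = flag_scam_phrases(request)
    if matches:
        print("Request blocked; matched phrases:", matches)
```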

Practical advice: If you receive an urgent money request that seems to be from a family member, verify through alternative channels before taking action.

  • Use another device to directly contact the person allegedly making the request to confirm its legitimacy.
  • Be especially cautious of messages conveying urgency or emotional distress, as these are common tactics in voice cloning scams.
