Voice cloning technology has rapidly advanced to a concerning level of realism, requiring only seconds of audio to create convincing replicas of someone’s voice. While this technology enables legitimate applications like audiobooks and marketing, it simultaneously creates serious vulnerabilities for fraud and scams. A new Consumer Reports investigation reveals alarming gaps in safeguards across leading voice cloning platforms, highlighting the urgent need for stronger protection mechanisms to prevent malicious exploitation of this increasingly accessible technology.
The big picture: Consumer Reports evaluated six major voice cloning tools and found most lack adequate technical safeguards to prevent unauthorized voice cloning.
- Only two of the six platforms tested—Descript and Resemble AI—implemented meaningful technical barriers against non-consensual voice cloning.
- The remaining four platforms (ElevenLabs, Speechify, PlayHT, and Lovo) relied primarily on simple checkbox confirmations that users had the legal right to clone voices, without technical verification.
How the safeguards work: The two companies with stronger protections take distinctly different approaches to verification.
- Descript requires users to read and record a consent statement, using that specific audio to generate the voice clone.
- Resemble AI ensures the first voice clone created is based on audio recorded in real time, making it more difficult to use pre-recorded samples without permission. (A rough sketch combining both approaches appears after this list.)
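Neither company publishes its implementation details, but the two approaches can be pictured together as a simple gate: cloning proceeds only if the sample was recorded live in the current session and its transcript contains the required consent statement. The Python sketch below is purely illustrative; the `VoiceSample` fields, the consent text, and the `consent_gate` function are assumptions made for the example, not either vendor's actual API.

```python
# Hypothetical sketch of a consent gate for voice cloning, not any vendor's
# actual implementation. Assumes an upstream speech-to-text step has already
# produced a transcript of the sample; only the gating logic is shown.
from dataclasses import dataclass
from difflib import SequenceMatcher
import time

CONSENT_STATEMENT = "I consent to the creation of a synthetic copy of my voice."

@dataclass
class VoiceSample:
    transcript: str        # text recognized from the recording (assumed ASR output)
    recorded_at: float     # Unix timestamp reported by the recording client
    recorded_live: bool    # True if captured via the platform's own recorder

def consent_gate(sample: VoiceSample, max_age_seconds: int = 300) -> bool:
    """Allow cloning only for a fresh, live recording that speaks the consent statement."""
    # Resemble-style check (as described): the sample must come from a live,
    # recent recording session, not an uploaded file of unknown origin.
    if not sample.recorded_live:
        return False
    if time.time() - sample.recorded_at > max_age_seconds:
        return False

    # Descript-style check (as described): the recording must contain the
    # required consent statement. Fuzzy matching tolerates minor ASR errors.
    similarity = SequenceMatcher(
        None, sample.transcript.lower(), CONSENT_STATEMENT.lower()
    ).ratio()
    return similarity >= 0.8

# Example: a live recording made moments ago, with a near-exact transcript, passes.
sample = VoiceSample(
    transcript="I consent to the creation of a synthetic copy of my voice",
    recorded_at=time.time(),
    recorded_live=True,
)
print(consent_gate(sample))  # True
```

Fuzzy matching on the transcript tolerates minor speech-recognition errors while still rejecting pre-recorded audio that never speaks the statement.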
Why this matters: Voice cloning scams have become increasingly common, with criminals impersonating loved ones to extract money from victims.
- A typical attack involves cloning a family member’s voice, then contacting relatives while claiming to be in an emergency that requires immediate financial help.
- The emotional manipulation and audio authenticity make these scams particularly effective at bypassing normal skepticism.
Recommended protections: Consumer Reports outlined several measures voice cloning companies should implement to prevent misuse.
- Collecting customers’ payment information to enable tracing of fraudulent content
- Developing robust mechanisms to verify voice ownership before cloning
- Creating better detection tools to identify AI-generated audio
- Preventing cloning of public figures and influential individuals
- Prohibiting audio containing common scam phrases (a minimal example of this kind of screening follows the list)
- Moving from self-service models to supervised voice cloning processes
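Of these, the scam-phrase recommendation is the most straightforward to picture in code: a text screen that runs before any audio is synthesized. The sketch below is a hypothetical illustration, not a Consumer Reports specification; the phrase list and the `screen_script` function are assumptions, and a real filter would need far broader coverage and would still be vulnerable to paraphrasing.

```python
# Illustrative sketch of the "prohibit common scam phrases" recommendation.
# The phrase list, threshold, and screen_script function are assumptions
# made for the example, not a published standard.
import re

SCAM_PHRASES = [
    "wire the money",
    "gift cards",
    "don't tell anyone",
    "i've been arrested",
    "i need bail money",
    "send it right now",
]

def screen_script(text: str) -> list[str]:
    """Return any known scam phrases found in text submitted for synthesis."""
    normalized = re.sub(r"\s+", " ", text.lower())
    return [phrase for phrase in SCAM_PHRASES if phrase in normalized]

# Example: a request to synthesize an urgent plea for untraceable payment is flagged.
hits = screen_script("Grandma, I've been arrested. Please wire the money and don't tell anyone.")
if hits:
    print("Blocked before synthesis; flagged phrases:", hits)
```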
Practical advice: If you receive an urgent money request that seems to be from a family member, verify through alternative channels before taking action.
- Use another device to directly contact the person allegedly making the request to confirm its legitimacy.
- Be especially cautious of messages conveying urgency or emotional distress, as these are common tactics in voice cloning scams.