PSA: AI-generated voice cloning scams are on the rise – secret code recommended

AI voice cloning scams gaining traction: A recent survey by a UK bank reveals the growing reach of AI-generated voice cloning scams, with 28% of respondents reporting they have been targeted.

  • Voice cloning scams involve criminals using AI technology to create convincing imitations of friends or family members’ voices, claiming to be in urgent need of financial assistance.
  • The advancement of AI technology has made it possible to generate realistic voice imitations using as little as three seconds of source material, often easily obtainable from social media videos.
  • These scams represent an evolution of older text-based fraud attempts, with the added realism of voice technology potentially increasing their effectiveness.

Survey findings highlight vulnerability: The Starling Bank survey of over 3,000 people underscores the widespread nature of this problem and the potential risks faced by unsuspecting individuals.

  • Nearly 1 in 10 respondents (8%) admitted they would send money if requested, even if the call seemed suspicious, potentially putting millions at risk.
  • Only 30% of those surveyed expressed confidence in their ability to recognize a voice cloning scam, indicating a significant knowledge gap in fraud prevention.

Recommended countermeasure: To combat these sophisticated scams, experts suggest implementing a “Safe Phrase” system among close friends and family members.

  • A Safe Phrase is a pre-agreed code word or phrase, shared in person with trusted individuals, that is used to verify the authenticity of urgent requests for assistance.

Characteristics of effective Safe Phrases:

  • Simplicity and randomness to ensure ease of use while maintaining security
  • Memorability to facilitate quick recall during potentially stressful situations
  • Uniqueness to prevent confusion with other security measures
  • Personal sharing to minimize the risk of the phrase being compromised
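The Safe Phrase check is a human protocol, but the verification logic behind it can be sketched in code. The snippet below is a hypothetical illustration, not part of any bank's guidance: it stores only a hash of the agreed phrase (so the phrase itself never sits in plain text) and compares a caller's attempt against it in constant time. The example phrase and function names are invented for illustration.

```python
import hashlib
import hmac

def hash_phrase(phrase: str) -> bytes:
    # Normalize case and whitespace so "Purple Walrus" and "purple  walrus" match.
    normalized = " ".join(phrase.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).digest()

def verify_phrase(stored_hash: bytes, spoken: str) -> bool:
    # hmac.compare_digest runs in constant time, avoiding timing leaks.
    return hmac.compare_digest(stored_hash, hash_phrase(spoken))

# Agree the phrase in person, store only its hash.
stored = hash_phrase("purple walrus umbrella")

print(verify_phrase(stored, "Purple  Walrus Umbrella"))  # True
print(verify_phrase(stored, "blue walrus umbrella"))     # False
```

The same properties the article recommends map directly onto the code: randomness lives in the phrase itself, while normalization keeps it easy to use under stress.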

Broader implications: The rise of AI-generated voice cloning scams represents a new frontier in cybercrime, highlighting the need for increased public awareness and education.

  • As AI technology continues to advance, it’s likely that these types of scams will become more sophisticated and harder to detect.
  • The development of effective countermeasures, such as Safe Phrases, may need to evolve alongside the technology to remain effective.
  • This trend underscores the importance of maintaining healthy skepticism and verifying the identity of callers, even when they sound familiar.