AI Voice Calling Scams Are on the Rise – Do You Have a Secret Phrase?

AI voice cloning scams gaining traction: A recent survey by a UK bank reveals a concerning rise in AI-generated voice cloning scams, with 28% of respondents reporting they have been targeted.

  • Voice cloning scams involve criminals using AI technology to create convincing imitations of friends or family members’ voices, claiming to be in urgent need of financial assistance.
  • The advancement of AI technology has made it possible to generate realistic voice imitations using as little as three seconds of source material, often easily obtainable from social media videos.
  • These scams represent an evolution of older text-based fraud attempts, with the added realism of voice technology potentially increasing their effectiveness.

Survey findings highlight vulnerability: The Starling Bank survey of over 3,000 people underscores the widespread nature of this problem and the potential risks faced by unsuspecting individuals.

  • Nearly 1 in 10 respondents (8%) admitted they would send money if requested, even if the call seemed suspicious, a proportion that, extrapolated across the wider population, would put millions at risk.
  • Only 30% of those surveyed expressed confidence in their ability to recognize a voice cloning scam, indicating a significant knowledge gap in fraud prevention.

Recommended countermeasure: To combat these sophisticated scams, experts suggest implementing a “Safe Phrase” system among close friends and family members.

  • A Safe Phrase is a pre-agreed code word or phrase used to verify the authenticity of urgent requests for assistance.
  • Effective Safe Phrases should be simple yet random, easy to remember, distinct from other passwords, and shared in person with trusted individuals.

Characteristics of effective Safe Phrases:

  • Simplicity and randomness to ensure ease of use while maintaining security
  • Memorability to facilitate quick recall during potentially stressful situations
  • Uniqueness to prevent confusion with other security measures
  • Personal sharing to minimize the risk of the phrase being compromised
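
For readers who want a concrete starting point, the sketch below shows one way to generate a phrase with these properties: a few random, everyday words picked with a cryptographically secure random choice. It is only an illustration; the word list, function name, and word count are hypothetical, and any simple phrase agreed in person works just as well.

```python
import secrets

# Hypothetical mini word list; any set of simple, unrelated everyday words will do.
WORDS = [
    "lantern", "pebble", "walnut", "harbor", "velvet", "cricket",
    "maple", "compass", "thimble", "orchid", "satchel", "ember",
]

def make_safe_phrase(num_words: int = 3) -> str:
    """Pick a few random, easy-to-say words to serve as a family Safe Phrase."""
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

if __name__ == "__main__":
    # Example output: "walnut harbor ember" -- agree on it in person, never over text or social media.
    print(make_safe_phrase())
```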

Broader implications: The rise of AI-generated voice cloning scams represents a new frontier in cybercrime, highlighting the need for increased public awareness and education.

  • As AI technology continues to advance, it’s likely that these types of scams will become more sophisticated and harder to detect.
  • Countermeasures such as Safe Phrases may need to evolve alongside the technology to remain effective.
  • This trend underscores the importance of maintaining healthy skepticism and verifying the identity of callers, even when they sound familiar.