AI Voice Calling Scams Are on the Rise – Do You Have a Secret Phrase?

AI voice cloning scams gaining traction: A recent survey by a UK bank reveals a concerning rise in AI-generated voice cloning scams, with 28% of respondents reporting they have been targeted.

  • Voice cloning scams involve criminals using AI to create a convincing imitation of a friend’s or family member’s voice, then calling the victim and claiming to be in urgent need of money.
  • The advancement of AI technology has made it possible to generate realistic voice imitations using as little as three seconds of source material, often easily obtainable from social media videos.
  • These scams represent an evolution of older text-based fraud attempts, with the added realism of voice technology potentially increasing their effectiveness.

Survey findings highlight vulnerability: The Starling Bank survey of over 3,000 people underscores the widespread nature of this problem and the potential risks faced by unsuspecting individuals.

  • Nearly 1 in 10 respondents (8%) admitted they would send money if requested, even if the call seemed suspicious, potentially putting millions at risk.
  • Only 30% of those surveyed expressed confidence in their ability to recognize a voice cloning scam, indicating a significant knowledge gap in fraud prevention.

Recommended countermeasure: To combat these sophisticated scams, experts suggest implementing a “Safe Phrase” system among close friends and family members.

  • A Safe Phrase is a pre-agreed code word or phrase used to verify the authenticity of urgent requests for assistance.
  • Effective Safe Phrases should be simple yet random, easy to remember, distinct from other passwords, and shared in person with trusted individuals.

Characteristics of effective Safe Phrases:

  • Simplicity and randomness to ensure ease of use while maintaining security
  • Memorability to facilitate quick recall during potentially stressful situations
  • Uniqueness to prevent confusion with other security measures
  • Personal sharing to minimize the risk of the phrase being compromised
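
For readers who would rather generate a phrase than invent one, here is a minimal sketch of a diceware-style generator that reflects the “simple yet random” advice above. The word list, phrase length, and function name are illustrative assumptions, not part of Starling Bank’s guidance:

```python
import secrets

# Illustrative mini word list; a real diceware-style list has thousands of words.
WORDS = [
    "maple", "orbit", "candle", "river", "pixel", "walnut",
    "harbor", "meadow", "lantern", "comet", "saddle", "willow",
]

def make_safe_phrase(num_words: int = 3) -> str:
    """Pick a few random, easy-to-say words to form a memorable Safe Phrase."""
    # secrets.choice draws from a cryptographically secure random source,
    # unlike random.choice, so the phrase is harder to guess.
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

if __name__ == "__main__":
    print(make_safe_phrase())  # e.g. "willow comet harbor"
```

However the phrase is chosen, the advice above still applies: agree on it in person with trusted family and friends, and keep it separate from your passwords.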

Broader implications: The rise of AI-generated voice cloning scams represents a new frontier in cybercrime, highlighting the need for increased public awareness and education.

  • As AI technology continues to advance, it’s likely that these types of scams will become more sophisticated and harder to detect.
  • The development of effective countermeasures, such as Safe Phrases, may need to evolve alongside the technology to remain effective.
  • This trend underscores the importance of maintaining healthy skepticism and verifying the identity of callers, even when they sound familiar.