AI Voice Scams Are Surging — Here’s How to Protect Yourself

AI voice-cloning scams pose a growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money.

  • The UK-based online bank reports that scammers can clone a person’s voice from just three seconds of audio found online, such as in social media videos.
  • Fraudsters then use the cloned voice to impersonate the victim and contact their friends or family members, asking for money under false pretenses.

Survey reveals alarming trends: A recent study conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning scams.

  • Over a quarter of respondents reported being targeted by such scams in the past year.
  • 46% of those surveyed were unaware that these scams existed.
  • 8% of respondents admitted they would send money if requested by a friend or family member, even if the call seemed suspicious.

Cybersecurity expert sounds alarm: Lisa Grahame, chief information security officer at Starling Bank, emphasizes the need for increased awareness and caution.

  • Grahame points out that people often post content online containing their voice without realizing it could make them vulnerable to fraudsters.
  • The bank recommends establishing a “safe phrase” with loved ones to verify identity during phone calls.

Safeguarding against voice-cloning scams: Starling Bank offers advice on how to protect against these increasingly sophisticated frauds.

  • The recommended “safe phrase” should be simple, random, and easy to remember, and it should differ from any passwords already in use.
  • Sharing the safe phrase via text is discouraged, but if necessary, the message should be deleted once received.

AI advancements raise concerns: The increasing sophistication of AI in mimicking human voices has sparked worries about potential misuse.

  • There are growing fears about AI’s ability to help criminals access bank accounts and spread misinformation.
  • OpenAI, the creator of ChatGPT, has developed a voice replication tool called Voice Engine but has not made it publicly available due to concerns about synthetic voice misuse.

Broader implications for AI security: The rise of AI voice-cloning scams underscores the need for enhanced cybersecurity measures and public awareness.

  • As AI technology continues to advance, it’s likely that new forms of fraud and deception will emerge, requiring ongoing vigilance from both individuals and institutions.
  • The situation highlights the importance of responsible AI development and deployment, balancing innovation with safeguards against potential misuse.
