AI Voice Scams Are Surging — Here’s How to Protect Yourself

AI voice-cloning scams pose growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money.

  • The UK-based online bank reports that scammers can clone a person’s voice from just three seconds of audio found online, such as in social media videos.
  • Fraudsters then use the cloned voice to impersonate the victim and contact their friends or family members, asking for money under false pretenses.

Survey reveals alarming trends: A recent survey conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning scams.

  • Over a quarter of respondents reported being targeted by such scams in the past year.
  • 46% of those surveyed were unaware that these scams existed.
  • 8% of respondents admitted they would send money if requested by a friend or family member, even if the call seemed suspicious.

Cybersecurity expert sounds alarm: Lisa Grahame, chief information security officer at Starling Bank, emphasizes the need for increased awareness and caution.

  • Grahame points out that people often post content online containing their voice without realizing it could make them vulnerable to fraudsters.
  • The bank recommends establishing a “safe phrase” with loved ones to verify identity during phone calls.

Safeguarding against voice-cloning scams: Starling Bank offers advice on how to guard against these sophisticated frauds.

  • The recommended “safe phrase” should be simple, random, and easy to remember, but different from other passwords.
  • Sharing the safe phrase via text is discouraged, but if necessary, the message should be deleted once received.

AI advancements raise concerns: The increasing sophistication of AI in mimicking human voices has sparked worries about potential misuse.

  • There are growing fears about AI’s ability to help criminals access bank accounts and spread misinformation.
  • OpenAI, the creator of ChatGPT, has developed a voice replication tool called Voice Engine but has not made it publicly available due to concerns about synthetic voice misuse.

Broader implications for AI security: The rise of AI voice-cloning scams underscores the need for enhanced cybersecurity measures and public awareness.

  • As AI technology continues to advance, it’s likely that new forms of fraud and deception will emerge, requiring ongoing vigilance from both individuals and institutions.
  • The situation highlights the importance of responsible AI development and deployment, balancing innovation with safeguards against potential misuse.
