AI Voice Scams Are Surging — Here’s How to Protect Yourself

AI voice-cloning scams pose growing threat: Starling Bank warns that millions could fall victim to fraudsters using artificial intelligence to replicate voices and deceive people into sending money.

  • The UK-based online bank reports that scammers can clone a person’s voice from just three seconds of audio found online, for example in videos posted to social media.
  • Fraudsters then use the cloned voice to impersonate the victim and contact their friends or family members, asking for money under false pretenses.

Survey reveals alarming trends: A recent study conducted by Starling Bank and Mortar Research highlights the prevalence and potential impact of AI voice-cloning scams.

  • Over a quarter of respondents reported being targeted by such scams in the past year.
  • 46% of those surveyed were unaware that these scams existed.
  • 8% of respondents admitted they would send money if requested by a friend or family member, even if the call seemed suspicious.

Cybersecurity expert sounds alarm: Lisa Grahame, chief information security officer at Starling Bank, emphasizes the need for increased awareness and caution.

  • Grahame points out that people often post content online containing their voice without realizing it could make them vulnerable to fraudsters.
  • The bank recommends establishing a “safe phrase” with loved ones to verify identity during phone calls.

Safeguarding against voice-cloning scams: Starling Bank offers advice on how to protect oneself from these sophisticated frauds.

  • The recommended “safe phrase” should be simple, random, and easy to remember, but different from other passwords.
  • Sharing the safe phrase via text is discouraged, but if necessary, the message should be deleted once received.

AI advancements raise concerns: The increasing sophistication of AI in mimicking human voices has sparked worries about potential misuse.

  • There are growing fears about AI’s ability to help criminals access bank accounts and spread misinformation.
  • OpenAI, the creator of ChatGPT, has developed a voice replication tool called Voice Engine but has not made it publicly available due to concerns about synthetic voice misuse.

Broader implications for AI security: The rise of AI voice-cloning scams underscores the need for enhanced cybersecurity measures and public awareness.

  • As AI technology continues to advance, it’s likely that new forms of fraud and deception will emerge, requiring ongoing vigilance from both individuals and institutions.
  • The situation highlights the importance of responsible AI development and deployment, balancing innovation with safeguards against potential misuse.
Source article: “This bank says ‘millions’ of people could be targeted by AI voice-cloning scams”
