AI-powered fraud apps surge 300% on iOS, even worse on Android

A new study reveals a dramatic surge in fraudulent mobile apps powered by artificial intelligence, with iOS seeing a 300% increase and Android experiencing a 600% spike in fake applications during 2025. The research from DV Fraud Lab, a digital fraud detection company, highlights how AI tools are enabling cybercriminals to create more convincing fraudulent apps that can bypass traditional app store security measures, targeting both unsuspecting users and advertisers.

What you should know: DV Fraud Lab’s research shows fraudulent apps are using two primary attack vectors to exploit mobile ecosystems.

  • Fake versions of popular apps like Facebook attempt to steal user credentials by mimicking legitimate login processes.
  • Other fraudulent apps focus on generating fake traffic to collect illegitimate advertising revenue once accepted into app stores.

How AI is fueling the problem: Artificial intelligence has become the primary driver behind this surge, making fraud schemes more sophisticated and harder to detect.

  • “Fraud schemes are leveraging AI not just to generate fake traffic, but also to build more convincing and realistic-looking apps – making it harder for marketplace reviewers to identify and reject them at scale,” the study found.
  • AI-generated app descriptions help fraudulent applications pass initial app store reviews by creating convincing language that mimics legitimate software.
  • These apps can simulate legitimate user behavior, making them difficult to identify with traditional fraud-detection methods; a sketch of one such rule-based check follows.
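
For illustration, here is a minimal sketch of the kind of simple rule-based check that AI-simulated behavior can slip past: flagging sessions whose event timing is suspiciously regular. The function name, thresholds, and sample data are hypothetical and not drawn from the study.

```python
# Minimal sketch of a traditional rule-based traffic check: flag sessions whose
# tap/click timing is nearly uniform. Thresholds and field names are invented
# for this example; this is not DV Fraud Lab's detection method.
from statistics import pstdev
from typing import List


def looks_scripted(event_timestamps: List[float],
                   min_events: int = 10,
                   min_jitter_seconds: float = 0.15) -> bool:
    """Return True if the gaps between events are nearly identical.

    Human interaction tends to have irregular gaps; a naive bot often fires
    events on a fixed timer. A bot that adds realistic jitter evades exactly
    this kind of check, which is the article's point.
    """
    if len(event_timestamps) < min_events:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
    return pstdev(gaps) < min_jitter_seconds


# A bot clicking every 2.0 seconds is flagged; jittered, human-like timing is not.
print(looks_scripted([i * 2.0 for i in range(12)]))   # True
print(looks_scripted([0, 1.7, 4.1, 5.0, 7.8, 9.2, 11.5,
                      13.9, 14.6, 17.0, 18.8, 21.3]))  # False
```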

The fake review ecosystem: Bots are generating inauthentic app reviews, often with little effort to make them seem genuine.

  • In one example, a gaming app’s reviews repeated near-identical language that didn’t match the app’s nature.
  • Multiple 5-star reviews referred to the gaming app as “professional software” or contained broken English like “Contains many problems, I like”; the sketch below shows how such near-duplicate text can be surfaced.
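
As an illustration of how repetitive review text can be flagged, the sketch below pairs up reviews whose wording is nearly identical using a simple text-similarity ratio. The threshold and the sample reviews are invented for the example; this is not the study's detection pipeline.

```python
# Illustrative check for clusters of near-duplicate review text. The similarity
# threshold and sample reviews are made up for this sketch.
from difflib import SequenceMatcher
from itertools import combinations
from typing import List, Tuple


def repetitive_pairs(reviews: List[str], threshold: float = 0.8) -> List[Tuple[str, str]]:
    """Return pairs of reviews whose text is suspiciously similar."""
    flagged = []
    for a, b in combinations(reviews, 2):
        # ratio() returns 1.0 for identical strings, ~0.0 for unrelated ones
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            flagged.append((a, b))
    return flagged


reviews = [
    "Great professional software, very useful",
    "Great professional software, so useful",
    "Fun puzzle game, my kids love the weekend levels",
]
for a, b in repetitive_pairs(reviews):
    print(f"near-duplicate: {a!r} / {b!r}")
```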

Why this matters: The accessibility of AI-powered fraud tools has lowered the barrier to entry for cybercriminals.

  • DV Fraud Lab reports that specific AI-powered tools and websites now make it easier than ever for non-coders to create fraudulent apps.
  • The firm calls this “a critical moment” for Apple and Google to review their app vetting processes.
  • Traditional fraud detection methods are becoming less effective against AI-enhanced deceptive apps.