back
Get SIGNAL/NOISE in your inbox daily

Artificial intelligence has fundamentally changed how people search for information online, but this technological leap has created an unexpected vulnerability: scammers are now exploiting AI-powered search results to steal money from unsuspecting users looking for customer service numbers.

Unlike traditional search engines that display multiple results for verification, AI systems like Google’s AI Overviews and ChatGPT often present a single, authoritative-seeming answer. This streamlined approach, while convenient, creates a perfect storm for fraud when criminals manage to inject fake contact information into these AI responses.

How scammers are exploiting AI search

The mechanics of this scam are deceptively simple yet sophisticated. Criminals have discovered ways to manipulate AI systems into displaying fraudulent phone numbers when users search for customer service contacts. When someone searches for “Royal Caribbean customer service” or “Amazon support number,” they might receive what appears to be an official response from the AI—complete with a fake phone number controlled by scammers.

Alex Rivlin, owner and CEO of real estate firm Rivlin Group, learned this lesson the hard way. Despite considering himself cautious about online security, Rivlin fell victim to a Royal Caribbean scam that began with what seemed like a legitimate phone number from Google’s AI search results.

“I pride myself on being cautious,” Rivlin shared in a Facebook post. “I don’t click links, I don’t give personal info over the phone, and I always verify. But I still got caught in a very sophisticated scam—and it all started with what looked like a legit phone number for Royal Caribbean I found on Google.”

The scammers demonstrated remarkable preparation, providing accurate pricing information, industry terminology, and specific details about shuttle services. Only after discovering fraudulent charges on his credit card statement did Rivlin realize he’d been duped.

A similar incident involved Swiggy Instamart, an Indian food delivery service. When a customer’s order arrived incomplete, they searched Google for “Swiggy customer care number” and called the number that appeared in the results. The fake customer service representative asked legitimate-sounding questions before requesting the caller’s WhatsApp number and asking them to share their screen—red flags that prompted the customer to end the call. Notably, Swiggy doesn’t actually offer phone support, relying instead on chat-based assistance.

Why AI makes this problem worse

Traditional search engines present users with multiple results from various sources, naturally encouraging comparison and verification. However, AI-powered search systems are designed to provide definitive answers, often presenting a single response that appears authoritative and complete. This design philosophy, while improving user experience in legitimate cases, inadvertently increases the likelihood that users will trust and act on fraudulent information.

The problem extends beyond Google’s systems. Scammers have also successfully manipulated ChatGPT and other AI platforms using similar techniques. Security experts at Odin and ITBrew recently demonstrated how hackers can use “prompt injection”—essentially feeding specific commands to AI systems—to force platforms like Google Gemini to include scam messages and fake customer service numbers in their responses.

When an AI system encounters these injected commands, it treats them as legitimate instructions, incorporating the fraudulent information into what appears to be a standard, helpful response to a user’s query.

Company responses and ongoing challenges

Google acknowledges the problem and claims to have “strong protections and policies to prevent scams from appearing in AI Overviews or ranking highly on Search.” The company states its systems are “effective at surfacing official customer service information for the queries people search most” and that it has “taken action on several of the examples shared.”

Similarly, OpenAI reports that many pages containing fake numbers referenced by ChatGPT have been removed, though the company notes that such updates can take time to implement across all systems.

However, the cat-and-mouse nature of this problem means that as companies close one avenue of attack, scammers adapt and find new methods to exploit AI systems.

Protecting yourself from AI-powered scams

Bypass AI search entirely
The most reliable protection is avoiding AI-powered search results when looking for customer service information. Add “–AI” to your Google search query to access traditional search results that show multiple sources for comparison. Better yet, navigate directly to the company’s official website to find contact information.

Verify before you call
Before calling any customer service number found through search, cross-reference it with the official company website. Many businesses don’t actually offer phone support, relying instead on email, chat, or online support systems.

Recognize common scam tactics
Legitimate customer service representatives rarely ask customers to share their screen, provide WhatsApp numbers, or request immediate payment information without proper verification procedures. Be particularly suspicious of agents who seem overly knowledgeable about pricing and services but ask for unusual forms of contact or payment.

Check website authenticity
When visiting websites found through search results, look for signs of legitimacy: proper spelling and grammar, professional formatting, secure HTTPS connections, and official company branding. Suspicious websites often contain odd formatting, unusual fonts, or unexpected characters.

Use Google’s verification tools
Click the three dots next to search results to access Google’s “About this result” feature, which provides information about the source before you visit the website or use contact information.

For business owners
Companies should actively monitor how their customer support information appears in AI search results and work with search engines to ensure accurate contact details are prominently displayed. Consider creating structured data markup on your website to help AI systems identify and display correct customer service information.

The broader implications

This emerging threat highlights a fundamental challenge in the AI era: the same technologies that make information more accessible also create new vulnerabilities for exploitation. As AI systems become more sophisticated and widely adopted, the potential impact of successful manipulations grows correspondingly larger.

The problem is particularly concerning because it targets a basic trust relationship between users and technology. When people search for customer service information, they’re typically experiencing a problem that needs resolution—making them more vulnerable to exploitation and less likely to scrutinize results carefully.

For businesses, this trend represents both a security challenge and a customer service issue. Companies must now actively monitor how their brand appears in AI search results and ensure customers can easily distinguish between legitimate and fraudulent contact information.

As AI continues to reshape how people access information online, the responsibility for security increasingly falls on both technology companies to improve their systems and users to maintain healthy skepticism—even when dealing with seemingly authoritative AI responses.

Recent Stories

Oct 17, 2025

DOE fusion roadmap targets 2030s commercial deployment as AI drives $9B investment

The Department of Energy has released a new roadmap targeting commercial-scale fusion power deployment by the mid-2030s, though the plan lacks specific funding commitments and relies on scientific breakthroughs that have eluded researchers for decades. The strategy emphasizes public-private partnerships and positions AI as both a research tool and motivation for developing fusion energy to meet data centers' growing electricity demands. The big picture: The DOE's roadmap aims to "deliver the public infrastructure that supports the fusion private sector scale up in the 2030s," but acknowledges it cannot commit to specific funding levels and remains subject to Congressional appropriations. Why...

Oct 17, 2025

Tying it all together: Credo’s purple cables power the $4B AI data center boom

Credo, a Silicon Valley semiconductor company specializing in data center cables and chips, has seen its stock price more than double this year to $143.61, following a 245% surge in 2024. The company's signature purple cables, which cost between $300-$500 each, have become essential infrastructure for AI data centers, positioning Credo to capitalize on the trillion-dollar AI infrastructure expansion as hyperscalers like Amazon, Microsoft, and Elon Musk's xAI rapidly build out massive computing facilities. What you should know: Credo's active electrical cables (AECs) are becoming indispensable for connecting the massive GPU clusters required for AI training and inference. The company...

Oct 17, 2025

Vatican launches Latin American AI network for human development

The Vatican hosted a two-day conference bringing together 50 global experts to explore how artificial intelligence can advance peace, social justice, and human development. The event launched the Latin American AI Network for Integral Human Development and established principles for ethical AI governance that prioritize human dignity over technological advancement. What you should know: The Pontifical Academy of Social Sciences, the Vatican's research body for social issues, organized the "Digital Rerum Novarum" conference on October 16-17, combining academic research with practical AI applications. Participants included leading experts from MIT, Microsoft, Columbia University, the UN, and major European institutions. The conference...