Surfshark report reveals alarming data collection by AI chatbots

AI-powered chatbots have become essential tools for information gathering and content creation, but they come with significant privacy trade-offs. A new Surfshark analysis reveals striking differences in data collection practices among popular AI services, with some platforms collecting up to 90% of possible data types. The findings highlight the hidden costs of “free” AI assistance and underscore the importance of privacy awareness when selecting AI tools.

The big picture: All 10 popular AI chatbots analyzed by Surfshark collect some form of user data, with the average service collecting 13 out of 35 possible data types.

  • Nearly half (45%) of the examined AI apps gather location data from users.
  • Almost 30% of these AI services track user information for targeted advertising purposes.

Behind the numbers: Meta AI emerged as the most aggressive data collector, harvesting 32 out of 35 possible data types—representing 90% of all potential user information.

  • Google Gemini follows as the second most data-hungry AI, collecting 22 different data types.
  • Other significant collectors include Poe (14 data types), Claude (13 data types), and Microsoft Copilot (12 data types).
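The figures above can be tallied in a short script. This is a minimal sketch using only the app names and counts reported in this article; the percentages are computed directly from the raw counts, so they may differ by a point from the article's rounding.

```python
# Data-type counts reported in Surfshark's analysis (out of 35
# tracked categories); names and counts are taken from the article.
REPORTED_COUNTS = {
    "Meta AI": 32,
    "Google Gemini": 22,
    "Poe": 14,
    "Claude": 13,
    "Microsoft Copilot": 12,
}

TOTAL_TYPES = 35  # distinct data categories tracked in the study


def share_of_types(count: int, total: int = TOTAL_TYPES) -> float:
    """Return the share of tracked data types as a percentage."""
    return 100 * count / total


# Rank the listed services from most to least data-hungry.
ranking = sorted(REPORTED_COUNTS.items(), key=lambda kv: kv[1], reverse=True)

for app, count in ranking:
    print(f"{app}: {count}/{TOTAL_TYPES} ({share_of_types(count):.0f}%)")
```

Running this reproduces the article's ordering, with Meta AI at the top and Microsoft Copilot at the bottom of the listed services.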

Key details: Surfshark’s analysis examined privacy information from Apple’s App Store alongside privacy policies for services like DeepSeek and ChatGPT to create a comprehensive picture of data collection practices.

  • The study tracked 35 distinct categories of user information, including sensitive data like contact details, health information, financial data, location, and biometric identifiers.
  • Particularly concerning is the collection of “sensitive info,” which can include racial data, sexual orientation, pregnancy information, religious beliefs, and political opinions.

Why this matters: While data collection is standard practice across digital platforms, the extensive harvesting by AI chatbots raises significant privacy concerns as these tools become increasingly embedded in daily workflows and personal assistance.

  • Users often trade personal data for “free” AI services without fully understanding the scope of information being collected.
  • This data can potentially be used for targeted advertising, algorithmic profiling, or shared with third parties without explicit user awareness.

Reading between the lines: The dramatic variation in data collection practices between different AI providers suggests that extensive data harvesting isn’t technically necessary for providing AI assistant services.

  • Services collecting fewer data types demonstrate that AI functionality doesn’t inherently require the level of surveillance implemented by the most aggressive collectors.
