
Google’s Gemini AI caught scanning private Google Drive documents without user consent, raising privacy concerns amid the tech industry’s AI push.

User discovers Gemini AI scanning private files: Kevin Bankston, a Senior Advisor on AI Governance, took to Twitter to share his experience of Google’s Gemini AI automatically summarizing his tax return stored in Google Drive without his permission:

  • Bankston was surprised to find that Gemini had ingested and summarized his private document, despite never having requested or enabled such a feature.
  • The incident raises serious questions about the extent of control users have over their sensitive information and Google’s handling of private data in the context of its AI services.

Confusion over privacy settings and glitches: Both Google support and the Gemini AI itself seemed uncertain about the cause of this issue, with Bankston theorizing potential glitches or internal system malfunctions:

  • The privacy settings that supposedly control which documents Gemini may scan could not be found where expected, suggesting either that the AI was “hallucinating” their existence or that broader technical issues were at play.
  • Even after locating the relevant settings toggle, Bankston discovered that Gemini summaries were already disabled for Gmail, Drive, and Docs — a discrepancy between the configured settings and the AI’s actual behavior.
  • The issue may be localized to Google Drive and potentially caused by Bankston’s earlier enrollment in Google Workspace Labs, which could be overriding Gemini’s intended settings.

Implications for user consent and privacy: Regardless of the specific technical cause, Google’s failure to respect granular user consent, particularly with sensitive information, raises significant concerns:

  • Even if the issue is isolated to a segment of users, such as Google Workspace Labs participants, it represents a severe breach of trust for those who helped test Google’s latest technologies.
  • The incident underscores the importance of obtaining explicit user permission on a case-by-case basis, especially when dealing with potentially sensitive data like financial documents.
  • Google’s apparent inability to ensure Gemini AI adheres to users’ privacy settings calls into question the company’s commitment to user consent and data protection as it rapidly expands its AI offerings.

Analyzing deeper: The Gemini AI incident is a troubling example of how the tech industry’s aggressive push towards AI adoption may be outpacing considerations for user privacy and consent. As companies like Google race to integrate AI into their services, the risk of sensitive user data being accessed or processed without explicit permission will likely increase.

This incident also highlights the need for clearer communication and transparency from tech giants about how their AI systems interact with user data. Users should be able to easily understand and control which of their documents and information are being analyzed by AI services, without having to navigate complex settings or encounter unexpected glitches.

Moreover, the fact that even Google’s own support team and Gemini AI were unclear about the cause of this issue suggests a lack of internal clarity and oversight regarding AI’s access to private user data. As AI becomes more deeply embedded in tech platforms, companies must prioritize robust governance frameworks and accountability measures to prevent such breaches of user trust.

Ultimately, while the specific details of this incident may be unique to Google’s Gemini AI, it serves as a cautionary tale for the broader tech industry. As AI continues to evolve and permeate various services, ensuring that user privacy and consent remain at the forefront will be critical to maintaining public trust and preventing the misuse of sensitive personal information.
