Survey reveals 78% of workers use AI tools without company oversight

Most workers using artificial intelligence tools at their jobs operate with virtually no oversight, creating significant legal and financial risks that many companies haven’t fully grasped. While businesses rush to harness AI’s productivity benefits, they’re inadvertently exposing themselves to data breaches, compliance violations, and potential litigation.

A recent survey by EisnerAmper, a New York-based business advisory firm, reveals that only 22 percent of U.S. desk workers who use AI tools report that their companies actively monitor this usage. This means roughly four out of five employees are deploying AI systems without meaningful supervision—even when their employers have established safety protocols or legal guidelines.

The stakes couldn’t be higher. AI tools can “hallucinate,” generating completely false information while presenting it as factual. They can also “leak” sensitive data, meaning confidential information uploaded to AI systems may later surface in responses to other users’ queries. For businesses handling client data, financial records, or strategic plans, these risks translate directly into regulatory violations and costly legal exposure.

The scope of risky AI behavior

The EisnerAmper survey, which questioned over 1,000 college-educated workers who had used AI within the past year, uncovered troubling patterns. Despite 68 percent of respondents reporting regular encounters with AI mistakes, 82 percent remained confident that AI tools generally provided accurate outputs. This disconnect suggests workers may not fully understand the reliability limitations of the technology they’re using daily.

Jen Clark, director of technology enablement at EisnerAmper, identified a clear “communication gap” between employers and staff regarding both beneficial and problematic AI practices. This gap has real-world consequences that extend far beyond simple productivity miscalculations.

Recent reports illustrate just how cavalier workers have become with AI. IT workers and managers have shared examples of employees uploading potentially sensitive data to AI systems, with many describing their companies as “sleepwalking into a compliance minefield” where data theft becomes a real possibility.

The problem intensifies when employees feel their company-provided AI tools are inadequate. Workers often turn to personal AI applications at work, either because their employers provide no AI systems at all or because the approved tools seem insufficient for their needs.

Perhaps most concerning, research shows that 58 percent of U.S. workers admitted to pasting sensitive company data into AI systems when seeking assistance. This data ranged from client records to confidential financial information and strategic company documents.

Understanding the technical risks

AI systems present two primary data security challenges that business leaders must understand. First, AI hallucination occurs when these systems generate false information while presenting it with complete confidence. Unlike human errors, which often include hedging language or uncertainty markers, AI hallucinations can appear authoritative and well-reasoned while being entirely fabricated.

Second, data leakage represents a more insidious threat. When employees upload information to publicly accessible AI chatbots, that data may resurface later when other users make related queries. Some AI tools explicitly state that shared information will be used to train future models, meaning sensitive client or company data could become permanently embedded in the AI’s knowledge base.

For businesses, these technical limitations create cascading risks. A marketing team member who uploads client campaign strategies to get AI assistance with presentations might inadvertently make that information accessible to competitors using the same AI system. A finance employee seeking help with budget analysis could expose proprietary financial data that later appears in responses to other users’ queries.

These scenarios likely violate customer privacy agreements and could severely impact critical business transactions. Companies preparing for initial public offerings, for instance, face strict regulatory requirements around information disclosure. Unauthorized data exposure through AI systems could trigger regulatory sanctions and expensive litigation.

Building effective AI governance

Companies serious about managing AI risks need comprehensive usage policies, not merely informal guidelines. Effective AI governance starts with selecting tools that align with organizational data privacy requirements. This might mean choosing enterprise-grade AI systems with stronger data protection guarantees, or it could involve allowing public tools like ChatGPT while strictly prohibiting the upload of sensitive business information.

The policy framework should address several key areas. Data classification helps employees understand what information can and cannot be shared with AI systems. Clear tool approval processes ensure workers use vetted AI applications rather than experimenting with unknown platforms. Regular auditing procedures create accountability and help identify policy violations before they escalate into serious problems.
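The data-classification step is the most mechanical of these, so it lends itself to a concrete illustration. The sketch below is a minimal, hypothetical Python example (the category labels, regex patterns, and function names are our assumptions, not from the survey or any particular product) of a gate that screens a prompt for sensitive content before it leaves the company:

```python
import re

# Hypothetical classification rules: each label maps to regex patterns
# suggesting the prompt contains data employees should not share with an
# external AI tool. A real policy would be far more extensive.
CLASSIFICATION_RULES = {
    "client_record": [r"\bclient\s+id\s*[:#]?\s*\d+", r"\baccount\s+number\b"],
    "financial": [r"\brevenue\s+forecast\b", r"\bbudget\b"],
    "credentials": [r"\bapi[_-]?key\b", r"\bpassword\s*[:=]"],
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the labels of any sensitive categories detected in the prompt."""
    found = []
    for label, patterns in CLASSIFICATION_RULES.items():
        if any(re.search(p, prompt, re.IGNORECASE) for p in patterns):
            found.append(label)
    return found

def gate_prompt(prompt: str) -> str:
    """Raise an error instead of forwarding a prompt that matches a rule."""
    labels = classify_prompt(prompt)
    if labels:
        raise PermissionError(
            f"Prompt blocked: contains {', '.join(labels)} data. "
            "See the AI usage policy for approved alternatives."
        )
    return prompt

# Example: this would raise PermissionError before anything is sent.
# gate_prompt("Summarize the budget and revenue forecast for client ID: 4417")
```

Keyword rules this crude would miss plenty in practice; the point is that even a simple checkpoint turns an informal guideline into something enforceable and auditable.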

Implementation requires more than document distribution. Companies need ongoing AI education programs that evolve with the technology landscape. As new AI tools emerge and existing ones change their data handling practices, employee training must adapt accordingly. This continuous education approach serves dual purposes: it protects the company while helping employees use AI more effectively within appropriate boundaries.

Creating practical safeguards

Smart AI governance doesn’t require sophisticated technical infrastructure. Companies can start with basic but effective measures. Establishing approved AI tool lists gives employees clear guidance while allowing IT departments to evaluate security and compliance implications. Creating data handling protocols helps workers understand which information types are appropriate for AI assistance.
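To show how an approved-tool list can be enforced rather than just circulated, here is a hypothetical sketch (the hostnames, data tiers, and proxy hook are illustrative assumptions) of the decision logic a network proxy or browser extension might apply to outbound AI requests:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: AI endpoints IT has vetted for security and
# compliance, mapped to the data-handling tier each one is cleared for.
APPROVED_AI_TOOLS = {
    "chat.internal-llm.example.com": "confidential",   # enterprise deployment
    "api.public-assistant.example.com": "public-only", # no sensitive data
}

def check_tool(request_url: str, data_tier: str) -> None:
    """Reject requests to unapproved tools, or above a tool's cleared tier."""
    host = urlparse(request_url).hostname
    cleared = APPROVED_AI_TOOLS.get(host)
    if cleared is None:
        raise PermissionError(f"{host} is not an approved AI tool.")
    if cleared == "public-only" and data_tier != "public":
        raise PermissionError(
            f"{host} is approved for public data only; "
            f"this request is classified as '{data_tier}'."
        )

# A proxy or browser extension could call check_tool() on each outbound
# request; here we simply demonstrate the decision logic.
check_tool("https://chat.internal-llm.example.com/v1/chat", "confidential")
```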

Regular training sessions should cover both the benefits and risks of AI usage. Employees need to understand not just what they shouldn’t do, but why these restrictions exist. When workers grasp that uploading client data to AI systems could violate privacy agreements and expose the company to lawsuits, they’re more likely to follow guidelines voluntarily.

Documentation and monitoring create essential accountability layers. Companies should maintain records of approved AI tools, track usage patterns where possible, and establish clear reporting procedures for AI-related incidents. This doesn’t require invasive surveillance, but rather transparent systems that help both employers and employees understand AI usage within the organization.
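An audit trail along these lines can stay lightweight and transparent. The sketch below (the field names and file format are illustrative, not any standard) records which approved tool was used, by whom, and at what data tier, without storing prompt contents:

```python
import json
import time

AUDIT_LOG = "ai_usage_audit.jsonl"  # hypothetical append-only log file

def log_ai_usage(user: str, tool: str, data_tier: str, purpose: str) -> None:
    """Append one AI-usage record; prompt contents are never stored."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,
        "data_tier": data_tier,
        "purpose": purpose,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_usage("jdoe", "internal-llm", "public", "draft marketing copy")
```

Logging metadata rather than conversation contents keeps the record useful for audits while avoiding the kind of invasive surveillance the policy is meant to rule out.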

The competitive advantage of responsible AI use

Proper AI governance can become a market differentiator, particularly for companies handling sensitive client information. Law firms, healthcare organizations, financial services companies, and consulting firms can use robust AI policies as selling points when competing for security-conscious clients.

Clients increasingly want assurance that their confidential information won’t be inadvertently exposed through AI systems. Companies that can demonstrate comprehensive AI governance policies and employee training programs position themselves as more trustworthy partners in an era where data breaches regularly make headlines.

The investment in AI governance pays dividends beyond risk mitigation. Well-trained employees using appropriate AI tools can achieve significant productivity gains while maintaining security standards. Companies that get this balance right will outperform competitors who either avoid AI entirely or use it recklessly.

As AI technology continues evolving rapidly, the companies that establish strong governance frameworks now will be better positioned to adapt to new tools and regulations. Rather than viewing AI oversight as a burden, forward-thinking organizations should recognize it as essential infrastructure for sustainable competitive advantage in an AI-driven business environment.
