
Most workers using artificial intelligence tools at their jobs operate with virtually no oversight, creating significant legal and financial risks that many companies haven’t fully grasped. While businesses rush to harness AI’s productivity benefits, they’re inadvertently exposing themselves to data breaches, compliance violations, and potential litigation.

A recent survey by EisnerAmper, a New York-based business advisory firm, reveals that only 22 percent of U.S. desk workers who use AI tools report that their companies actively monitor this usage. This means roughly four out of five employees are deploying AI systems without meaningful supervision—even when their employers have established safety protocols or legal guidelines.

The stakes couldn’t be higher. AI tools can “hallucinate,” generating completely false information while presenting it as factual. They can also “leak” sensitive data, meaning confidential information uploaded to AI systems may later surface in responses to other users’ queries. For businesses handling client data, financial records, or strategic plans, these risks can translate directly into regulatory violations and costly legal exposure.

The scope of risky AI behavior

The EisnerAmper survey, which questioned over 1,000 college-educated workers who had used AI within the past year, uncovered troubling patterns. Despite 68 percent of respondents reporting regular encounters with AI mistakes, 82 percent remained confident that AI tools generally provided accurate outputs. This disconnect suggests workers may not fully understand the reliability limitations of the technology they’re using daily.

Jen Clark, director of technology enablement at EisnerAmper, identified a clear “communication gap” between employers and staff regarding both beneficial and problematic AI practices. This gap has real-world consequences that extend far beyond simple productivity miscalculations.

Recent reports illustrate just how cavalier workers have become with AI usage. IT workers and managers have shared examples of employees uploading potentially sensitive data to AI systems, with many warning that their companies are “sleepwalking into a compliance minefield” where data theft becomes a real risk.

The problem intensifies when employees feel their company-provided AI tools are inadequate. Workers often turn to personal AI applications in their workplaces, either because their employers provided no AI systems or because the approved tools seemed insufficient for their needs.

Perhaps most concerning, research has found that 58 percent of U.S. workers admit to pasting sensitive company data into AI systems when seeking assistance. The data ranges from client records to confidential financial information and strategic company documents.

Understanding the technical risks

AI systems present two primary data security challenges that business leaders must understand. First, AI hallucination occurs when these systems generate false information while presenting it with complete confidence. Unlike human errors, which often include hedging language or uncertainty markers, AI hallucinations can appear authoritative and well-reasoned while being entirely fabricated.

Second, data leakage represents a more insidious threat. When employees upload information to publicly accessible AI chatbots, that data may resurface later when other users make related queries. Some AI tools explicitly state that shared information will be used to train future models, meaning sensitive client or company data could become permanently embedded in the AI’s knowledge base.

For businesses, these technical limitations create cascading risks. A marketing team member who uploads client campaign strategies to get AI assistance with presentations might inadvertently make that information accessible to competitors using the same AI system. A finance employee seeking help with budget analysis could expose proprietary financial data that later appears in responses to other users’ queries.

Scenarios like these would likely violate customer privacy agreements and could derail critical business transactions. Companies preparing for initial public offerings, for instance, face strict regulatory requirements around information disclosure. Unauthorized data exposure through AI systems could trigger regulatory sanctions and expensive litigation.

Building effective AI governance

Companies serious about managing AI risks need comprehensive usage policies, not merely informal guidelines. Effective AI governance starts with selecting tools that align with organizational data privacy requirements. This might mean choosing enterprise-grade AI systems with stronger data protection guarantees, or it could involve allowing public tools like ChatGPT while strictly prohibiting the upload of sensitive business information.

The policy framework should address several key areas. Data classification helps employees understand what information can and cannot be shared with AI systems. Clear tool approval processes ensure workers use vetted AI applications rather than experimenting with unknown platforms. Regular auditing procedures create accountability and help identify policy violations before they escalate into serious problems.
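To make the data classification idea concrete, here is a minimal sketch of a pre-submission check, assuming a simple pattern-based screen; the patterns and function names are hypothetical, and a real deployment would lean on dedicated data loss prevention tooling rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for this sketch; a real policy would be far more
# thorough (document labels, named-entity detection, DLP tooling, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}


def classify_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


def is_safe_to_submit(text: str) -> bool:
    """Block the prompt if any sensitive pattern is detected."""
    findings = classify_prompt(text)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True


if __name__ == "__main__":
    print(is_safe_to_submit("Summarize this CONFIDENTIAL client memo ..."))  # False
    print(is_safe_to_submit("Draft a polite meeting reminder email."))       # True
```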

Implementation requires more than document distribution. Companies need ongoing AI education programs that evolve with the technology landscape. As new AI tools emerge and existing ones change their data handling practices, employee training must adapt accordingly. This continuous education approach serves dual purposes: it protects the company while helping employees use AI more effectively within appropriate boundaries.

Creating practical safeguards

Smart AI governance doesn’t require sophisticated technical infrastructure. Companies can start with basic but effective measures. Establishing approved AI tool lists gives employees clear guidance while allowing IT departments to evaluate security and compliance implications. Creating data handling protocols helps workers understand which information types are appropriate for AI assistance.
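As one illustration of what an approved-tool list and data handling protocol might look like in practice, the sketch below keeps both as a small, machine-readable registry that IT can review. The tool names, data classes, and fields are hypothetical, not a prescription.

```python
from dataclasses import dataclass

# Hypothetical data classifications, ordered from least to most sensitive.
DATA_CLASSES = ["public", "internal", "confidential", "restricted"]


@dataclass
class ApprovedTool:
    name: str
    vendor: str
    max_data_class: str     # highest classification the tool may receive
    trains_on_inputs: bool  # whether the vendor may use inputs for training


# Hypothetical registry maintained and audited by IT.
APPROVED_TOOLS = [
    ApprovedTool("Enterprise chatbot", "ExampleVendor", "confidential", False),
    ApprovedTool("Public chatbot",     "ExampleVendor", "public",       True),
]


def tool_allowed_for(tool_name: str, data_class: str) -> bool:
    """Check whether a tool is approved for a given data classification."""
    for tool in APPROVED_TOOLS:
        if tool.name == tool_name:
            return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(tool.max_data_class)
    return False  # unknown tools are not approved


print(tool_allowed_for("Public chatbot", "confidential"))      # False
print(tool_allowed_for("Enterprise chatbot", "confidential"))  # True
```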

Regular training sessions should cover both the benefits and risks of AI usage. Employees need to understand not just what they shouldn’t do, but why these restrictions exist. When workers grasp that uploading client data to AI systems could violate privacy agreements and expose the company to lawsuits, they’re more likely to follow guidelines voluntarily.

Documentation and monitoring create essential accountability layers. Companies should maintain records of approved AI tools, track usage patterns where possible, and establish clear reporting procedures for AI-related incidents. This doesn’t require invasive surveillance, but rather transparent systems that help both employers and employees understand AI usage within the organization.
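A lightweight way to get that accountability without invasive surveillance is to log only metadata about AI requests, never prompt contents. The sketch below assumes a hypothetical audit-log path and field names and simply illustrates the idea.

```python
import json
import time

AUDIT_LOG = "ai_usage_log.jsonl"  # hypothetical path for this sketch


def log_ai_request(user: str, tool: str, data_class: str, purpose: str) -> None:
    """Append a minimal record of an AI request to an audit log.

    Only metadata is recorded, never the prompt itself, so the log supports
    accountability without monitoring what employees actually wrote.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "user": user,
        "tool": tool,
        "data_class": data_class,
        "purpose": purpose,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record that an analyst used the approved enterprise chatbot
# on internal data to draft a presentation outline.
log_ai_request("analyst_42", "Enterprise chatbot", "internal", "presentation draft")
```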

The competitive advantage of responsible AI use

Proper AI governance can become a market differentiator, particularly for companies handling sensitive client information. Law firms, healthcare organizations, financial services companies, and consulting firms can use robust AI policies as selling points when competing for security-conscious clients.

Clients increasingly want assurance that their confidential information won’t be inadvertently exposed through AI systems. Companies that can demonstrate comprehensive AI governance policies and employee training programs position themselves as more trustworthy partners in an era where data breaches regularly make headlines.

The investment in AI governance pays dividends beyond risk mitigation. Well-trained employees using appropriate AI tools can achieve significant productivity gains while maintaining security standards. Companies that get this balance right will outperform competitors who either avoid AI entirely or use it recklessly.

As AI technology continues evolving rapidly, the companies that establish strong governance frameworks now will be better positioned to adapt to new tools and regulations. Rather than viewing AI oversight as a burden, forward-thinking organizations should recognize it as essential infrastructure for sustainable competitive advantage in an AI-driven business environment.
