Artificial intelligence tools have become ubiquitous in the workplace, yet a significant portion of professionals remain hesitant to embrace them. Understanding this resistance isn’t just academic curiosity—it’s essential business intelligence for organizations implementing AI strategies.
Recent research from Brigham Young University, a private research university in Utah, reveals that AI avoidance often stems from thoughtful consideration rather than technophobia. Jacob Steffen, a professor of information systems at BYU’s Marriott School of Business, and his research team surveyed hundreds of participants to understand why people actively choose not to use generative artificial intelligence tools like ChatGPT, Claude, or Google’s Bard.
“When people actively choose not to use something, there’s often a lot of thought or intention that goes behind it,” Steffen explains. “And with the rapid advancement of AI, it’s important to understand why some people are hesitant to adopt such technologies.”
The findings challenge common assumptions about AI resistance. Rather than fearing a robot apocalypse or job displacement, most non-users harbor more practical concerns about reliability, ethics, and authenticity. For business leaders navigating AI implementation, these insights offer a roadmap for addressing employee hesitation and building trust in AI-powered workflows.
The BYU team employed a two-phase approach to understand AI avoidance. First, they asked participants to describe specific situations where they chose not to use generative AI and explain their reasoning. Using these responses, researchers then designed a second survey where participants rated their likelihood of using or avoiding AI across various scenarios, while also rating their concern levels for different risk factors.
This methodology revealed four primary barriers that consistently emerged across different use cases, from workplace tasks to personal creative projects. These concerns appeared regardless of whether participants were considering AI for writing reports, creating presentations, seeking advice, or generating creative content.
The most frequently cited concern involves doubts about the accuracy and reliability of AI-generated content. Professionals worry that AI systems produce output that appears authoritative but contains subtle errors or outdated facts that could undermine the quality of their work.
This concern manifests particularly strongly in high-stakes business scenarios. Financial analysts hesitate to use AI for market research, worried about inaccurate data analysis. Legal professionals avoid AI-assisted document review due to concerns about missing critical details. Marketing teams question whether AI-generated campaign strategies reflect current market realities.
The reliability concern extends beyond factual accuracy to include consistency and relevance. Many users report that AI outputs can vary significantly between similar prompts, making it difficult to establish reliable workflows. Additionally, AI systems may not understand nuanced business contexts that human experts would naturally consider.
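To make the consistency concern concrete, here is a minimal Python sketch of one way a team might quantify output drift: collect several responses to the same prompt and score their mean pairwise similarity. The sample responses are invented for illustration, not drawn from the study.

```python
from difflib import SequenceMatcher
from itertools import combinations

def consistency_score(responses: list[str]) -> float:
    """Mean pairwise similarity (0-1) across outputs from the same prompt.

    Low scores flag prompts whose outputs drift too much to anchor
    a repeatable workflow.
    """
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0  # a single response is trivially consistent
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)

# Hypothetical outputs from three runs of the same prompt.
runs = [
    "Q3 revenue grew 12% year over year, driven by subscriptions.",
    "Revenue rose roughly 12% in Q3 on subscription growth.",
    "Q3 revenue was flat; hardware sales offset subscription gains.",
]
print(f"consistency: {consistency_score(runs):.2f}")  # divergent runs score low
```

A score like this is only a rough proxy, but running it over a team's standard prompts can reveal which workflows are stable enough to automate and which still need a human in the loop.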
Ethical concerns represent the second major barrier to AI adoption, encompassing worries about dishonesty, academic integrity, and professional authenticity. These concerns often arise when AI use might misrepresent human contribution or violate established norms about original work.
In educational and professional development contexts, this translates to concerns about intellectual shortcuts. As Taylor Wells, Steffen’s research partner, notes: “If you use GenAI for all your assignments, you may get your work done quickly, but you didn’t learn at all. What’s your value as a graduate if you just off-loaded all your intellectual work to a machine?”
Business applications of this concern include worries about presenting AI-generated proposals as original strategic thinking, using AI for client communications without disclosure, or relying on AI for tasks where clients expect human expertise and judgment. Many professionals also worry about the broader societal implications of widespread AI adoption, including potential impacts on employment and creative industries.
Data security and privacy represent significant barriers to AI adoption, particularly among professionals handling sensitive information. Users worry about uploading confidential business data to AI platforms, unsure how this information might be stored, processed, or potentially accessed by unauthorized parties.
These concerns are particularly acute in regulated industries. Healthcare professionals avoid using AI for patient-related tasks due to HIPAA compliance worries. Financial services employees hesitate to input client information into AI tools due to regulatory requirements. Legal professionals avoid AI assistance with confidential client matters due to attorney-client privilege concerns.
Beyond regulatory compliance, many users express general unease about AI companies’ data practices. Questions about whether conversations and uploads are used for model training, how long data is retained, and what happens if security breaches occur create significant hesitation among privacy-conscious professionals.
The fourth major concern involves the loss of human touch and authentic interaction. Many people value the personal element in communication and creative expression, viewing AI assistance as potentially diminishing the authenticity of their work or relationships.
This concern appears strongest in contexts involving personal relationships or emotional expression. Professionals avoid using AI for condolence messages, thank-you notes, or other communications where recipients expect personal thought and effort. Similarly, many resist AI assistance for creative projects intended as gifts or personal expressions.
In business contexts, this translates to concerns about client relationships and team dynamics. Sales professionals worry that AI-generated communications might feel impersonal to prospects. Managers hesitate to use AI for employee feedback or recognition, concerned about maintaining authentic leadership connections. Customer service teams debate whether AI assistance might compromise the personal touch that differentiates their service.
Understanding these four barriers provides actionable insights for organizations implementing AI strategies. Rather than dismissing resistance as technophobia, leaders can address specific concerns through targeted approaches.
For output quality concerns, organizations can implement verification processes, provide AI literacy training, and establish clear guidelines about when human oversight is required. Addressing ethical implications involves developing usage policies, ensuring transparent communication about AI assistance, and maintaining clear boundaries between AI-supported and human-original work.
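As one illustration of what "clear guidelines about when human oversight is required" can look like in practice, the following Python sketch encodes a simple review gate. The stakes tiers and the policy rule are assumptions for the sake of the example, not a prescription from the research.

```python
from dataclasses import dataclass
from enum import Enum

class Stakes(Enum):
    LOW = 1     # internal notes, brainstorming
    MEDIUM = 2  # internal reports, presentations
    HIGH = 3    # client-facing or regulated content

@dataclass
class Draft:
    content: str
    stakes: Stakes
    sources_verified: bool  # has a human checked the cited facts?

def requires_human_review(draft: Draft) -> bool:
    """Illustrative policy gate: anything above low stakes, or any draft
    with unverified sources, must be signed off by a named reviewer."""
    return draft.stakes is not Stakes.LOW or not draft.sources_verified

draft = Draft("AI-generated market summary...", Stakes.HIGH, sources_verified=False)
if requires_human_review(draft):
    print("Route to human reviewer before release.")
```

The value of codifying the rule, rather than leaving it to individual judgment, is that it answers the reliability concern directly: employees know exactly when AI output can stand alone and when it cannot.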
Privacy and risk concerns require robust data governance frameworks, clear policies about what information can be shared with AI tools, and regular security audits. For authenticity concerns, organizations can help employees identify appropriate use cases while preserving human judgment and personal touch in relationship-critical interactions.
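A minimal sketch of the kind of guardrail such a data-sharing policy might include: redacting known sensitive patterns before a prompt leaves the organization. The patterns and the ACCT- identifier format below are hypothetical; a production system would rely on a vetted data-classification service rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real governance would use a vetted
# PII/classification service, not ad hoc regular expressions.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_id": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical internal format
}

def redact(text: str) -> str:
    """Mask known sensitive patterns before text is sent to an AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com re: ACCT-0042917."
print(redact(prompt))
# Summarize the complaint from [EMAIL REDACTED] re: [ACCOUNT_ID REDACTED].
```

Even a simple filter like this gives privacy-conscious employees a concrete assurance: confidential identifiers never reach the vendor's servers, regardless of how individual prompts are written.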
Steffen compares generative AI to a hammer—extraordinarily useful in the right context but unnecessary or even counterproductive in others. This perspective suggests that successful AI implementation requires nuanced understanding of when and how to deploy these tools effectively.
The research indicates that AI resistance often reflects thoughtful consideration rather than blanket opposition. Many hesitant users recognize AI's potential value while maintaining legitimate concerns about specific applications. This suggests that education and clear policy development may do more to increase appropriate AI adoption than persuasion campaigns.
For organizations investing in AI capabilities, these findings emphasize the importance of addressing employee concerns proactively rather than assuming resistance will diminish over time. By acknowledging and systematically addressing quality, ethical, privacy, and authenticity concerns, leaders can build more sustainable AI adoption strategies that leverage both technological capabilities and human judgment effectively.