55% of people using AI at work have no training on its risks

AI in the workplace: A growing concern: New research reveals a significant gap in employee awareness and training regarding the use of artificial intelligence (AI) tools at work, raising cybersecurity concerns.

  • A survey by the National Cybersecurity Alliance (NCA) found that 55% of employees using AI at work have not received any training on its associated risks.
  • Despite 65% of respondents expressing worry about AI-related cybercrime, 38% admitted to sharing confidential work information with AI tools without their employer’s knowledge.
  • Younger workers, particularly Gen Z (46%) and Millennials (43%), were more likely to engage in unauthorized sharing of sensitive information with AI tools.

The misconception of AI tools: Many employees lack understanding of how AI models learn and the potential risks associated with inputting sensitive information.

  • Lisa Plaggemier, executive director of the NCA, notes that people often treat AI tools like Google’s search function, focusing on the output without considering the implications of what they input.
  • There is a general lack of awareness that information entered into AI tools can be used to train the underlying models.

Varied approaches to AI policies: The implementation of AI policies and procedures varies widely across different industries and organizations.

  • Financial services and high-tech companies tend to have more stringent policies in place.
  • Many organizations are still in the process of developing their AI policies, while others have yet to address the issue.

Challenges in effective training: Even when companies provide AI and cybersecurity training, ensuring compliance remains a significant challenge.

  • Plaggemier shared an example of a Fortune 100 company where developers violated AI policies immediately after completing explicit cybersecurity training.
  • This highlights the need for more effective training methods and better enforcement of AI policies.

The role of leadership in AI governance: Employers bear the primary responsibility for establishing and enforcing AI policies in the workplace.

  • Plaggemier emphasizes that companies need to determine how to leverage AI technology while protecting themselves from associated risks.
  • Individual workers are expected to adhere to their employer’s AI policies and procedures, but these guidelines must be clearly established and communicated first.

Cybersecurity Awareness Month: The research findings coincide with Cybersecurity Awareness Month, underscoring the importance of addressing AI-related security concerns.

  • The annual event typically focuses on traditional cybersecurity measures such as updating antivirus software, using strong passwords, and being vigilant against phishing scams.
  • The emerging risks associated with AI use in the workplace highlight the need to expand cybersecurity awareness efforts to include AI-specific concerns.

Analyzing deeper: The urgency of AI literacy: As AI tools become increasingly prevalent in the workplace, there is a pressing need for comprehensive AI literacy programs that go beyond basic cybersecurity training.

  • Organizations must not only establish clear AI policies but also ensure that employees understand the underlying principles of AI and the potential consequences of misuse.
  • The disconnect between training and practice suggests that a more holistic approach to AI education is necessary, integrating technical knowledge with practical application and ethical considerations.
  • As AI continues to evolve rapidly, ongoing education and policy updates will be crucial to maintaining a secure and responsible AI-enabled work environment.
