Alaska schools adopt flawed policy based on fake AI-generated studies

Alaska's Education Commissioner used artificial intelligence to draft an official education policy document that contained multiple fictional research citations, spotlighting ongoing concerns about AI's tendency to fabricate information.

Key policy details: Alaska’s Education Commissioner Deena Bishop employed artificial intelligence to develop a statewide policy restricting student cell phone use in schools.

  • The AI-generated policy document included six research citations intended to support classroom phone restrictions
  • Four of these citations referenced nonexistent studies, though they were attributed to legitimate organizations like the American Psychological Association
  • Despite claims that fake citations were removed from early drafts, the Anchorage Daily News found fabricated references remained in the board-approved version

AI hallucination patterns: The artificial intelligence system demonstrated a common flaw known as “hallucination,” where AI models generate false but plausible-sounding information.

  • The AI created detailed but entirely fictional citations, including a fake 2019 APA study and a nonexistent 2017 Journal of Educational Psychology paper
  • The fabricated citations aligned with real research showing cell phones can impede learning, but the AI invented specific studies rather than citing actual published work
  • This pattern mirrors similar incidents where AI-generated legal documents included false case citations

Technical context: The incident highlights fundamental challenges in current AI technology that even industry leaders acknowledge.

  • Google CEO Sundar Pichai has described AI hallucinations as an “unsolved problem” in large language models
  • These models can struggle with factual accuracy, producing convincing but false information
  • The technology’s tendency to fabricate citations poses particular risks for official policy documents that require rigorous sourcing

Future implications: This case raises serious questions about the appropriate role of AI in government policy-making and the need for human oversight.

  • Government officials need robust verification processes when using AI to draft policy documents; the sketch after this list shows what automated citation checking might look like
  • While AI can be a powerful drafting tool, its current limitations make it unsuitable for tasks requiring strict factual accuracy without careful human review
  • The incident may lead to more stringent guidelines about AI use in official government communications
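As an illustration of the kind of verification process mentioned above, here is a minimal sketch, not drawn from the article, that checks drafted citations against the public Crossref metadata API. The function name, the example citation titles, and the title-matching heuristic are assumptions for demonstration only.

```python
# Minimal sketch (assumption, not from the article): flag AI-drafted citations
# that cannot be found via the public Crossref metadata API.
from typing import Optional

import requests


def found_in_crossref(title: str, year: Optional[int] = None) -> bool:
    """Return True if Crossref lists a work whose title roughly matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = " ".join(item.get("title", [])).lower()
        date_parts = item.get("issued", {}).get("date-parts", [[None]])
        item_year = date_parts[0][0] if date_parts and date_parts[0] else None
        # Crude containment check; a real reviewer tool would use fuzzy matching.
        titles_match = bool(candidate) and (
            title.lower() in candidate or candidate in title.lower()
        )
        if titles_match and (year is None or item_year == year):
            return True
    return False


# Hypothetical drafted citations a reviewer would want to double-check.
drafted_citations = [
    ("Effects of classroom cell phone bans on student achievement", 2019),
    ("Smartphone use and academic performance: a meta-analysis", 2017),
]

for title, year in drafted_citations:
    if not found_in_crossref(title, year):
        print(f"UNVERIFIED: '{title}' ({year}) needs human review before publication")
```

A check like this only confirms that a matching record exists in a bibliographic database; a human reviewer would still need to confirm that the cited work actually supports the claim attached to it.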

Public trust considerations: Publishing AI-generated content with false citations in official policy documents could undermine public confidence in government decision-making. The episode also underscores the need for transparency about when and how AI is used in policy development.

Source: Alaska School Cell Phone Policy Cites Fake Studies Hallucinated by AI
