When AI goes wrong: What are hallucinations and how are they caused?

AI hallucinations occur when artificial intelligence tools generate incorrect, irrelevant, or fabricated information, as demonstrated by recent high-profile cases involving Google’s Bard and ChatGPT.

Scale of the problem: Even advanced AI models experience hallucinations approximately 2.5% of the time, which translates to significant numbers given the widespread use of these tools.

  • With ChatGPT processing around 10 million queries daily, this error rate could result in 250,000 hallucinations per day
  • The issue compounds if incorrect responses are reinforced as accurate, potentially degrading model accuracy over time
  • Anthropic recently highlighted improved accuracy as a key selling point for its Claude AI model update

Technical understanding: AI hallucinations stem from the fundamental way these systems process and generate information, particularly in their prediction mechanisms.

  • Generative AI operates by predicting the most likely next word or phrase based on patterns in its training data (a minimal sketch of this sampling step follows this list)
  • Visual AI models make educated guesses about pixel placement, which can sometimes lead to errors
  • Large Language Models (LLMs) trained on internet data encounter conflicting information, increasing hallucination risks
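
To make the prediction step concrete, here is a minimal, self-contained sketch of next-token sampling. The vocabulary, prompt, and logit values are invented for illustration and do not come from any real model; the point is that a fluent but wrong continuation can still carry a high probability.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates for the prompt "The capital of Australia is"
candidates = ["Canberra", "Sydney", "Melbourne"]
logits = [2.1, 1.9, 0.4]  # made-up scores: the wrong answer is nearly as likely

probs = softmax(logits)
next_token = random.choices(candidates, weights=probs, k=1)[0]

for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")
print("sampled:", next_token)
```

Because the model samples from this distribution rather than looking facts up, a plausible-sounding error ("Sydney") is always one roll of the dice away.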

Root causes: The primary factors contributing to AI hallucinations include data quality issues and inadequate evaluation processes.

  • Insufficient or low-quality training data is a major contributor
  • Mislabeled or underrepresented data can lead the model to false assumptions
  • Skipping proper model evaluation and fine-tuning increases hallucination frequency (a toy evaluation sketch follows this list)
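
As a rough illustration of what model evaluation can look like, the toy sketch below measures how often a model's answers disagree with a small labeled reference set. The questions, expected answers, and the model_answer_fn placeholder are all hypothetical; real evaluations use far larger datasets and more forgiving answer matching.

```python
# Hypothetical reference set: questions paired with known-correct answers.
reference_set = {
    "What year was the transistor invented?": "1947",
    "Who wrote 'Pride and Prejudice'?": "Jane Austen",
}

def evaluate(model_answer_fn):
    """Return the fraction of reference questions the model gets wrong."""
    wrong = 0
    for question, expected in reference_set.items():
        answer = model_answer_fn(question)
        if expected.lower() not in answer.lower():
            wrong += 1
    return wrong / len(reference_set)

# `model_answer_fn` stands in for whatever model is under test; here a stub
# that always answers "1947" shows how errors surface in the metric.
rate = evaluate(lambda question: "I believe the answer is 1947.")
print(f"error rate: {rate:.0%}")  # 50% on this two-question set
```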

Real-world implications: Recent cases highlight the serious consequences of AI hallucinations in professional settings.

  • Two New York lawyers faced sanctions for citing non-existent court cases generated by ChatGPT
  • Air Canada was legally required to honor a refund policy its customer-service chatbot invented
  • These incidents may spark the emergence of AI model insurance products to protect companies

Mitigation strategies: Industry experts suggest several approaches to reduce AI hallucinations.

  • Training models on high-quality, company-specific datasets can improve accuracy
  • Implementing retrieval augmented generation (RAG) grounds responses in retrieved source documents rather than the model’s memorized training data (see the sketch after this list)
  • Using specific, well-crafted prompts can help guide models toward more accurate responses
  • Maintaining human oversight for critical applications in legal, medical, and financial sectors
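
The RAG idea can be sketched in a few lines: retrieve the documents most relevant to a query, then build a prompt that tells the model to answer only from that retrieved context. The documents, retrieve, and build_prompt names below are illustrative placeholders rather than any specific library's API.

```python
import re

# Toy knowledge base standing in for a company's own documents.
documents = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Customer support is available Monday to Friday, 9am to 5pm.",
]

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, top_k=1):
    # Toy relevance score: count shared words. Real systems use embedding
    # search over a vector store, but the role in the pipeline is the same.
    return sorted(docs, key=lambda d: len(tokenize(query) & tokenize(d)),
                  reverse=True)[:top_k]

def build_prompt(query, context):
    return (
        "Answer using only the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

query = "How many days do customers have to request a refund?"
context = "\n".join(retrieve(query, documents))
print(build_prompt(query, context))  # this prompt would be sent to the model
```

Constraining the model to retrieved text, and telling it to admit when the context is silent, is what reduces the temptation to fill gaps with invented details.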

Future considerations: The challenge of AI hallucinations presents a critical juncture for the technology’s widespread adoption and trustworthiness.

  • Organizations must carefully balance AI deployment with appropriate safeguards
  • The development of better evaluation methods and data quality controls remains essential
  • Success in reducing hallucinations will likely determine the extent of AI integration in sensitive applications
