When AI goes wrong: What are hallucinations and how are they caused?

AI hallucinations occur when artificial intelligence tools generate incorrect, irrelevant, or fabricated information, as demonstrated by recent high-profile cases involving Google’s Bard and ChatGPT.

Scale of the problem: Even advanced AI models experience hallucinations approximately 2.5% of the time, which translates to significant numbers given the widespread use of these tools.

  • With ChatGPT processing around 10 million queries daily, a 2.5% error rate could translate into roughly 250,000 hallucinated responses per day (a back-of-the-envelope calculation follows this list)
  • The issue compounds if incorrect responses are reinforced as accurate, potentially degrading model accuracy over time
  • Anthropic recently highlighted improved accuracy as a key selling point for its Claude AI model update
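
As a rough illustration of how the figures above combine, here is a back-of-the-envelope calculation in Python; the 10 million daily queries and 2.5% hallucination rate are the approximate estimates cited in this article, not measured values.

```python
# Back-of-the-envelope estimate of daily hallucinations,
# using the approximate figures cited above.
daily_queries = 10_000_000      # rough estimate of ChatGPT queries per day
hallucination_rate = 0.025      # ~2.5% of responses contain a hallucination

estimated_hallucinations = daily_queries * hallucination_rate
print(f"Estimated hallucinated responses per day: {estimated_hallucinations:,.0f}")
# -> Estimated hallucinated responses per day: 250,000
```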

Technical understanding: AI hallucinations stem from the fundamental way these systems process and generate information, particularly in their prediction mechanisms.

  • Generative AI operates by predicting the most likely next word or phrase based on patterns in its training data (see the sketch after this list)
  • Visual AI models make educated guesses about pixel placement, which can sometimes lead to errors
  • Large Language Models (LLMs) trained on internet data encounter conflicting information, increasing hallucination risks
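
To make the prediction mechanism concrete, the following is a minimal, hypothetical sketch of next-token selection: candidate tokens receive scores (logits), a softmax turns those scores into probabilities, and a token is sampled. The vocabulary and score values are invented for illustration and do not come from any real model.

```python
import math
import random

# Hypothetical logits (unnormalized scores) a model might assign to
# candidate next tokens after the prompt "The capital of France is".
logits = {"Paris": 9.1, "Lyon": 4.3, "London": 3.8, "banana": 0.2}

# Softmax: convert scores into a probability distribution.
max_logit = max(logits.values())
exps = {tok: math.exp(score - max_logit) for tok, score in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# The model samples from this distribution rather than "looking up" a fact,
# so a low-probability (and possibly wrong) token can still be chosen.
tokens, weights = zip(*probs.items())
next_token = random.choices(tokens, weights=weights, k=1)[0]
print(probs, "->", next_token)
```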

Root causes: The primary factors contributing to AI hallucinations are poor-quality training data and inadequate evaluation processes.

  • Insufficient training data and inadequate model evaluation procedures are major contributors
  • Mislabeled or underrepresented data can lead to false assumptions
  • Lack of proper model evaluation and fine-tuning increases hallucination frequency (a minimal evaluation sketch follows this list)
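
As one way to picture what proper model evaluation can look like in practice, here is a minimal, hypothetical factuality spot-check: model answers are compared against a small set of reference answers to estimate how often a system strays from known-correct responses. The questions, expected answers, and the `ask_model` placeholder are assumptions for illustration, not part of any real pipeline.

```python
# Minimal factuality spot-check: compare model answers against known references.
# `ask_model` is a placeholder for whatever model or API call you actually use.
def ask_model(question: str) -> str:
    raise NotImplementedError("Replace with a call to your model of choice")

reference_qa = [
    ("Who wrote 'Pride and Prejudice'?", "jane austen"),
    ("What is the chemical symbol for gold?", "au"),
    ("In what year did the Apollo 11 moon landing occur?", "1969"),
]

def error_rate(qa_pairs) -> float:
    wrong = 0
    for question, expected in qa_pairs:
        answer = ask_model(question).strip().lower()
        if expected not in answer:   # crude containment check, not a full grader
            wrong += 1
    return wrong / len(qa_pairs)

# print(f"Estimated error rate: {error_rate(reference_qa):.1%}")
```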

Real-world implications: Recent cases highlight the serious consequences of AI hallucinations in professional settings.

  • Two New York lawyers faced sanctions for citing non-existent court cases fabricated by ChatGPT
  • Air Canada was legally required to honor an incorrect refund policy stated by its chatbot
  • These incidents may spark the emergence of AI model insurance products to protect companies

Mitigation strategies: Industry experts suggest several approaches to reduce AI hallucinations.

  • Training models on high-quality, company-specific datasets can improve accuracy
  • Implementing retrieval-augmented generation (RAG) grounds responses in retrieved source material rather than the model’s memory alone (a minimal sketch follows this list)
  • Using specific, well-crafted prompts can help guide models toward more accurate responses
  • Maintaining human oversight for critical applications in legal, medical, and financial sectors
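
To illustrate the retrieval-augmented generation idea mentioned above, here is a minimal sketch under simplified assumptions: relevant passages are retrieved from a small document store by keyword overlap and prepended to the prompt, so the model answers from supplied context rather than memory alone. The documents, scoring method, and prompt wording are invented for illustration; production systems typically use embedding-based vector search.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval here is naive keyword overlap; real systems usually rank
# documents with embedding similarity instead.

documents = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Bereavement fares require supporting documentation within 90 days.",
    "Loyalty points expire after 18 months of account inactivity.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model in retrieved passages instead of its memory alone."""
    joined = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{joined}\n\nQuestion: {query}"
    )

query = "How long do I have to request a refund?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)  # pass this prompt to your model of choice
```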

Future considerations: The challenge of AI hallucinations presents a critical juncture for the technology’s widespread adoption and trustworthiness.

  • Organizations must carefully balance AI deployment with appropriate safeguards
  • The development of better evaluation methods and data quality controls remains essential
  • Success in reducing hallucinations will likely determine the extent of AI integration in sensitive applications
