When AI goes wrong: What are hallucinations and how are they caused?

AI hallucinations occur when artificial intelligence tools generate incorrect, irrelevant, or fabricated information, as demonstrated by recent high-profile cases involving Google’s Bard and ChatGPT.

Scale of the problem: Even advanced AI models experience hallucinations approximately 2.5% of the time, which translates to significant numbers given the widespread use of these tools.

  • With ChatGPT processing around 10 million queries daily, this error rate could result in roughly 250,000 hallucinations per day (see the back-of-envelope calculation after this list)
  • The issue compounds if incorrect responses are reinforced as accurate, potentially degrading model accuracy over time
  • Anthropic recently highlighted improved accuracy as a key selling point for its Claude AI model update
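
As a rough sanity check, the arithmetic behind that estimate is simple; the sketch below just multiplies the 2.5% rate by the query volume cited above.

```python
# Back-of-envelope estimate using the figures cited above.
hallucination_rate = 0.025      # ~2.5% of responses contain a hallucination
daily_queries = 10_000_000      # ~10 million ChatGPT queries per day

expected_per_day = hallucination_rate * daily_queries
print(f"Expected hallucinations per day: {expected_per_day:,.0f}")  # ~250,000
```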

Technical understanding: AI hallucinations stem from the fundamental way these systems process and generate information, particularly in their prediction mechanisms.

  • Generative AI operates by predicting the most likely next word or phrase based on training data (a minimal sketch of this mechanism follows the list)
  • Visual AI models make educated guesses about pixel placement, which can sometimes lead to errors
  • Large Language Models (LLMs) trained on internet data encounter conflicting information, increasing hallucination risks
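
To make that prediction mechanism concrete, here is a minimal, purely illustrative sketch of greedy next-token selection. The five-word vocabulary and the logits are invented for the example and do not come from any real model.

```python
import math

# Toy next-token prediction: a real LLM scores its entire vocabulary at every
# step; this sketch uses an invented five-word vocabulary and made-up logits
# for the prompt "The Eiffel Tower is in ...".
vocabulary = ["Paris", "London", "Rome", "banana", "1889"]
logits = [4.2, 1.1, 0.8, -2.0, 3.9]  # hypothetical raw scores

# Softmax turns raw scores into a probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding picks the single most probable token. Because the model only
# optimizes for a likely-sounding continuation, a plausible but wrong token
# (here, "1889") can still carry substantial probability mass.
for word, p in sorted(zip(vocabulary, probs), key=lambda t: -t[1]):
    print(f"{word:>8}: {p:.3f}")

next_token = max(zip(vocabulary, probs), key=lambda t: t[1])[0]
print("Predicted next token:", next_token)
```

Sampling strategies such as temperature or top-p change which token gets chosen, but the core behavior is the same: the model optimizes for a plausible continuation, not for verified truth.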

Root causes: The primary factors contributing to AI hallucinations include data quality issues and inadequate evaluation processes.

  • Insufficient or low-quality training data is a major contributor
  • Mislabeled or underrepresented data can lead to false assumptions
  • A lack of proper model evaluation and fine-tuning increases hallucination frequency

Real-world implications: Recent cases highlight the serious consequences of AI hallucinations in professional settings.

  • Two New York lawyers faced sanctions for citing non-existent cases from ChatGPT
  • Air Canada was legally required to honor an incorrect refund policy stated by its chatbot
  • These incidents may spark the emergence of AI model insurance products to protect companies

Mitigation strategies: Industry experts suggest several approaches to reduce AI hallucinations.

  • Training models on high-quality, company-specific datasets can improve accuracy
  • Implementing retrieval-augmented generation (RAG) grounds responses in relevant source material rather than the model’s memory alone (see the sketch after this list)
  • Using specific, well-crafted prompts can help guide models toward more accurate responses
  • Maintaining human oversight for critical applications in legal, medical, and financial sectors
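
One way to picture RAG is as a two-step wrapper around the model: retrieve the passages most relevant to a question, then build a prompt that instructs the model to answer only from those passages. The keyword-overlap retrieval and the example documents below are simplifications invented for illustration, not any particular vendor’s API.

```python
# Minimal retrieval-augmented generation (RAG) sketch. A production system
# would use embedding search over a vector store; naive keyword overlap
# stands in for retrieval here.
documents = [
    "Refund requests must be submitted within 90 days of the ticket purchase date.",
    "Bereavement fares are discounted at time of booking only, not retroactively.",
    "Checked baggage fees are waived for elite-status members.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Constrain the model to answer only from the retrieved passages."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{joined}\n\nQuestion: {question}"
    )

question = "Can I get a bereavement discount after I already bought my ticket?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # This prompt would then be sent to the LLM of your choice.
```

In production the retrieval step would typically use embedding similarity over a vector database, but the effect is the same: the model’s answer is grounded in supplied text that can be checked.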

Future considerations: The challenge of AI hallucinations presents a critical juncture for the technology’s widespread adoption and trustworthiness.

  • Organizations must carefully balance AI deployment with appropriate safeguards
  • The development of better evaluation methods and data quality controls remains essential
  • Success in reducing hallucinations will likely determine the extent of AI integration in sensitive applications