Hallucination rates soar in new AI models, undermining real-world use

Recent “reasoning” upgrades to AI chatbots have unexpectedly worsened their hallucination problems, underscoring how hard it remains to make large language models reliable. Testing shows that newer models from leading companies like OpenAI and DeepSeek produce more factual errors than their predecessors, raising fundamental questions about whether AI systems can ever fully overcome their tendency to present false information as truth. The regression signals a critical limitation for industries hoping to deploy AI in research, legal work, and customer service.

The big picture: OpenAI’s technical evaluation reveals its newest models exhibit dramatically higher hallucination rates than previous versions, contradicting expectations that AI systems would improve with each iteration.

  • OpenAI’s o3 model hallucinated 33 percent of the time when summarizing facts about people, while the o4-mini model performed even worse at 48 percent, significantly higher than the previous o1 model’s 16 percent rate (a toy sketch of how such a rate is computed follows this list).
  • This regression isn’t isolated to OpenAI, as models from other developers like DeepSeek have shown similar double-digit increases in hallucination rates.
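
For a concrete sense of what these percentages measure, here is a minimal, hypothetical sketch of how a hallucination rate can be computed on a question-answering benchmark. The Sample type, the grader, and the example data are illustrative assumptions, not OpenAI's actual evaluation code; real benchmarks typically rely on human raters or a stronger model as the judge.

```python
# Hypothetical sketch of scoring a hallucination rate on a QA benchmark.
# All names and data here are illustrative, not OpenAI's evaluation code.

from dataclasses import dataclass

@dataclass
class Sample:
    question: str
    model_answer: str
    gold_answer: str

def is_hallucination(sample: Sample) -> bool:
    """Toy grader: flags an answer as hallucinated when it fails to contain
    the gold answer. Real benchmarks use human raters or a judge model."""
    return sample.gold_answer.lower() not in sample.model_answer.lower()

def hallucination_rate(samples: list[Sample]) -> float:
    """Fraction of answers flagged as hallucinated, e.g. 0.33 -> 33%."""
    flagged = sum(is_hallucination(s) for s in samples)
    return flagged / len(samples)

if __name__ == "__main__":
    samples = [
        Sample("Where was Ada Lovelace born?", "London", "London"),
        Sample("What year did she die?", "1850", "1852"),  # wrong: flagged
        Sample("Who was her mother?", "Annabella Milbanke", "Annabella Milbanke"),
    ]
    print(f"Hallucination rate: {hallucination_rate(samples):.0%}")  # 33%
```

In practice a substring match is far too crude a judge; the point is only that the headline number is the flagged fraction of graded answers.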

Why this matters: Persistent hallucination problems threaten to derail critical applications where factual accuracy is essential.

  • Research assistants, paralegal tools, and customer service bots all become actively harmful when they confidently present false information as fact.
  • These limitations may fundamentally constrain how AI can be safely deployed in high-stakes environments.

The terminology gap: “Hallucination” covers a broader range of AI errors than many realize.

  • Beyond simply inventing facts, hallucinations include providing factually accurate but irrelevant answers or failing to follow instructions; the sketch after this list separates these cases.
  • Understanding these distinctions helps clarify the full scope of reliability challenges facing current AI systems.
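
To make that distinction concrete, here is a small, hypothetical taxonomy in code. The category names and descriptions are illustrative assumptions, not an industry-standard classification.

```python
# Hypothetical taxonomy of "hallucination" as the term is used above.
# The categories are illustrative, not a standard classification scheme.

from enum import Enum, auto

class HallucinationType(Enum):
    FABRICATED_FACT = auto()      # invents information with no factual basis
    IRRELEVANT_TRUTH = auto()     # factually accurate but off-topic answer
    INSTRUCTION_FAILURE = auto()  # ignores or misapplies the user's instructions

def describe(kind: HallucinationType) -> str:
    """Return a short human-readable description of each error class."""
    return {
        HallucinationType.FABRICATED_FACT: "asserts something untrue",
        HallucinationType.IRRELEVANT_TRUTH: "true but does not answer the question",
        HallucinationType.INSTRUCTION_FAILURE: "does not follow the prompt",
    }[kind]

for kind in HallucinationType:
    print(kind.name, "->", describe(kind))
```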

What they’re saying: Experts suggest we may need to significantly limit our expectations of what AI chatbots can reliably do.

  • Some recommend only using these models for tasks where fact-checking the AI’s answer would still be faster than conducting the research yourself.
  • Other experts propose a more conservative approach, suggesting users should “completely avoid relying on AI chatbots to provide factual information.”

The bottom line: Despite technological advancements, the AI industry appears to be confronting a persistent limitation that may require fundamental rethinking of how these systems are designed, trained, and deployed.

Source: AI hallucinations are getting worse – and they're here to stay
