Recent “reasoning upgrades” to AI chatbots have unexpectedly worsened their hallucination problems, underscoring the persistent challenge of making large language models reliable. Testing shows that newer models from leading companies such as OpenAI and DeepSeek produce more factual errors than their predecessors, raising fundamental questions about whether AI systems can ever stop presenting false information as fact. That is a critical limitation for industries hoping to deploy AI for research, legal work, and customer service.
The big picture: OpenAI’s own technical evaluation shows that its newest models hallucinate at markedly higher rates than previous versions, contradicting the expectation that AI systems would grow more accurate with each iteration.
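For context on what such an evaluation measures: a hallucination rate is typically the fraction of a model's answers on a fixed question set that a grader flags as containing unsupported or incorrect claims. The sketch below is a hypothetical, simplified scorer, not OpenAI's actual benchmark harness; the Example records, the substring-based grader, and the hallucination_rate function are all illustrative assumptions.

```python
# Minimal sketch of scoring a hallucination-rate benchmark.
# Hypothetical data and grading rule; real evaluations use curated
# question sets and far more careful human or model-based grading.

from dataclasses import dataclass


@dataclass
class Example:
    question: str
    reference: str      # known-correct answer
    model_answer: str   # what the chatbot actually said


def is_hallucinated(ex: Example) -> bool:
    """Naive grader: flag the answer if the reference fact is absent.

    Stand-in for the per-claim grading a real benchmark would apply.
    """
    return ex.reference.lower() not in ex.model_answer.lower()


def hallucination_rate(examples: list[Example]) -> float:
    """Fraction of graded answers flagged as hallucinated."""
    if not examples:
        return 0.0
    flagged = sum(is_hallucinated(ex) for ex in examples)
    return flagged / len(examples)


if __name__ == "__main__":
    evals = [
        Example("Where was Ada Lovelace born?", "London",
                "Ada Lovelace was born in London in 1815."),
        Example("Where was Ada Lovelace born?", "London",
                "She was born in Paris."),  # fabricated fact
    ]
    print(f"hallucination rate: {hallucination_rate(evals):.0%}")  # 50%
```

Comparing this rate across model generations on the same question set is what lets an evaluation say one model hallucinates more than another.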
Why this matters: Persistent hallucinations threaten to derail applications where factual accuracy is essential, from research and legal work to customer service.
The terminology gap: “Hallucination” covers a broader range of AI errors than many realize, including answers that are factually accurate but irrelevant to the question or that ignore instructions, not just fabricated facts.
What they’re saying: Experts suggest we may need to significantly limit our expectations of what AI chatbots can reliably do.
The bottom line: Despite technological advancements, the AI industry appears to be confronting a persistent limitation that may require fundamental rethinking of how these systems are designed, trained, and deployed.