Former Google AI researcher Raza Habib predicts that AI hallucinations—when chatbots generate false or fabricated information—will be solved within a year, though he questions whether complete elimination is desirable. Speaking at Fortune’s Brainstorm AI conference in London, Habib argued that some degree of hallucination may be necessary for AI systems to generate truly novel ideas and creative solutions.

The technical solution: Habib explains that AI models are naturally well calibrated before human preference training degrades their ability to judge their own accuracy.

  • “If you look at the models before they are fine-tuned on human preferences, they’re surprisingly well calibrated,” Habib said, noting that a model’s confidence correlates well with truthfulness before human feedback training.
  • The challenge lies in preserving this natural calibration while making models more responsive to human preferences through reinforcement learning from human feedback.
  • Habib’s London-based startup Humanloop, which has raised $2.6 million, focuses on making large language model training more efficient.

In plain English: AI models go through three training stages—pre-training, fine-tuning, and reinforcement learning from human feedback. During the first stage, models naturally develop good judgment about when they’re right or wrong. However, the final stage of training, which makes AI more helpful and conversational, accidentally breaks this self-awareness. The solution involves preserving the model’s original confidence calibration while still making it user-friendly.
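To make “well calibrated” concrete: a model is calibrated when its stated confidence matches how often it is actually right. Below is a minimal sketch of expected calibration error (ECE), a standard way to measure this; the confidences and correctness labels are made-up toy numbers, not figures from Habib.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence, then compare each bin's average
    confidence with its actual accuracy; a calibrated model scores near 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy numbers: a base model that says "90%" and is right 9 times out of 10...
base = expected_calibration_error([0.9] * 10, [1] * 9 + [0])
# ...versus a preference-tuned model that says "95%" but is right 7 out of 10.
tuned = expected_calibration_error([0.95] * 10, [1] * 7 + [0] * 3)
print(f"base ECE: {base:.2f}, tuned ECE: {tuned:.2f}")  # 0.00 vs 0.25
```

A base model of the kind Habib describes would score near zero here; the failure mode he attributes to preference training shows up as confidence climbing without a matching gain in accuracy.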

Why perfect accuracy might not be ideal: Habib argues that eliminating all hallucinations could limit AI’s creative potential.

  • “If we want to have models that will one day be able to create new knowledge for us, then we need them to be able to act as conjecture machines; we want them to propose things that are weird and novel,” he explained.
  • For creative tasks, having models “fabricate things that are going off the data domain is not necessarily a terrible thing,” according to Habib; the usual dial for that tradeoff is sampling temperature, sketched after this list.
  • Current user experiences already accommodate imperfect technology, similar to how Google provides ranked search results rather than definitive answers.
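Habib doesn’t name a mechanism for dialing novelty up or down, but the everyday knob is sampling temperature. A minimal sketch over a made-up next-token distribution: low temperature keeps the model conservative, while high temperature spreads probability onto unlikely, “weird and novel” continuations.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Divide logits by temperature before softmax: low values concentrate
    probability on the top token, high values flatten the distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [4.0, 2.0, 1.0, 0.5]               # made-up scores for four tokens
rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    draws = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    share = np.bincount(draws, minlength=4) / 1000
    print(f"T={t}: {share}")                # higher T -> more mass off token 0
```

Creative applications typically sample at higher temperatures, accepting more fabrication risk in exchange for variety.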

Real-world consequences highlighted: The panel discussed Air Canada’s costly chatbot mistake as an example of preventable AI failures.

  • Customer Jake Moffatt was incorrectly told by Air Canada’s chatbot in 2022 that he could retroactively receive bereavement fare discounts after purchasing full-price tickets totaling over $1,200.
  • When Air Canada refused the refund, arguing it was not responsible for the chatbot’s error, a Canadian small-claims tribunal ordered the airline to compensate Moffatt.
  • “They gave the chatbot a much wider range than what it should have been able to say,” Habib said, calling the incident “completely avoidable” with proper testing and guardrails.
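The article doesn’t describe how Air Canada’s bot was built, so the following is only a sketch of the kind of guardrail Habib alludes to, with a hypothetical policy snippet: check a draft reply against approved policy text before it reaches a customer, and escalate to a human when it isn’t grounded.

```python
# Hypothetical policy text the bot may quote from; anything else escalates.
APPROVED_POLICIES = {
    "bereavement": (
        "Bereavement fares must be requested before travel. "
        "Refunds are not applied retroactively after purchase."
    ),
}

FALLBACK = "I'm not able to confirm that. Let me connect you with an agent."

def guarded_reply(draft: str, topic: str) -> str:
    """Release the draft only if every sentence is backed by the approved
    policy text for the topic (a crude substring check); otherwise escalate."""
    policy = APPROVED_POLICIES.get(topic, "")
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    if policy and sentences and all(s in policy for s in sentences):
        return draft
    return FALLBACK

# A retroactive-refund promise like the one Moffatt received has no backing
# in the policy text, so the guardrail swaps it for an escalation.
print(guarded_reply(
    "You can apply for a bereavement refund within 90 days of purchase.",
    "bereavement",
))
```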

What the experts are saying: Industry leaders emphasized the importance of careful AI deployment in customer-facing applications.

  • “Just because something seems to work in a proof of concept, you probably don’t just want to put it straight into production, with real customers who have expectations and terms and conditions,” said Jeremy Barnes, ServiceNow’s VP of AI product.
  • Air Canada disputed the characterization, with a spokesperson telling Fortune that “the chatbot involved in the incident did not use AI” and “predated Generative AI capabilities.”
