Confident nonsense: Google’s AI Overview offers explanations for made-up phrases

Google’s AI Overview feature is exhibiting a peculiar pattern: it generates fictional explanations for made-up idioms, revealing both the creative and the problematic sides of AI-generated search results. When users search for a nonsensical phrase like “A duckdog never blinks twice,” the feature confidently produces a detailed but entirely fabricated meaning and origin story. The trend highlights the ongoing challenge of AI hallucination in search engines, where systems present invented information with the same confidence as factual content.

How it works: Users can trigger these AI fabrications by simply searching for a made-up idiom without explicitly asking for an explanation or backstory.

  • Adding “meaning” at the end of the fictional phrase seems to increase the likelihood of Google generating a detailed explanation, as sketched after this list.
  • The AI doesn’t recognize these phrases as fictional and instead creates plausible-sounding definitions and origins that appear authoritative.
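
Reproducing the trigger takes nothing more than a regular web search. Here is a minimal Python sketch of the query pattern; only the standard Google search URL format is assumed, since there is no official API for AI Overviews:

```python
from urllib.parse import quote_plus

# A made-up idiom from the article; any invented phrase works.
PHRASE = "A duckdog never blinks twice"

def overview_trigger_url(phrase: str) -> str:
    """Build a plain Google search URL for a made-up idiom.

    Appending the word "meaning" reportedly makes an AI Overview
    explanation more likely to appear in the results.
    """
    return f"https://www.google.com/search?q={quote_plus(phrase + ' meaning')}"

print(overview_trigger_url(PHRASE))
# https://www.google.com/search?q=A+duckdog+never+blinks+twice+meaning
```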

The Duckdog experiment: A ZDNET writer tested the phenomenon with a colleague’s invented phrase about her dog.

  • When searching “A duckdog never blinks twice,” Google’s AI immediately produced a confident explanation, claiming the phrase describes a duck-like dog so focused that it never blinks twice.
  • Subsequent searches for the exact same phrase yielded completely different explanations, with one claiming it refers to something “so unusual or unbelievable that it’s almost impossible to accept”; a simple consistency check is sketched below.
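
Checking that inconsistency systematically is straightforward: issue the same question several times and compare the answers. A minimal sketch follows, with a simulated ask_model() standing in for whatever model interface you have; it is not a real Google API, and the canned answers are simply the two explanations reported above:

```python
import random

def ask_model(prompt: str) -> str:
    """Simulated stand-in for a real model call.

    Replace this with an actual query mechanism; here it just
    returns one of the two explanations ZDNET reported seeing.
    """
    return random.choice([
        "a duck-like dog so focused it never blinks twice",
        "something so unusual or unbelievable that it's almost "
        "impossible to accept",
    ])

PROMPT = "A duckdog never blinks twice meaning"

# Collect the distinct answers across repeated identical queries.
answers = {ask_model(PROMPT) for _ in range(5)}

if len(answers) > 1:
    print(f"{len(answers)} different explanations for the same phrase:")
    for answer in answers:
        print(" -", answer)
else:
    print("Consistent answer:", answers.pop())
```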

Why this matters: This trend exposes a fundamental weakness in Google’s AI Overview feature that could undermine user trust in search results.

  • While AI summaries can provide convenient quick answers, this experiment demonstrates they can also present fictional information as factual with complete confidence.
  • The issue echoes Google’s previous AI mishap from a year ago, when its system suggested dangerous recipes like “glue pizza” and “gasoline spaghetti.”

Between the lines: This phenomenon represents a classic example of AI hallucination, where large language models confidently generate plausible-sounding but entirely fictional content when faced with inputs outside their training data.

  • These systems prioritize providing answers over admitting uncertainty, creating a potentially misleading user experience; one prompting mitigation is sketched below.
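
One common mitigation is to make declining an explicitly permitted outcome, so that “I don’t recognize this” can compete with a fabricated answer. A sketch of that prompting pattern, where the wording is purely illustrative and not anything Google has documented:

```python
# Illustrative guarded prompt: give the model explicit permission
# to decline rather than invent a definition and origin story.
GUARDED_PROMPT = (
    "Explain the idiom: {phrase}\n"
    "If this is not an established idiom you recognize, reply "
    "exactly with: UNKNOWN IDIOM. Do not invent a meaning or origin."
)

def build_prompt(phrase: str) -> str:
    return GUARDED_PROMPT.format(phrase=phrase)

print(build_prompt("A duckdog never blinks twice"))
```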
