AI Humor Gap: Why Markov Chains Outshine LLMs in Comedy

The big picture: Markov chains, despite their simplicity, can produce more humorous content than advanced Large Language Models (LLMs) due to their unpredictable nature and ability to create unexpected combinations of words and phrases.

What are Markov chains? Markov chains are primitive statistical models that predict the next word using only the last word or two as context, with no semantic understanding or vector math involved.

  • They can be described as very small, simple, and naive LLMs
  • Markov chains are commonly used in phone keyboards for next word suggestions
  • While less accurate than LLMs for specific tasks, Markov chains excel in generating unexpected and potentially humorous content
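The mechanism described above can be sketched in a few lines. This is a minimal, illustrative implementation (the function names and the toy corpus are my own, not from the article): it builds a table mapping each word to the words seen following it, then walks that table, picking successors at random.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word context to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, sampling a random successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # dead end: context never seen mid-corpus
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
chain = build_chain(corpus)
print(generate(chain, length=8))
```

Because each step only asks "what has ever followed this word?", the walk can splice together fragments from unrelated sentences, which is exactly the source of the unexpected, sometimes funny output the article describes.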

Understanding humor: The essence of humor lies in unserious surprise, with the best jokes involving a pleasant and significant “snap” or unexpected turn.

  • Humor relies on violating patterns and expectations
  • Stronger comedic effects can be achieved through descriptive language and scene realization
  • Humor is subjective, shaped by cultural norms and individual expectations

LLMs and predictability: Large Language Models, designed for accuracy and coherence, often struggle with generating truly creative or humorous content.

  • LLMs excel at producing predictable, average responses based on their extensive training data
  • This predictability makes LLMs less suitable for creative writing and joke generation
  • Early versions of LLMs and image generation models were often funnier due to their imperfections

The humor gap: The contrast between Markov chains and LLMs highlights the challenges in algorithmic humor generation.

  • Markov chains’ unpredictability can lead to unexpected and amusing combinations of words
  • LLMs’ focus on coherence and accuracy can result in bland, corporate-like language
  • Future developments in AI may need to incorporate intentional unpredictability for more human-like creativity

Implications for AI development: The humor discrepancy between Markov chains and LLMs reveals broader challenges in creating truly human-like AI.

  • AI systems may need to balance predictability with unexpected outputs to mimic human creativity
  • Future language models might require a different approach to incorporate humor and personality
  • The ability to detect AI-generated content may increasingly rely on identifying the presence or absence of personality in text
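One existing knob for the predictability-versus-surprise balance mentioned above is sampling temperature: LLMs produce a probability distribution over next tokens, and rescaling it before sampling makes outputs more conservative or more erratic. A small sketch, with a hypothetical next-word distribution of my own invention:

```python
import math
import random

def sample_with_temperature(probs, temperature, rng):
    """Sample a key from `probs` after rescaling by `temperature`.
    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it (surprising output)."""
    scaled = [math.log(p) / temperature for p in probs.values()]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(list(probs), weights=weights, k=1)[0]

# Hypothetical distribution over the next word after "the punchline was ..."
probs = {"funny": 0.7, "predictable": 0.2, "a walrus": 0.1}
rng = random.Random(0)
cold = [sample_with_temperature(probs, 0.2, rng) for _ in range(5)]
hot = [sample_with_temperature(probs, 2.0, rng) for _ in range(5)]
print("T=0.2:", cold)  # almost always the most likely word
print("T=2.0:", hot)   # long-tail words surface far more often
```

At low temperature the model nearly always emits its top choice, the "bland, corporate" mode; at high temperature the tail words ("a walrus") surface often, closer to a Markov chain's happy accidents.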

Analyzing deeper: While Markov chains outperform LLMs in generating humorous content, this observation underscores the complexities of replicating human-like creativity and personality in artificial intelligence systems. As AI continues to advance, developers may need to explore new approaches that deliberately introduce controlled unpredictability to achieve more nuanced and engaging outputs.

Markov chains are funnier than LLMs
