AI Humor Gap: Why Markov Chains Outshine LLMs in Comedy

The big picture: Markov chains, despite their simplicity, can produce more humorous content than advanced Large Language Models (LLMs) because their unpredictability yields unexpected combinations of words and phrases.

What are Markov chains? Markov chains are primitive statistical models that predict the next word from only the current word or a short window of preceding words, with no modeling of semantics and no complex vector math.

  • They can be loosely described as very small, simple, and naive language models
  • Markov chains are commonly used in phone keyboards for next-word suggestions
  • While less accurate than LLMs at specific tasks, Markov chains excel at generating unexpected, and therefore potentially humorous, text (a minimal sketch follows this list)
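
To make the mechanism concrete, here is a minimal sketch of a word-level, order-1 Markov chain in Python. It is an illustration under simple assumptions, not code from the original article; the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, picking a random observed successor at each step."""
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: this word was never followed by another
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

# Toy corpus; any text works, and larger corpora give stranger output.
corpus = "the cat sat on the mat the dog sat on the cat"
chain = build_chain(corpus)
print(generate(chain, "the"))  # e.g. "the dog sat on the cat sat on the mat"
```

Because each step consults only local word-pair statistics, the output can veer into grammatical-but-absurd territory, which is exactly the pattern-breaking surprise the article credits for the humor.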

Understanding humor: The essence of humor lies in unserious surprise, with the best jokes involving a pleasant and significant “snap” or unexpected turn.

  • Humor relies on violating patterns and expectations
  • Stronger comedic effects can be achieved through vivid descriptive language that fully realizes a scene
  • Humor is subjective, shaped by cultural norms and individual expectations

LLMs and predictability: Large Language Models, designed for accuracy and coherence, often struggle with generating truly creative or humorous content.

  • LLMs excel at producing predictable, average responses based on their extensive training data
  • This predictability makes LLMs less suitable for creative writing and joke generation
  • Early versions of LLMs and image generation models were often funnier due to their imperfections

The humor gap: The contrast between Markov chains and LLMs highlights the challenges in algorithmic humor generation.

  • Markov chains’ unpredictability can lead to unexpected and amusing combinations of words
  • LLMs’ focus on coherence and accuracy can result in bland, corporate-like language
  • Future developments in AI may need to incorporate intentional unpredictability for more human-like creativity (see the temperature-sampling sketch after this list)
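
One concrete lever for such intentional unpredictability is sampling temperature. The sketch below is a hypothetical Python illustration (the logits are made up, and real LLM decoding involves far more machinery): dividing next-token scores by a temperature before the softmax flattens the distribution, so higher temperatures make unlikely, surprising tokens more probable.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Softmax over logits / temperature, then sample one token index."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Hypothetical next-token scores: index 0 is the "safe" continuation.
logits = [4.0, 1.0, 0.5, 0.1]
print([sample_with_temperature(logits, 0.2) for _ in range(10)])
# almost always token 0: predictable, "corporate" output
print([sample_with_temperature(logits, 2.0) for _ in range(10)])
# a mix of tokens: more surprise, and more room for a comedic swerve
```

At temperatures near zero this reduces to always picking the most likely token, which is the predictable behavior the article criticizes; turning it up is one crude way to trade coherence for surprise.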

Implications for AI development: The humor discrepancy between Markov chains and LLMs reveals broader challenges in creating truly human-like AI.

  • AI systems may need to balance predictability with unexpected outputs to mimic human creativity
  • Future language models might require a different approach to incorporate humor and personality
  • The ability to detect AI-generated content may increasingly rely on identifying the presence or absence of personality in text

Analyzing deeper: That Markov chains can out-joke LLMs underscores the complexities of replicating human-like creativity and personality in artificial intelligence systems. As AI continues to advance, developers may need to explore approaches that deliberately introduce controlled unpredictability to achieve more nuanced and engaging outputs.

Source: Markov chains are funnier than LLMs
