Book app Fable forced to implement new safety measures after offensive AI-generated messages

Fable, a popular book-tracking app, has implemented new safeguards after its AI-powered summary feature generated racially insensitive content, including suggesting that a reader of Black literature should “surface for the occasional white author.”

Initial incident: Fable user Tiana Trammell received an AI-generated summary that made inappropriate comments about her reading choices focused on Black literature.

  • The AI summary implied her reading selections were leaving “mainstream stories gasping for air” and advised her to read white authors
  • Trammell discovered the offensive content in late December 2024 after completing three books by Black authors
  • She shared the concerning summary with her book club and other Fable users, who then reported similar experiences

Company response: Chris Gallello, Fable’s head of product, acknowledged the issue on Instagram and announced immediate changes to the platform.

  • The company expressed shock at reports of “very bigoted racist language” in the AI-generated summaries
  • Fable quickly responded to user complaints and committed to addressing the problem
  • The platform implemented new protective measures to prevent similar incidents

New safeguards: Fable has introduced several features to improve transparency and user control over AI-generated content.

  • Users will now see clear disclosures indicating when summaries are AI-generated
  • The platform added an opt-out option for those who prefer not to receive AI summaries
  • A new thumbs-down button allows users to flag problematic content directly

Looking ahead: This incident highlights ongoing challenges with AI bias in consumer applications and underscores the importance of robust testing and user feedback mechanisms before deploying AI features that touch on sensitive topics such as race and culture.

