Slow Down: AI-written ADHD books on Amazon spark controversy
Amazon’s marketplace is becoming a breeding ground for AI-generated books on sensitive health topics like ADHD, raising serious concerns about medical misinformation. These chatbot-authored works, which claim to offer expert advice but often contain dangerous recommendations, exemplify the growing challenge of regulating AI-generated content in digital marketplaces where profit incentives and ease of publication outweigh quality control and safety considerations.

The big picture: Amazon is selling numerous AI-generated books that claim to offer expert ADHD management advice but appear to be entirely authored by chatbots like ChatGPT.

  • Multiple titles targeting men with ADHD diagnoses were found on the platform, including guides focused on late diagnosis, management techniques, and specialized diet and fitness advice.
  • Analysis by Originality.ai, a US-based AI detection company, rated all eight samples examined at 100% on its AI detection score, indicating high confidence that the books were written by artificial intelligence.

Why this matters: Unregulated AI-authored health content poses significant risks to vulnerable readers seeking legitimate medical guidance.

  • People with conditions like ADHD may make health decisions based on potentially harmful or scientifically unsound advice generated by AI systems that lack medical expertise.
  • The proliferation of such content represents a growing pattern of AI-generated misinformation on Amazon, which has previously been found selling risky AI-authored travel guides and mushroom foraging books.

Expert assessment: Computer scientists and consumers have identified alarming content within these publications.

  • Michael Cook, a computer science researcher at King’s College London, described finding AI-authored books on health topics as “frustrating and depressing,” noting that generative AI systems draw from both legitimate medical texts and “pseudoscience, conspiracy theories and fiction.”
  • Cook emphasized that AI systems cannot critically analyze or reliably reproduce medical knowledge, making them unsuitable for addressing sensitive health topics without expert oversight.

Real-world impact: Readers seeking legitimate ADHD information have encountered disturbing content in these books.

  • Richard Wordsworth, recently diagnosed with adult ADHD, found a book containing potentially harmful advice, including warnings that friends and family wouldn’t “forgive the emotional damage you inflict” and describing his condition as “catastrophic.”
  • These negative and stigmatizing characterizations could worsen mental health outcomes for readers genuinely seeking help managing their condition.

Amazon’s response: The company acknowledges the issue but relies on existing systems rather than targeted intervention.

  • A spokesperson stated that Amazon has content guidelines and methods to detect and remove books that violate these standards.
  • However, the continued presence of these books suggests current safeguards may be insufficient for addressing the volume and sophistication of AI-generated health misinformation.
