Can't Trust Chatbots Yet: Reddit's AI Was Caught Suggesting Heroin for Pain Relief

Reddit’s AI chatbot, Answers, was caught recommending heroin and other banned substances for pain relief, according to a healthcare worker who flagged the issue in a moderator subreddit. After users and the tech news outlet 404 Media reported the problem, Reddit limited the feature’s visibility under sensitive health discussions, highlighting ongoing concerns about AI chatbots dispensing dangerous medical advice.

What you should know: Reddit Answers pulls information from user-generated content across the platform and works similarly to ChatGPT or Gemini, but with a focus on Reddit’s own discussions.

  • A healthcare worker found that, when asked about chronic pain relief, the chatbot surfaced a user post claiming “Heroin, ironically, has saved my life in those instances.”
  • In another query about pain management, the AI recommended kratom, a tree extract that’s illegal in multiple states and carries FDA warnings “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder.”
  • The chatbot was appearing under health-related subreddits where moderators had no option to disable it.

Reddit’s response: The platform limited the AI’s reach after the issues were brought to its attention.

  • Reddit initially launched Answers as a separate tab reachable from the homepage but had recently been testing integrating it into conversation threads.
  • Following user reports and media coverage, Reddit stopped the chatbot from appearing under sensitive discussions.

The bigger pattern: Reddit’s AI joins a growing list of chatbots that have provided questionable or dangerous advice in healthcare contexts.

  • Google’s AI Overviews previously suggested using “non-toxic glue” on pizzas to keep cheese from sliding off.
  • ChatGPT has also been documented providing problematic health recommendations.
  • AI hallucination, in which a system generates false or nonsensical information, remains a persistent challenge across all major chatbot platforms, particularly for medical and safety-related queries.

Why this matters: The incident underscores the risks of deploying AI systems that can access and recommend user-generated content without proper safeguards, especially in sensitive areas like healthcare where bad advice can have serious consequences.
