
A 60-year-old man developed a rare condition called bromism after consulting ChatGPT about eliminating salt from his diet and subsequently taking sodium bromide for three months. The case, published in the Annals of Internal Medicine, highlights the risks of using AI chatbots for health advice and has prompted warnings from medical professionals about the potential for AI-generated misinformation to cause preventable health problems.

What happened: The patient consulted ChatGPT after reading about the negative effects of table salt and asked about eliminating chloride from his diet.

  • Even after reading that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning,” the man took sodium bromide for three months.
  • He developed bromism (bromide toxicity), a condition that was “well-recognised” in the early 20th century and contributed to nearly one in 10 psychiatric admissions at that time.
  • The patient presented at a hospital claiming his neighbor might be poisoning him, exhibited paranoia about water, and attempted to escape within 24 hours before being treated for psychosis.

Why this matters: The case demonstrates how AI chatbots can provide dangerous health advice without proper safeguards or professional oversight.

  • When researchers from the University of Washington asked ChatGPT the same question about chloride replacements, its response also included bromide, with no specific health warning and no follow-up questioning, as “a medical professional would do.”
  • The authors warned that ChatGPT and similar AI apps “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”

The patient’s symptoms: Once stabilized, the man reported multiple indicators of bromism beyond the initial psychiatric presentation.

  • Symptoms included facial acne, excessive thirst, and insomnia—all consistent with bromide toxicity.
  • The condition was historically caused by sodium bromide, which was used as a sedative in the early 20th century.

OpenAI’s response: The company recently announced upgrades to ChatGPT that specifically address health-related queries.

  • OpenAI says the new GPT-5 model is better at answering health questions and more proactive at “flagging potential concerns” such as serious physical or mental illness.
  • However, OpenAI emphasizes the chatbot is not a replacement for professional help and states it’s not “intended for use in the diagnosis or treatment of any health condition.”

What researchers recommend: Medical professionals should consider AI use as a possible source when trying to determine where patients obtained their health information.

  • The study authors noted it’s “highly unlikely a medical professional would have suggested sodium bromide when a patient asked for a replacement for table salt.”
  • While acknowledging AI could bridge the gap between scientists and the public, researchers warned about the risk of promoting “decontextualised information.”
