The dangers of using chatbots for health and medical information

Technology • Health & Fitness
15 Apr 2026 • 2:15 PM MYT
The Independent

  • Experts have issued a stark warning regarding the use of AI chatbots for health and medical information, citing their tendency to provide inaccurate or misleading responses.
  • A recent study found that half of the answers chatbots, including ChatGPT and Grok, gave to 50 medical questions were deemed "problematic".
  • Grok fared worst, with 58 per cent of its responses deemed problematic, followed by ChatGPT at 52 per cent and Meta AI at 50 per cent.
  • Researchers highlighted that chatbots often "hallucinate" due to biased or incomplete training data and do not reason or weigh evidence, leading to potentially flawed advice.
  • The findings, published in BMJ Open, underscore the critical need for public education, professional training, and regulatory oversight to ensure generative AI supports public health effectively.
