Rising Dependence on AI for Medical Advice Raises Concerns
People are increasingly using Artificial Intelligence (AI) chatbots to seek medical advice. From minor symptoms to serious health conditions, many users rely on these tools for quick answers. However, a recent study has raised serious concerns about the reliability of such advice, suggesting that this growing trend could pose risks to public health.
Study Reveals Alarming Error Rate in AI Health Responses
According to new research published in the medical journal BMJ Open, more than 50% of medical advice provided by popular AI chatbots may be incorrect or misleading. Researchers evaluated five widely used AI platforms and found that a significant portion of their responses lacked accuracy.
The study revealed that around 20% of the answers were not just inaccurate but potentially dangerous. These misleading responses could worsen a person’s medical condition if followed without professional consultation.
Which AI Platforms Were Tested?
The research examined five major AI tools:
- ChatGPT
- Google Gemini
- Meta AI
- Grok (by xAI)
- DeepSeek (China-based AI)
Each platform was tested using a structured set of health-related questions to assess their performance and reliability.
How the Study Was Conducted
Researchers asked each AI system 50 medical questions, divided into five health-related categories. These questions included both open-ended and closed-ended formats, covering topics such as vaccines, cancer, and general health concerns.
The findings showed that the AI tools performed relatively better on closed-ended questions and on topics related to vaccines and cancer. However, performance dropped significantly when handling open-ended or complex medical queries.
Confident Tone, But Not Always Correct
One of the most concerning findings was that AI chatbots often delivered incorrect answers with high confidence. This can easily mislead users into trusting the information without verification.
In most cases, the chatbots did not refuse to answer, even when unsure. Across all tests, researchers recorded only two instances in which an AI declined to respond, highlighting a lack of caution in uncertain scenarios.
Why This Matters for Users
The study highlights a growing issue: many people are treating AI chatbots as medical advisors. While these tools can provide general information, they are not licensed medical professionals and cannot replace expert diagnosis or treatment.
Relying solely on AI for health-related decisions can delay proper treatment, worsen conditions, and in extreme cases, pose life-threatening risks.
Experts Urge Caution While Using AI for Health Queries
Healthcare professionals emphasize that AI should be used only as a supplementary tool for basic awareness—not as a primary source of medical advice. Users are strongly advised to consult qualified doctors for any health concerns.
Conclusion: Convenience Should Not Replace Professional Care
AI technology continues to evolve and offers many benefits, but its limitations—especially in critical areas like healthcare—cannot be ignored. While AI chatbots can provide quick information, blind trust in their responses can be dangerous.
As this study shows, convenience should never come at the cost of safety. Always verify medical information with trusted healthcare professionals before making decisions about your health.