Assessing ChatGPT’s Reliability in Drug Information
ChatGPT has quickly become one of the most widely used artificial intelligence tools, but recent scrutiny has been cast on the accuracy of its responses, particularly to drug-related inquiries. This article examines ChatGPT's performance as revealed in a study conducted by pharmacists at Long Island University.
ChatGPT Study Results:
The study's findings reveal a concerning pattern. Of 39 drug-related questions posed to the free version of ChatGPT, only 10 responses were rated "satisfactory" against established criteria. The remaining 29 questions elicited responses that were indirect, inaccurate, or incomplete.
Methodology:
The study drew on real questions submitted to Long Island University's College of Pharmacy drug information service between January 2022 and April 2023. Pharmacists researched and answered 45 questions, and their answers served as the benchmark against which ChatGPT's responses were measured.
Key Findings:
- Incomplete and Inaccurate Responses: ChatGPT did not directly address 11 questions, gave inaccurate responses to 10, and offered incomplete or incorrect answers to another 12.
- Lack of References: Although the researchers explicitly requested references with each response, ChatGPT supplied them in only eight responses, and each of those cited sources that do not exist.
- Case Study Examples:
  - Drug Interaction: ChatGPT incorrectly indicated that there were no reported interactions between Pfizer's Covid antiviral pill Paxlovid and the blood-pressure-lowering medication verapamil, an error that could expose patients to preventable side effects.
  - Dose Conversion: Asked how to convert doses between two forms of the drug baclofen, ChatGPT gave an unsupported conversion method and made a critical error, displaying intrathecal doses in milligrams instead of micrograms.
Recommendations:
Lead author Sara Grossman urges caution. Users, whether healthcare professionals or patients, should verify ChatGPT's responses against trusted sources, such as medical professionals or authoritative government medication-information websites like the National Institutes of Health's MedlinePlus.
Concerns and Limitations:
Grossman acknowledges that the study examined only the free version of ChatGPT. While a paid version might yield different results, the study intentionally replicated the experience of the general user population, which predominantly uses the free version.
Takeaway:
The study is a clear reminder to exercise prudence when relying on ChatGPT for drug-related information. Users should supplement its responses with information from reliable sources, and the accuracy of medical information disseminated by AI chatbots warrants continued scrutiny and improvement.