ChatGPT is already able to provide better answers to real patients’ healthcare questions than actual physicians, a new study claims.
The study, published in the journal JAMA Internal Medicine, took questions and physicians’ answers from Reddit’s r/AskDocs forum and prompted ChatGPT to answer the same questions. The original and AI-generated answers were then evaluated blindly by other healthcare professionals. In 79% of cases, the assessors judged the AI-generated answers to be superior.
What’s more, the assessors rated 27% of physicians’ answers as being of a less-than-acceptable quality, whereas only 3% of ChatGPT answers were given this rating.
The answers from ChatGPT were also consistently rated as more empathetic than those of doctors.
The researchers took a random sample of 195 exchanges between physicians and patients from Reddit’s r/AskDocs forum, which has approximately 474,000 members. Users post medical questions and verified healthcare professionals post answers.
Although various kinds of healthcare professionals answer questions posted in the subreddit, only answers from qualified physicians were considered, because the researchers wanted the comparison to involve only the most highly qualified professionals.
“While this cross-sectional study has demonstrated promising results in the use of AI assistants for patient questions, it is crucial to note that further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings,” the study authors caution.
“Despite the limitations of this study and the frequent overhyping of new technologies, studying the addition of AI assistants to patient messaging workflows holds promise with the potential to improve both clinician and patient outcomes.”