ChatGPT AI Chatbot Proves Effective for Patient Queries, Health Literacy

AI chatbot ChatGPT gave satisfactory answers to patient queries about breast cancer screening 88 percent of the time, with the added benefit of considering health literacy.

By Sara Heath

ChatGPT is an effective tool for patients seeking medical information online, with the artificial intelligence (AI) chatbot accurately answering patient queries about 88 percent of the time, according to a study from researchers at the University of Maryland School of Medicine (UMSOM).

According to Paul Yi, MD, an assistant professor of diagnostic radiology and nuclear medicine at UMSOM, ChatGPT is particularly remarkable because it delivers health information in an understandable format that considers patient health literacy.

"We found ChatGPT answered questions correctly about 88 percent of the time, which is pretty amazing," Yi, also the director of the UM Medical Intelligent Imaging Center (UM2ii) and the study’s corresponding author, said in a press release. "It also has the added benefit of summarizing information into an easily digestible form for consumers to easily understand.”

The study, published in Radiology, looked specifically at ChatGPT’s ability to answer questions about breast cancer screening. ChatGPT is an AI chatbot developed by OpenAI that gained widespread attention for its ability to perform tasks ranging from writing code to answering medical questions.

In February, ChatGPT passed the US Medical Licensing Exam without physician input, demonstrating the breadth of medical knowledge the chatbot can draw on.

This latest study showed that ChatGPT has some utility as a patient-facing healthcare technology, particularly as a chatbot and symptom checker. Online symptom checkers can be effective triage tools, but only if they report accurate information that patients can understand.

The UMSOM researchers crafted a set of 25 questions seeking advice about breast cancer screening and asked ChatGPT each question three times to account for the way the chatbot varies its answers from one query to the next.

Then, radiologists reviewed ChatGPT’s answers for accuracy and health literacy.
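For readers curious about the mechanics, the repeated-query step of that protocol is simple to script. The study does not describe its tooling, so the snippet below is only a minimal sketch: it assumes the openai Python package (v1.x), an OPENAI_API_KEY environment variable, a model choice that is purely an assumption, and a hypothetical example question standing in for the study’s 25.

```python
# Minimal sketch of the repeated-query protocol; not the study's actual code.
# Assumes the openai Python package (v1.x) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in; the study used 25 questions about breast cancer screening.
questions = ["Do I need a mammogram, and at what age should I start?"]
RUNS_PER_QUESTION = 3  # each question was posed three times to capture answer variability

answers = {}
for question in questions:
    answers[question] = []
    for _ in range(RUNS_PER_QUESTION):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumption; the study does not name a model version
            messages=[{"role": "user", "content": question}],
        )
        answers[question].append(reply.choices[0].message.content)

# The collected answers would then go to human reviewers for accuracy grading.
```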

Overall, ChatGPT performed well, answering 22 out of 25 questions satisfactorily, the researchers said.

"We are witnessing an unprecedented revolution in health care where the integration of artificial intelligence and immersive technologies will fundamentally change the way we treat patients," Mark T. Gladwin, MD, Dean of UMSOM, vice president for Medical Affairs at the University of Maryland, Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, said in the statement.

"ChatGPT and other language models are an exciting part of this transformation - providing access to a vast database of medical knowledge and potentially offering personalized advice based on specific symptoms and a patient's medical history, albeit with certain limitations, as this study points out,” Gladwin added.

The three questions ChatGPT did not answer satisfactorily revealed the chatbot’s oft-cited pitfalls. For one question, ChatGPT offered an answer rooted in outdated information and practice. For the remaining two, ChatGPT’s responses were inconsistent when the same question was asked repeatedly.

Those pitfalls expose serious caveats patients should keep in mind when using ChatGPT and other AI chatbots. For one thing, the chatbot does not reflect the full breadth of information on the internet, which can limit and bias its answers to patients, said Hana Haver, MD, a radiology resident at the University of Maryland Medical Center.

"ChatGPT provided only one set of recommendations on breast cancer screening, issued from the American Cancer Society, but did not mention differing recommendations put out by the Centers for Disease Control and Prevention (CDC) or the US Preventative Services Task Force (USPSTF)," Haver said in the press release.

ChatGPT will also fabricate information to support its claims, Yi added.

"We've seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims," he explained. "Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice."

Other experts are wary of patients using ChatGPT, with a March 2023 study indicating that ChatGPT can sometimes provide vague, unclear, or indirect information about common cancer myths.

Healthcare providers need to be aware of these shortcomings and their patients’ habits around online medical search and research.

“This could lead to some bad decisions by cancer patients,” said Skyler Johnson, MD, a physician-scientist at Huntsman Cancer Institute and assistant professor in the department of radiation oncology at the University of Utah, who helped lead the March study, in a press release discussing the findings.

“I recognize and understand how difficult it can feel for cancer patients and caregivers to access accurate information,” Johnson added. “These sources need to be studied so that we can help cancer patients navigate the murky waters that exist in the online information environment as they try to seek answers about their diagnoses.”