Patient Trust in AI Chatbots, ChatGPT Has Room to Grow

Patient trust in AI chatbots like ChatGPT sits at only around 50 percent, but encouragement from providers can push more patients to use the tools.

By Sara Heath

Can ChatGPT really replace doctors? Probably not, at least for now, as new survey data show that patient trust in chatbots and generative AI in healthcare remains relatively low.

The report from the University of Arizona Health Sciences showed that around half of patients don’t fully trust AI-powered medical advice, such as the information issued by chatbots like ChatGPT. Instead, patients still trust and prefer their traditional healthcare providers.

However, patients may be more receptive to chatbot medical advice if the AI is guided by a doctor’s input or some other human touch.

“While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance,” Marvin J. Slepian, MD, JD, a Regents Professor of Medicine at the UArizona College of Medicine – Tucson, said in a statement. “To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required,” added Slepian, who is also a member of the BIO5 Institute.

Led by Slepian and Christopher Robertson, JD, a professor of law and associate dean for strategic initiatives at Boston University, the researchers studied patient preferences in two phases. First, the team conducted structured interviews with patients to assess their reactions to and perceptions of current and future AI systems in medical care. Second, the team surveyed more than 2,000 patients about their potential preferences regarding AI.

For both groups, the researchers worked to determine whether the patients would rather consult with an AI chatbot like ChatGPT or with a physician.

Patients were generally split, with the total population slightly preferring real physicians over AI chatbots (52 percent versus 47 percent, respectively). This finding was generally consistent across disease severity, which the researchers tested by prompting patients to imagine they had either leukemia or sleep apnea.

Some patient demographic factors were linked to trust in AI for medical advice. Black patients were less likely than White patients to trust the technology, while Native American patients were more likely to do so. Older patients, those with conservative political views, and those with stronger religious views were also less likely to trust AI chatbots like ChatGPT.

Importantly, some factors could sway a patient to place greater trust in generative AI chatbots. In particular, the researchers found that patients became more comfortable using the tools after hearing positive provider reviews of the technologies.

Patients who previously said they did not trust AI chatbots reconsidered their stances after hearing from their primary care providers that the technology had superior accuracy or was the established choice for diagnosis or triage. It also helped when the primary care provider reassured patients that the AI was backed by trained counselors who would listen to their needs.

Put another way, offering some sense of the human touch makes patients more comfortable with AI chatbots issuing medical advice.

Healthcare organizations may consider educating patients about the benefits of AI chatbots for initial disease diagnosis, especially as AI becomes a more prominent topic in healthcare. The rapid rise of ChatGPT has raised questions about how the tool might take over some secure direct messaging duties from provider workloads.

After all, tools like ChatGPT have been shown to provide health information with accuracy equal to, and sometimes greater than, that of human healthcare providers, and in many cases this information is more understandable than provider messages. ChatGPT can sometimes even display more empathy than clinicians, likely because clinicians are too rushed to connect deeply with patients.

But using AI chatbots like ChatGPT to replace some provider messaging, especially for low-acuity diagnosis and triage, will only work if patients trust the technology enough to use it. This latest study indicates that patients remain hesitant. Providers interested in using ChatGPT or similar AI chatbots may consider outlining for their patients how the tools work and how accurate they are.