Can the Race of AI Chatbot Avatars Impact Patient Experience?

University of Colorado researchers grapple with ethical questions about how the perceived race of an AI chatbot avatar impacts patient experience and patient trust.


By Sara Heath

Does it matter what an AI chatbot’s avatar looks like? It might, according to researchers from the University of Colorado School of Medicine, who raise questions about patient experience, patient-provider racial concordance, and bioethics.

A chatbot avatar is the image patients see alongside the bot’s messages. Intentional or not, that avatar can shape how patients engage with and perceive the AI, a timely question given the surge of conversational AI tools like ChatGPT.

“Sometimes overlooked is what a chatbot looks like – its avatar,” the researchers wrote in Annals of Internal Medicine. “Current chatbot avatars vary from faceless health system logos to cartoon characters or human-like caricatures. Chatbots could one day be digitized versions of a patient’s physician, with that physician’s likeness and voice. Far from an innocuous design decision, chatbot avatars raise novel ethical questions about nudging and bias.”

The healthcare industry is currently grappling with the popularity of conversational AI. Chatbots aren’t exactly new to healthcare, but they gained prominence during the pandemic, when some organizations deployed them as symptom checkers and others used them to ease call center volumes.

And more recently, tools like ChatGPT have raised questions about how patients will access healthcare information and, by extension, the healthcare industry itself.

While some researchers have focused on the quality of the information patients can get from chatbots, the CU researchers have zoomed in on the patient experience of using these tools. In particular, how does an AI chatbot shape the way patients perceive their care, asked Annie Moore, MD, MBA, a CU internal medicine professor and the Joyce and Dick Brown Endowed Professor in Compassion in the Patient Experience.

“If chatbots are patients’ so-called ‘first touch’ with the health care system, we really need to understand how they experience them and what the effects could be on trust and compassion,” Moore said in a public statement.

For one thing, the chatbot’s avatar can stand in for a provider’s photo. Some healthcare organizations use their logo as the avatar, while others use a cartoon or a photo of the provider the chatbot represents. When the avatar depicts an actual human, it raises questions about race in medicine, particularly patient-provider racial concordance.

“One of the things we noticed early on was this question of how people perceive the race or ethnicity of the chatbot and what effect that might have on their experience,” Matthew DeCamp, MD, PhD, associate professor in the CU Division of General Internal Medicine, said in the press release. “It could be that you share more with the chatbot if you perceive the chatbot to be the same race as you.”

There is some evidence behind DeCamp’s argument. Although the question has not been extensively researched for AI chatbots, there is ample evidence that patient-provider racial concordance in in-person care and telehealth (essentially any kind of care not delivered by a chatbot) leads to better outcomes. Patients who see a provider of the same race also report a better patient experience.

That means the race of an AI chatbot avatar could be something to consider. Building systems that offer avatars of different races, rather than defaulting to stock images, could engender patient trust and make the interaction more meaningful.

But that raises the researchers’ next question: is that ethical?

"There does seem to be evidence that people may share more information with chatbots than they do with humans, and that's where the ethics tension comes in: We can manipulate avatars to make the chatbot more effective, but should we? Does it cross a line around overly influencing a person's health decisions?” DeCamp posited.

One key strategy could be allowing patients themselves to select what avatar accompanies chatbot messages, a move that might improve patient autonomy.

“That’s more demonstrative of respect,” according to DeCamp. “And that’s good because it creates more trust and more engagement. That person now feels like the health system cared more about them.”

This paper comes as the healthcare industry grapples with the potential for AI to perpetuate implicit biases in healthcare. Because AI is built on human-made algorithms, it is prone to bias just as humans are. Indeed, even ChatGPT is quick to admit its own biases.

Keeping track of all the ways those biases can manifest will be key to advancing health equity, the researchers wrote in the Annals paper.

“Addressing biases in chatbots will do more than help their performance,” they said. “If and when chatbots become a first touch for many patients’ health care, intentional design can promote greater trust in clinicians and health systems broadly.”