
Navigating Social Media’s Public Health, Medical Misinformation Problem

As public health messaging continues to find a home on social media, agency leaders must work to earn user trust and counter medical misinformation.


Healthcare has hit the internet age, with a growing presence on social media becoming all too familiar as the need for public health information grows. But within that growth has emerged a new problem: the broad accessibility of medical misinformation.

While the internet has been lauded for democratizing public health information and creating avenues for health IT and information sharing never seen before, it has also brought a darker problem. The storm cloud of medical misinformation is hard to ignore, and its medical and public health consequences may be felt for decades.

Healthcare’s fight against medical misinformation is just in its infancy. Around the country, medical schools are designing curricula to educate their trainees on how to debunk public health myths. Meanwhile, the American Medical Association, joined by a chorus of other professional societies, has put medical misinformation and the actors who spread it on blast.

The industry is in the middle of building its evidence-based best practices to battle medical misinformation. As that evidence base develops, medical and public health experts must understand how users engage with social media and the landscape of medical misinformation.

Is there an appetite for public health info on social media?

The social media landscape for public health messaging is quite the conundrum: there is a clear appetite for consuming public health information on social media, but little trust in the messages themselves.

According to research published in Frontiers in Public Health, the COVID-19 pandemic accelerated the presence of public health information on social media websites.

“Social media has become a widely accepted channel for public health information and risk communication by government officials, public health agencies, and the general population,” the researchers found in an assessment of the types of messages shared on Twitter during the pandemic.

Most of these Tweets related to announcements of public health information, requests for evidence-based leadership and policies, and socioeconomic consequences of the pandemic.

Between the appetite for public health information and the calls for evidence-based government leadership during a public health crisis, the researchers concluded that public health agencies should maintain a strong and science-led presence on social media.

But although social media has emerged as an important tool for public health messaging, it has its pitfalls. The World Health Organization has said social media fueled a widespread “infodemic” during the COVID-19 crisis, noting that the same power social media has to spread information can just as easily spread incorrect or even harmful information.

A 2021 study from the NYU School of Global Public Health found that people who got their pandemic-related information on social media were less likely to be up to date on public health safety best practices. Meanwhile, researchers from GoodRx said exposure to medical misinformation reduces patient health literacy.

It’s unlikely that the public’s appetite for social media will go away. And although politicians and healthcare professionals alike are calling to hold social media websites themselves accountable for medical misinformation, it will be important for the public health industry to respond on its own.

Understanding medical misinformation’s presence on social media and the nascent efforts to combat it will be key to developing a strong evidence base against misinformation.

Social media’s misinformation landscape

Of course, social media is rife with misinformation, including medical misinformation.

In a 2021 literature review, researchers found that medical misinformation was prevalent in social media posts about smoking and drug products like opioids and marijuana. Medical misinformation was also common in social media posts about vaccines, particularly the HPV vaccine.

And perhaps most disturbing, the researchers found that medical misinformation could reach as many as 87 percent of users in some cases.

Users are cognizant of the prevalence of medical misinformation on social media websites, separate research has found. A 2022 survey from The Center for Black Health & Equity showed that one in four Black people see medical misinformation, specifically about COVID-19 vaccines, on social media websites.

That’s a problem, considering the extent to which Black people got their information about COVID vaccines on social media websites. Of the 791 people surveyed, 39 percent got their information about the shots on social media, but only 6 percent said they actually trust these websites.

In fact, respondents said social media platforms are most to blame for the prevalence of COVID-19 vaccine misinformation; 27 percent said as much, while 25 percent cited the news and 15 percent cited the internet.

And it’s not just vaccines. In June 2021, researchers from the University of California San Diego found that Facebook bots were responsible for disseminating disinformation about the efficacy of mask-wearing.

The data, published as a research note in JAMA Internal Medicine, looked at Facebook posts from November 2020 about the Danish Study to Assess Face Masks for the Protection Against COVID-19 Infection (DANMASK-19). DANMASK-19 examined how public health recommendations to wear a mask ultimately impacted infection rates, concluding that the recommendations did not have a particularly significant effect.

But there was some nuance there. The study did not look at the efficacy of masks but rather the efficacy of mask recommendations. It operated under the iron-clad fact that masks help protect against infection.

But Facebook bots took advantage of that easily obscured nuance. In the Facebook groups most affected by bots, conversations about DANMASK-19 were likely to contain misinformation, while groups not usually affected by bots were less likely to host inaccurate conversations about the study.

In other words, the bots were effective at sowing disinformation about the efficacy of masks.

Because social media is both a powerful tool for public health messaging and a breeding ground for medical misinformation and disinformation (the former referring to the unintentional sharing of incorrect information, the latter to its deliberate spread), public health professionals need to employ key strategies to remain trusted and authoritative.

Strategies for public health agencies

Public health agencies should recognize the power social media has in communication. Nevertheless, navigating limited user trust and a landscape characterized by misinformation will require a careful touch.

“Social media can also be volatile: false information, sometimes maliciously created, spreads rapidly,” the Frontiers in Public Health authors wrote.

“Thus, measures to counter misinformation and disinformation on social media channels during an emergency are necessary,” they continued. “Given that government censorship can deeply aggravate already existing mistrust, measures other than content removal is needed, as the public shares and reacts positively to factual information, especially if posted by public health agencies.”

In 2021, US Surgeon General Vivek Murthy, MD, issued an advisory on medical misinformation, urging healthcare professionals to use their media and social media presences to combat it. The crux of this strategy relies on clinicians and other trusted leaders; the best defense against medical misinformation is an offense that spreads accurate medical information.

Murthy advised health agencies to tap doctors, nurses, and other clinicians when spreading medical information. He recommended that agencies be mindful of patient health literacy levels, suggesting healthcare professionals not use technical language but rather language that is accessible to all.

The Surgeon General also suggested that health agencies partner with other trusted leaders, like church leaders, to combat medical misinformation. Collaboration between clinicians and community leaders can ensure the public gets the right message from someone they trust.

Moreover, public health agencies can invest in public sentiment research, working to understand the biggest health questions of a given community and tailoring messaging around that. Investment from larger state and federal governments will be essential to completing this work on the local level.

Finally, Murthy recommended longitudinal approaches to public health education, ultimately equipping the public with long-term resilience against medical misinformation.

Medical misinformation and health equity

Spreading high-quality medical information and combatting rampant medical misinformation is particularly important in the nation’s work toward health equity. As noted in The Center for Black Health & Equity survey, medical misinformation is common in communities of color, as is mistrust in medical establishments.

This comes as Black people and other racial and ethnic minorities experience health disparities across most disease states.

Ensuring equitable access to high-quality medical information is key, and it begins with the right messenger.

Using Murthy’s public health communication strategies, healthcare agencies should partner with trusted community leaders to reach historically disenfranchised populations. Churches and places of worship have proven effective at disseminating public health messaging and were key partners in the COVID-19 vaccine rollout.

Barbershops have helped spread awareness about chronic disease prevention and blood pressure control. Meanwhile, community health workers fluent in Spanish increased the number of Latino people accessing COVID-19 public health services, like free tests.

Each of these solutions is replicable and can be applied to different public health messaging needs.

But health equity does not just refer to race. Public health messaging and efforts to curb medical misinformation on social media must also be tailored to other minority groups and to individuals with disabilities, such as people with hearing or vision loss, for whom messages may be less accessible.

“This issue with messaging for people with disabilities is centered on accessing the information,” Bonnielin Swenor, PhD, MPH, said in a 2020 interview at the onset of the pandemic. “When you think about people who have vision or hearing loss, there are a lot of challenges to even accessing that information. By and large, the messaging is not and has not been accessible.”

Swenor, an epidemiology and ophthalmology expert from Johns Hopkins Medicine, said public health agencies need to make use of image alt text for individuals using screen readers, use plain language, and add captions at the bottoms of videos.

Adapting public health messaging to the internet age is only just beginning, and most experts do not know all of the answers. But by understanding the public’s desire to consume medical information on social media, the dangers of medical misinformation, and the current strategies available, public health leaders may help create evidence-based best practices.