Patient Satisfaction News

Can Hospital Ratings Systems Truly Portray Hospital Quality?

A new report has prompted debate throughout the medical industry over whether hospital rating systems' methodologies accurately reflect hospital quality.



By Sara Heath

A new report published in the New England Journal of Medicine Catalyst has sparked a war of words about hospital ratings systems and whether they accurately portray hospital quality.

The article, “Rating the Raters,” looked at hospital rating systems from the Leapfrog Group, Healthgrades, the Centers for Medicare & Medicaid Services (CMS), and US News & World Report. These groups represent popular hospital rating systems that aim to issue consumer-facing reports about hospital quality, safety, and patient experience.

Ultimately, many of these groups hope to arm healthcare consumers with the knowledge needed to make an informed healthcare access decision.

But these various rating systems often generate different results, making it hard for patients to make an informed decision, according to Karl Bilimoria, MD, director of the Northwestern Medicine Surgical Outcomes and Quality Improvement Center.

"Current hospital quality rating systems often offer conflicting results - a hospital might rate best on one rating system and worst on another," Bilimoria, who is also the report’s lead author, said in a statement. "We wanted to provide information on how to interpret these contradictory ratings, so people can better select the best hospital for their needs."


Bilimoria was joined by five other ratings system evaluators to look at the Leapfrog Group, Healthgrades, CMS, and US News hospital ranking systems.

After disclosing potential conflicts of interest, recusing themselves where they deemed appropriate, and meeting with ranking system administrators, the evaluators created a standardized rubric by which they would grade each of the rating systems.

The rubric would assess each rating system based on its likelihood to misclassify hospital performance, the importance of its impact on patient populations, the scientific acceptability of the rating system, its history of iterative improvement, its transparency, and its usability.

The researchers also looked at measures that might be concerning or have flaws, risk adjustment, the types of hospitals included in the quality ratings, the distribution of hospital ratings, use of data, response to stakeholder feedback, use of peer review, transparent methodology, and ease of use for hospital assessment.

Meetings with each ratings system likewise informed evaluator scores.

Overall, the evaluators did not grade these systems particularly highly. US News received the highest grade, coming in at a B, while the CMS Hospital Star Ratings received a C, Leapfrog received a C-, and Healthgrades received a D.

And while the evaluators generally reached consensus when scoring most categories, there was some variation ranging from C grades to A for an individual measure across raters.

There was also variation in measure grades within a single rating system. For example, the CMS Hospital Star Ratings received poor ratings for its likelihood to misclassify hospital quality and its ability to improve over time. However, CMS also received relatively high grades for its transparency and its hospital and consumer usability, earning a B in both.

Healthgrades fared worse, earning only Cs and Ds across the assessment categories. Although the evaluators recognized that Healthgrades is salient among most patients, its ratings received a D+ for both data transparency and scientific acceptability and Cs in much of the remaining evaluation categories.

The US News ratings received a B or B+ in every category, whereas grades for the Leapfrog Group were mixed. Leapfrog scored as high as a B- on its ability to improve over time, but received Cs, a C-, and a C+ in nearly every other category.

Across each rating system, the researchers pointed out flaws in data use, saying that limited datasets and flaws in data collection limit the effectiveness of each rating system. Additionally, weak measure development, lack of peer review, apples-to-oranges hospital comparisons, and potential financial conflicts detract from the overall functionality of these ratings.

Using better data, creating more meaningful measures, carrying out more meaningful hospital audits, and conducting external peer review could improve these ratings systems, the evaluators recommended. Ultimately, these changes could make hospital rating systems more usable for patients, the ultimate end-user for these tools.

"There are all these competing rating systems. More and more are coming out," Bilimoria said. "The public needs some way to know which ones are valid and reliable; otherwise, it will be pure chaos when you are trying to figure out which is the best hospital for you."

But there may be more to the story, as each of these ratings systems told its side of the story after seeing the final report.

Leah Binder, president and CEO at the Leapfrog Group, pointed out that this report does not carry the weight of a traditional, peer-reviewed study.

“’Rating the Raters’ appears in the Catalyst section of the New England Journal of Medicine because it is an opinion piece, not a peer-reviewed study,” Binder said in a statement. “As a result, it is not designed to offer the evidence and replicability that a traditional study would offer, nor do the authors detail what standards they applied to reach their conclusions about the four ratings programs. That said, the ratings organizations in the piece would not allow themselves the luxury of issuing hospital ratings as random opinions, without basic rigor and transparency.”

Binder also asserted that the researchers did not accurately portray the Leapfrog Group’s surveying methods.

“The authors are entitled to their own opinions and it is valuable to hear their perspectives,” she stated. “However, they are not entitled to their own facts. Rudimentary fact checking would have uncovered serious errors in the description of Leapfrog’s ratings programs in the piece.”

Specifically, the researchers stated that the Leapfrog Group audits only a handful of its Hospital Survey responses, a claim that Binder says is not true. Leapfrog authenticates each of its survey responses electronically and with the help of a survey expert, Binder said. The organization additionally consults a random group of hospital respondents for more information and conducts on-site visits for another randomized respondent cohort.

Finally, Binder pointed to what she said are inherent conflicts of interest between the researchers and the hospital ratings groups.

“In addition to basic fact-checking, future iterations of this paper would have greater credibility if the majority of authors were not employed at health systems with a history of feuding with one or more of the ratings organizations they analyze,” she noted. “The piece would appear more objective without that conflict.”

Representatives from other ratings systems included in the NEJM Catalyst report likewise issued their own statements.

The NEJM Catalyst report did not accurately portray hospital quality reporting methodology at Healthgrades, according to Mallorie Hatch, PhD, the director of Data Science at the organization.

“Healthgrades Hospital Quality Ratings are designed to have the greatest relevance for consumers and we work hard to make the information transparent, accessible and easy-to-understand,” Hatch said in an emailed statement.

Hatch also outlined Healthgrades’ ranking methodologies and listed other key considerations in relation to the article.

The article that was published today on hospital quality is a highly inaccurate portrayal of Healthgrades’ hospital ratings. The opinions provided regarding Healthgrades are flawed as there are numerous inaccuracies: 1) our methodologies are fully transparent and publicly available here; 2) the authors only assessed our overall hospital award, misrepresented that methodology and conveniently did not include an analysis of our other service line ratings and awards, which would have addressed many of the criticisms in the article; 3) while Healthgrades was invited to explain our methodologies and we corrected the above inaccuracies, our feedback was not incorporated. In addition, many experts agree that clinical outcomes measures (mortality and complications) are the most important indicators of quality, as they can have the greatest impact on the overall health outcome of a patient, and frankly are the most important measures to a patient.

CMS reiterated its efforts to drive patient-centricity and empower consumers with decision-making information, a spokesperson from the agency said in an emailed statement.

CMS is committed to empowering patients by ensuring they have access to quality and cost information. The agency is confident the Overall Hospital Quality Star Ratings drive systematic improvements in care and safety and are critical in getting hospitals to compete on the basis of quality. CMS continues to work with patients, hospitals, and the healthcare community to improve Hospital Compare and the Star Ratings, with a focus on transparency and responsiveness to stakeholder concerns while empowering patients with practical, simple-to-use information to make the best healthcare decisions. After receiving feedback from hospitals and other stakeholders through a series of listening sessions and input from a technical expert panel, CMS developed potential changes to the Star Ratings methodology, which were released for public comment this past spring. CMS appreciates the feedback we’ve received so far from a variety of stakeholders on the Star Ratings methodology, including the work of the researchers in the study you reference, and look forward to sharing improvements to the Star Ratings in the future.

Conversely, US News, which received the highest rating of any system included in the study, said the report offers key insights into the resources patients review when making care decisions.

“This study takes an important look at hospital ratings, including those published by U.S. News & World Report, which patients have come to count on when choosing where to receive care,” Ben Harder, managing editor and chief of Health Analysis at US News & World Report, said in an email.

“Regarding the authors’ assessment of the U.S. News rankings, we’re gratified that the study recognized how responsive we have been to advances in measurement science and feedback from patients, doctors and other stakeholders. The methodology changes we made this year reflect our commitment to ongoing enhancement of our rankings. The researchers also tipped their hats to our decision to make statistical adjustments for socioeconomic status and other factors to ensure fair comparisons among hospitals.