Leapfrog names the ABCs of hospital-acquired conditions

Since the 2012 establishment of the Leapfrog Group’s Hospital Safety Grade health care rating system, patient safety has improved across the country, including a 21 percent reduction in hospital-acquired conditions (HACs). However, significant patient safety problems persist. For example, over 1,000 people are estimated to die each day from preventable medical errors, making such errors the third leading cause of death in the country.

Rating

The Leapfrog Hospital Safety Grade is the only national health care rating focused on errors, accidents, and infections. The program has assigned letter grades—A, B, C, D, F—to general acute-care hospitals in the U.S. since 2012 based upon national performance measures from CMS, the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), and the American Hospital Association’s Annual Survey and Health Information Technology Supplement.

Improvement

A significant area of improvement is the 21 percent reduction in HACs between 2010 and 2015. This progress is attributable, in part, to Patient Protection and Affordable Care Act (ACA) (P.L. 111-148) provisions designed to reduce HACs (see By any measure, national effort to increase health care safety succeeded, Health Law Daily, December 13, 2016). However, the HAC progress is not without its caveats. Estimates of hospital-related patient harm put the number of hospital deaths related to preventable errors at over 400,000 per year.

The Leapfrog Group identified other areas of progress, including the reduction of medication errors through increased adoption and functionality of computerized physician order entry systems, as well as the development of public and private partnerships to reduce HACs.

Grades 

Five years into the Leapfrog Hospital Safety Grade scoring, 63 of the more than 2,600 hospitals scored have achieved an “A” in every national scoring update. In the most recent rating of 2,639 hospitals, 823 earned an “A,” 706 earned a “B,” 933 earned a “C,” 167 earned a “D,” and 10 earned an “F.” The five states with the highest percentage of “A” hospitals were Maine, Hawaii, Oregon, Wisconsin and Idaho.

Do patient safety indicators provide the full picture?

Common indicators used to rate hospital safety, such as AHRQ’s Patient Safety Indicators (PSIs) and CMS’ HAC measures, may not accurately portray quality of care, according to a new study. The study, from the Johns Hopkins Armstrong Institute for Patient Safety and Quality and published in Medical Care, found that only one of the 21 measures used by these agencies met the criteria for being a true indicator of hospital safety. Yet the measures are being used more and more frequently, despite concerns over their accuracy.

The study found that these indicators are being used for pay-for-performance and public reporting, including Leapfrog’s Hospital Safety Score and CMS’ Star Ratings. Because of their potential to misinform patients and wrongly classify hospitals, serious harm could ensue: hospitals could suffer financial losses, as well as damage to their reputations, if the wrong data are reported. The study emphasized that, given these stakes, the indicators need to be rigorously evaluated.

Part of the problem, the study suggests, is that the measures were created more than a decade ago. Further, the underlying data are pulled from billing records rather than actual clinical data, which could be traced back to patient health records. The researchers also pointed out that certain factors tied to medical coding and human error could make the results unreliable.

The researchers reviewed studies conducted between January 1, 1990, and January 14, 2015, that addressed the validity of the HAC measures and PSIs, finding that the data used in the studies matched in 80 percent of cases. Of the 21 measures created by AHRQ and CMS, 16 did not have enough data to be evaluated. Only five contained enough data to support a positive predictive value, meaning the measure could be assessed for whether the cases it flags are accurate. Of those five, however, only one was found valid for use today.
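The positive predictive value the researchers relied on can be made concrete with a small sketch. The function and counts below are illustrative assumptions, not figures from the study: PPV is taken as the share of cases an indicator flags that are confirmed as true safety events on review.

```python
# Positive predictive value (PPV): of the cases an indicator flags,
# the fraction confirmed as true events on record review.
# The counts below are hypothetical, for illustration only.

def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

# Suppose an indicator flags 100 records and review confirms 80 of them.
ppv = positive_predictive_value(true_positives=80, false_positives=20)
print(f"PPV = {ppv:.0%}")  # prints "PPV = 80%"
```

A low PPV means most flagged cases are false alarms, which is why a measure with too little data to establish its PPV cannot be trusted for grading or payment decisions.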

With only one of 21 measures passing muster, the odds are against the hospitals when it comes to the usefulness of these indicators. Should these types of indicators remain in use, they will need to be thoroughly evaluated and tied to measures that are themselves rechecked over time.