Common indicators used to rate hospital safety, such as the Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Indicators (PSIs) and CMS Hospital-Acquired Conditions (HACs), may not accurately portray quality of care, according to a new study. The study, conducted by the Johns Hopkins Armstrong Institute for Patient Safety and Quality and published in Medical Care, found that only one of 21 measures used by these agencies met the criteria for being a true indicator of hospital safety. Despite concerns over their accuracy, however, the indicators are being used more and more frequently.
The study found that these indicators are being used for pay-for-performance and public reporting, including Leapfrog’s Hospital Safety Score and CMS’ Star Ratings. Because they can misinform patients and wrongly classify hospitals, serious harm could ensue: hospitals could suffer financial losses and reputational damage if inaccurate data are published. Given these stakes, the study emphasized that the indicators need to be rigorously evaluated.
Part of the problem, the study suggests, is that the measures were created more than a decade ago. Moreover, the underlying data are drawn from billing records rather than actual clinical data that could be traced back to patient health records. The researchers also pointed out that certain factors tied to medical coding and human error could make the results unreliable.
The researchers looked at studies conducted between January 1, 1990 and January 14, 2015 that addressed the validity of the HAC measures and PSIs. They found that in 80 percent of cases the data used in the studies matched. Of the 21 measures created by AHRQ and CMS, 16 did not have enough data to be evaluated. Only five of the measures contained enough data to assess positive predictive value, meaning the proportion of flagged cases that reflect true safety events. Of those five, however, only one was valid for use today.
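Positive predictive value is a standard validity statistic: of all the cases an indicator flags, the fraction confirmed as true safety events on review. A minimal sketch of the calculation (the counts below are hypothetical illustrations, not figures from the study):

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): the share of flagged cases
    that are confirmed as real safety events."""
    flagged = true_positives + false_positives
    if flagged == 0:
        raise ValueError("indicator flagged no cases")
    return true_positives / flagged

# Hypothetical example: an indicator flags 100 records,
# and chart review confirms 80 of them as true events.
ppv = positive_predictive_value(80, 20)
print(ppv)  # 0.8
```

A low PPV means the indicator frequently flags cases that are not genuine safety events, which is exactly the kind of misclassification the study warns about.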
With a record of one valid measure out of 21, the odds are against the usefulness of these indicators for hospitals. Should these types of indicators remain in use, they will need to be thoroughly evaluated and tied to measures that are themselves rechecked over time.