
Hospital Patient Safety Ratings Are Misleading, Research Suggests

The next time you read a list of “safe hospitals,” you had better take it with a grain of salt. Researchers say the evaluation methods behind such rankings are far from perfect.

In fact, common measures used by government agencies and public rankings to rate the safety of hospitals do not accurately capture the quality of care provided, new research from the Johns Hopkins Armstrong Institute for Patient Safety and Quality suggests.

The study, published in the journal Medical Care, found that only one of the 21 measures examined met the scientific criteria for being considered a true indicator of hospital safety. The measures evaluated in the study are used by several public rating systems, including U.S. News and World Report’s Best Hospitals, Leapfrog’s Hospital Safety Score, and the Centers for Medicare and Medicaid Services’ (CMS) Star Ratings. The Johns Hopkins researchers say their findings suggest further analysis of these measures is needed to ensure that the information provided to patients about hospitals informs, rather than misguides, their decisions about where to seek care.

“These measures have the ability to misinform patients, misclassify hospitals, misapply financial data and cause unwarranted reputational harm to hospitals,” says Bradford Winters, M.D., Ph.D., associate professor of anesthesiology and critical care medicine at Johns Hopkins and lead study author. “If the measures don’t hold up to the latest science, then we need to re-evaluate whether we should be using them to compare hospitals.”

Hospitals have publicly reported their performance on quality-of-care measures for years in an effort to meet the growing demand for transparency in health care. Several rating systems report performance using measures created by the Agency for Healthcare Research and Quality (AHRQ) and CMS more than 10 years ago. Known as patient safety indicators (PSIs) and hospital-acquired conditions (HACs), these measures rely on billing data entered by hospital administrators, rather than clinical data obtained from patient medical records. The result can be extreme differences in how medical errors are coded from one hospital to another.

“The variation in coding severely limits our ability to count safety events and draw conclusions about the quality of care between hospitals,” says Peter Pronovost, M.D., Ph.D., another study author and director of the Johns Hopkins Armstrong Institute for Patient Safety and Quality. “Patients should have measures that reflect how well we care for patients, not how well we code that care.”
