I first became a mortality-rate skeptic when Peter Pronovost’s piece, “Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away,” was published in BMJ five years ago. He and his co-author highlighted many of the pitfalls of using mortality rates to assess hospital quality.
“Hospital-wide risk-adjusted hospital mortality rates based on routinely collected data are blunt and inaccurate screening tools for identifying hospitals that are putatively more unsafe than others. They can falsely label hospitals as poor performers and fail to detect many others that harbor problems. In contrast, timely review of all in-hospital deaths and continuous monitoring of diagnosis-specific mortality trends within hospitals may provide more productive and acceptable means for identifying and responding to unsafe care.”
He cites obvious causes for the measure’s weakness: low event rates (the deaths we actually care about, the unexpected, “negligent” ones, make up only about 5% of all deaths), coding mischief and imperfect risk adjustment, and the weak, inconsistent correlation between mortality rates and other quality measures.
I understand why mortality rates have become important. They have salience, they are easy to measure, and the public understands death. But the folks assessing this stuff should know better, and I include regulators and hospital administrators in that group. As they wield this half-baked metric as both gospel and cudgel, I can’t help but feel demoralized. Ian’s final flourish says it best: “First do no harm by avoiding faulty statistics and interrogate every death.” Correct.
The piece will take you less than five minutes to read and is well worth your time.