The patient safety world was set abuzz this week by yet another article, this one in Health Affairs, that seemed to offer additional “evidence” that hospitals are even more dangerous than we previously thought (and we already thought they weren’t very safe). Using the Institute for Healthcare Improvement’s (IHI) “Global Trigger Tool,” the study found that one in three patients admitted to three large, modern, safety-savvy hospitals suffered medical harm during their hospitalization.
While the tone of the article’s press coverage was foreboding and even accusatory, I don’t think the indictment is entirely accurate. “Hospital Errors May Be Far More Common Than Suspected: New tracking system uncovers 10 times as many medical mistakes,” screamed one typical headline. Such headlines were misleading because the study assessed neither errors nor preventability. The fact that this distinction was too subtle for most reporters to pick up illustrates the challenges of measuring patient safety, both in terms of the appropriate measurement targets and tools.
In this blog, I’ll focus on the target piece – particularly the wonky but vital matter of distinguishing harm from preventable harm. In my next post, I’ll turn to the issue of measurement tools, analyzing the role of the Global Trigger Tool, institutional incident reporting systems, and other measures of safety like the AHRQ Patient Safety Indicators.
In the field of patient safety, measurement turns out to be critically important. This might sound like one of those “Well, thank you, Captain Obvious”-type statements (sort of like, “Did you realize that Washington is dysfunctional?”), but I, for one, didn’t fully appreciate this at first. I do now, particularly since safety measures have become the scaffolding for local QI efforts, as well as for state and federal reporting, pay-for-performance, and “no pay for errors” initiatives. I hope you’ll bear with me as we wade into the definitional swamp.
OK, now that you have your hip waders on, let’s think about medical harm, preventable harm, and errors.
The IHI defines medical harm as:
Unintended physical injury resulting from or contributed to by medical care (including the absence of indicated medical treatment), that requires additional monitoring, treatment, or hospitalization, or that results in death.
Harm is bad, and people quite naturally want to avoid it. But whether harm is the right target for the safety field (or an appropriate measure of our progress) is debatable, and debated.
This is because about half of the cases of harm in hospitalized patients (as in these two recent studies, here and here) were not judged to have been preventable, nor were they preceded by an identifiable error. The most familiar type of non-error, non-preventable harm is a medication side effect. Assuming that the drug was given for an appropriate indication and taken correctly, a side effect – a rash from an antibiotic, for example – would be classified as harm. It would be nice to prevent it, but at this point we don’t know how.
Hopefully, an example (drawn from my book Understanding Patient Safety) will clear this up a bit. Think about a patient taking Coumadin for a recent pulmonary embolism. Unfortunately, she has a gastrointestinal bleed a few weeks after starting the anticoagulant, while her INR is in the therapeutic range. We’d say that she has suffered an adverse event, or medical harm (the terms “medical harm” and “adverse event” are used interchangeably). If her INR had been above the therapeutic range (say, 3.5, when we’re aiming for 2-3) but there was no overt error identified (she was on a reasonable dose, being monitored at the correct intervals), we’d call that preventable harm (you wouldn’t have to think too hard to envision a system that might have caught and fixed this problem before the bleed), but not an error. If, however, the physician had unwittingly prescribed an antibiotic that has a documented drug interaction with Coumadin, and the interaction led to the supratherapeutic INR, that would be harm, preventable harm, and an error. Finally (are you still with me?), if the physician had prescribed both the Coumadin and the offending antibiotic and the INR climbed sky high, but the patient didn’t bleed due to dumb luck, that would be no harm (and, obviously, no preventable harm), but it would be an error (we’d call it a near miss or a close call). If you like graphics, here’s a figure from my book that represents my best effort to make this semantic morass clear.
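For readers who think better in code than in prose, the taxonomy above can be sketched as a tiny classifier. This is purely illustrative – the field names and the `classify` function are my own simplification, not any standard instrument – but it captures how the four Coumadin scenarios sort out:

```python
from dataclasses import dataclass

@dataclass
class Case:
    harm_occurred: bool       # did the patient suffer an injury from care?
    system_could_catch: bool  # could a reasonable system have caught the problem?
    error_made: bool          # was there an identifiable error?

def classify(case: Case) -> set[str]:
    """Return the labels that apply, per the Coumadin examples above."""
    labels = set()
    if case.error_made:
        labels.add("error")
    if case.harm_occurred:
        labels.add("harm")  # i.e., an adverse event
        if case.system_could_catch or case.error_made:
            labels.add("preventable harm")
    elif case.error_made:
        labels.add("near miss")  # an error, but no harm (dumb luck)
    return labels

# The four Coumadin scenarios:
bleed_in_range    = classify(Case(True,  False, False))  # harm only
bleed_high_inr    = classify(Case(True,  True,  False))  # harm + preventable harm
bleed_interaction = classify(Case(True,  True,  True))   # harm + preventable + error
lucky_no_bleed    = classify(Case(False, True,  True))   # error + near miss
```

Note that “preventable harm” and “error” overlap but aren’t the same set – which is exactly the distinction most of the headlines missed.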
Since the patient safety field is concerned with avoiding errors and preventable harm, what would be the point of measuring all harm? Here’s where we take a detour from science into religion, since some harm proponents (well, that’s not quite right – they don’t like harm, they just like measuring and reporting it) have a belief, often quite a passionate one, that all harm is preventable, even if we don’t exactly know how to do so today. The authors of the Health Affairs study, who seem to fall into this camp, write,
Because of prior work with Trigger Tools and the belief that *ultimately all adverse events may be preventable*, we did not attempt to evaluate the preventability or ameliorability (whether harm could have been reduced if a different approach had been taken) of these adverse events. [Emphasis added]
The “report all harm” camp’s usual Exhibit A is central line-associated bloodstream infections (CLABSI), which might have been called non-preventable a decade ago, but which we now know are largely avoidable with rigorous adherence to a bundle of prevention practices. This is a wonderful and inspiring tale, but as of today, it is an outlier. We don’t know how to prevent all falls, or decubitus ulcers, or blood clots, or handoff errors, or diagnostic errors, and we certainly don’t know how to prevent all surgical complications or medication side effects. And we won’t anytime soon.
That’s not to say that their belief isn’t theoretically right. It might be, ultimately. It’s just that the prevention of all medical harm won’t happen in my lifetime. Moreover, aggressive efforts to rid ourselves of all harm are likely to be cost-prohibitive (we could prevent virtually all falls by placing a sitter in every patient room) or have unacceptable collateral effects (we could also prevent all hospital falls by forbidding patients to walk before we wheel them to the hospital exit at discharge). Neither strategy seems like a great idea.
Because the best evidence says that we can prevent perhaps half of all medical harm in hospitalized patients, I like using “preventable harm” as an organizing principle and a target for action. Last Friday at a conference in Princeton, New Jersey, I happened to hear an excellent presentation by Ken Sands, the director of quality at Boston’s Beth Israel Deaconess Medical Center and a leader in the effort to use preventable harm to measure safety. As Ken explained it, in 2008 the BIDMC governing board set an audacious target of purging the organization of preventable harm by January 1, 2012. This forced the hospital’s leaders to figure out how they’d measure their performance against this goal. Beginning with a definition of harm similar to the IHI’s (though limited to a slightly more severe subset: cases that require or prolong a hospitalization or that result in permanent disability or death), they determine the preventability of each identified case of harm based on two tests:
- Did the injury result from failure to provide care to the existing institutional standard? OR
- Could the existing standards have been reasonably changed in a way that would be expected to decrease the risk of future injuries by the same mechanism?
BIDMC’s interpretation of these tests is impressively conservative. Under the first criterion, they’ll call a harm case preventable even if the lapse seems unlikely to have caused the harm: if a patient developed a ventilator-associated pneumonia after a week-long ICU stay and reviewers identify a single nursing shift during which oral hygiene or bed elevation was not documented, they’ll deem that case preventable (even though it is quite unlikely that the single omission led to the infection).
As for the second criterion, Ken described a wrong-level spinal surgery case in which the OR team followed the existing standards for a surgical time out and site marking. When BIDMC analyzed the case, they found that each of their neurosurgeons was using a different system to prevent wrong-level spine surgery, and that the existing standards, which work reasonably well in preventing wrong-site surgery (e.g., right vs. left leg), are insufficient for wrong-level spine surgery (we’ve seen the same thing at UCSF). After their root cause analysis, they developed a new, tighter process. Because of this enhancement, they labeled the case as preventable harm under their second criterion.
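Boiled down to logic, the BIDMC determination is simply an OR of the two tests. In real life it’s a human judgment call, not an algorithm, and the function and argument names below are mine, not anything BIDMC uses – but the sketch shows how both cases come out preventable:

```python
def is_preventable(failed_existing_standard: bool,
                   could_improve_standard: bool) -> bool:
    """A harm case counts as preventable if EITHER test is met:
    (1) care fell short of the existing institutional standard, or
    (2) the standard itself could reasonably be changed to reduce
        the risk of future injuries by the same mechanism."""
    return failed_existing_standard or could_improve_standard

# The VAP case: a documentation lapse on a single shift fails test 1.
vap_case = is_preventable(failed_existing_standard=True,
                          could_improve_standard=False)

# The wrong-level spine surgery: standards were followed, but a
# tighter process was feasible, so test 2 applies.
spine_case = is_preventable(failed_existing_standard=False,
                            could_improve_standard=True)
```

Only a case that passes neither test – care met the standard, and no reasonable improvement to the standard was available – escapes the preventable label, which is what makes the definition so conservative.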
The advantage of using preventable harm instead of all harm is that the former is actionable, and that the resulting number is large enough to be galvanizing but not so overwhelming as to be demoralizing and paralyzing. Ken described an impressive program at BIDMC to attack the underlying causes of preventable harm. His quality department reports BIDMC’s preventable harm on a public website, and presents more granular case-based data and stories on the hospital intranet. While an effort to attack all cases of harm would elicit a ho-hum response, avoiding preventable harm has become a rallying cry on many units of the hospital. Although BIDMC won’t meet the board’s audacious goal of total elimination of preventable harm by 2012, they’ve made great progress on a number of fronts.
Finally, there is the matter of errors. BIDMC explicitly chose to focus on preventable harm rather than errors, and I think that’s the right call. But I hope that institutions reserve a small amount of their patient safety bandwidth for errors that don’t lead to harm. One of the problems of coming at patient safety by working backwards from harmed patients (such as the ones identified by the Global Trigger Tool) is that you overlook many cases involving near misses, which sometimes have great value in understanding an organization’s vulnerabilities (see Albert Wu’s excellent new book on close calls for more on this).
Why is preventability so important? If all harm is deemed preventable, then a study like this week’s (showing one patient harmed in every three admissions) nearly begs for armies of regulators and accreditors (not to mention malpractice attorneys) to storm America’s hospitals, turning up the heat. And if they’re all preventable, then perhaps they should all be publicly reported and hospitals should have their reimbursements slashed for high rates of harm. To me, this seems wasteful, overly depressing, and manifestly unfair.
On the other hand, by focusing on preventable harm, you end up with a manageable target to shoot at (about 150 cases a year at BIDMC). Not a small one, mind you – now instead of one in three hospitalizations with harm, you’d be analyzing preventable harm in one in 7-8 admissions – but one that you can get your arms around. And you’d be on firm footing using all of our available tools to promote improvement, such as local rate tracking and investigations, as well as public reporting or payment changes. Finally, you’d catalyze research designed to increase the fraction of all harm that is, in fact, preventable.
In other words, you’d be in a far better place, which is why I believe that preventable harm – augmented by a system to detect at least some of the near misses – is the right target for institutional safety programs and for state and national initiatives focused on improving safety.
One more word about the Health Affairs study. I believe the paper moves the ball forward by demonstrating just how much better chart review is at finding cases of harm or preventable harm (particularly when it’s made more efficient through the use of the Global Trigger Tool) than local incident reporting or the AHRQ Patient Safety Indicators. (I’ll return to this subject in my next post.) But the paper is also a cautionary tale: studies that report rates of all-cause harm without using robust methods to judge preventability are virtually guaranteed to be misinterpreted by patients, the media, and policymakers.