The Science and Religion of Patient Safety: Harm, Preventable Harm, and Trigger Tools (Part I)

By Bob Wachter | April 11, 2011

The patient safety world was set abuzz this week by yet another article, this one in Health Affairs, that seemed to offer additional “evidence” that hospitals are even more dangerous than we previously thought (and we already thought they weren’t very safe). Using the Institute for Healthcare Improvement’s (IHI) “Global Trigger Tool,” the study found that one in three patients admitted to three large, modern, safety-savvy hospitals suffered medical harm during their hospitalization.

While the tone of the article’s press coverage was foreboding and even accusatory, I don’t think the indictment is entirely accurate. “Hospital Errors May Be Far More Common Than Suspected: New tracking system uncovers 10 times as many medical mistakes,” screamed one typical headline. Such headlines were misleading because the study assessed neither errors nor preventability. The fact that this distinction was too subtle for most reporters to pick up illustrates the challenges of measuring patient safety, in terms of both the appropriate targets and the appropriate tools.

In this blog, I’ll focus on the target piece – particularly the wonky but vital matter of distinguishing harm from preventable harm. In my next post, I’ll turn to the issue of measurement tools, analyzing the role of the Global Trigger Tool, institutional incident reporting systems, and other measures of safety like the AHRQ Patient Safety Indicators.

In the field of patient safety, measurement turns out to be critically important. This might sound like one of those, “Well-Thank-you-Captain-Obvious”-type statements (sort of like, “Did you realize that Washington is dysfunctional?”), but I, for one, didn’t fully appreciate this at first. I do now, particularly since safety measures have become the scaffolding for local QI efforts, as well as state and federal reporting, pay for performance, and “no pay for errors” initiatives. I hope you’ll bear with me as we wade into the definitional swamp.

OK, now that you have your hip waders on, let’s think about medical harm, preventable harm, and errors.

The IHI defines medical harm as:

Unintended physical injury resulting from or contributed to by medical care (including the absence of indicated medical treatment), that requires additional monitoring, treatment, or hospitalization, or that results in death.

Harm is bad, and people quite naturally want to avoid it. But whether harm is the right target for the safety field (or an appropriate measure of our progress) is debatable, and debated.

This is because about half of cases of harm in hospitalized patients (as in these two recent studies, here and here) were judged neither to have been preventable nor to have been preceded by an identifiable error. The most familiar type of non-error, non-preventable harm is a medication side effect. Assuming that the drug was given for an appropriate indication and taken correctly, a side effect – a rash from an antibiotic, for example – would be classified as harm. It would be nice to prevent it, but at this point we don’t know how to do that.

Hopefully, an example (drawn from my book Understanding Patient Safety) will clear this up a bit. Think about a patient taking Coumadin for a recent pulmonary embolism. Unfortunately, she has a gastrointestinal bleed a few weeks after starting the anticoagulant, while her INR is in the therapeutic range. We’d say that she has suffered an adverse event, or medical harm (the terms “medical harm” and “adverse event” are used interchangeably). If her INR had been above the therapeutic range (say, 3.5, when we’re aiming for 2-3) but there was no overt error identified (she was on a reasonable dose, being monitored at the correct intervals), we’d call that preventable harm (you wouldn’t have to think too hard to envision a system that might have caught and fixed this problem before the bleed), but not an error. If, however, the physician had unwittingly prescribed an antibiotic that has a documented drug interaction with Coumadin, and the interaction led to the supratherapeutic INR, that would be harm, preventable harm, and an error. Finally (are you still with me?), if the physician had prescribed both the Coumadin and the offending antibiotic and the INR climbed sky high, but the patient didn’t bleed due to dumb luck, that would be no harm (and, obviously, no preventable harm), but it would be an error (we’d call it a near miss or a close call). If you like graphics, here’s a figure from my book that represents my best effort to make this semantic morass clear.
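
If you’d rather see the logic than the graphic, here is a minimal sketch of the same taxonomy in code. To be clear, this is just my illustration; the field names and rules are simplifying assumptions for teaching purposes, not a validated classification scheme.

```python
# A minimal, illustrative sketch of the harm/error taxonomy described above.
# The field names and rules are simplifying assumptions, not a validated scheme.

from dataclasses import dataclass

@dataclass
class Event:
    harm: bool         # did the patient suffer an unintended injury?
    preventable: bool  # could a plausible system have averted the injury?
    error: bool        # was an overt error identified (e.g., a known drug interaction)?

def classify(e: Event) -> list:
    """Return all labels that apply to a single event."""
    labels = []
    if e.harm:
        labels.append("adverse event (medical harm)")
        if e.preventable:
            labels.append("preventable harm")
    if e.error:
        labels.append("error")
        if not e.harm:
            labels.append("near miss / close call")
    return labels

# The four Coumadin scenarios from the paragraph above:
print(classify(Event(harm=True,  preventable=False, error=False)))  # bleed with therapeutic INR
print(classify(Event(harm=True,  preventable=True,  error=False)))  # bleed, INR 3.5, no overt error
print(classify(Event(harm=True,  preventable=True,  error=True)))   # bleed after drug interaction
print(classify(Event(harm=False, preventable=False, error=True)))   # sky-high INR, no bleed
```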

Since the patient safety field is concerned with avoiding errors and preventable harm, what would be the point of measuring all harm? Here’s where we take a detour from science into religion, since some harm proponents (well, that’s not quite right – they don’t like harm, they just like measuring and reporting it) have a belief, often quite a passionate one, that all harm is preventable, even if we don’t exactly know how to do so today. The authors of the Health Affairs study, who seem to fall into this camp, write,

Because of prior work with Trigger Tools and _the belief that ultimately all adverse events may be preventable_, we did not attempt to evaluate the preventability or ameliorability (whether harm could have been reduced if a different approach had been taken) of these adverse events. [Underline added]

The “report all harm” camp’s usual Exhibit A is central line-associated bloodstream infections (CLABSI), which might have been called non-preventable a decade ago, but which we now know are largely avoidable with rigorous adherence to a bundle of prevention practices. This is a wonderful and inspiring tale, but as of today, it is an outlier. We don’t know how to prevent all falls, or decubitus ulcers, or blood clots, or handoff errors, or diagnostic errors, and we certainly don’t know how to prevent all surgical complications or medication side effects. And we won’t anytime soon.

That’s not to say that their belief isn’t theoretically right. It might be, ultimately. It’s just that the prevention of all medical harm won’t happen in my lifetime. Moreover, aggressive efforts to rid ourselves of all harm are likely to be cost-prohibitive (we could prevent virtually all falls by placing a sitter in every patient room) or have unacceptable collateral effects (we could also prevent all hospital falls by forbidding patients to walk before we wheel them to the hospital exit at discharge). Neither strategy seems like a great idea.

Because the best evidence says that we can prevent perhaps half of all medical harm in hospitalized patients, I like using “preventable harm” as an organizing principle and a target for action. Last Friday at a conference in Princeton, New Jersey, I happened to hear an excellent presentation by Ken Sands, the director of quality at Boston’s Beth Israel Deaconess Medical Center and a leader in the effort to use preventable harm to measure safety. As Ken explained it, in 2008 the BIDMC governing board set an audacious target of purging the organization of preventable harm by January 1, 2012. This forced the hospital’s leaders to figure out how they’d measure their performance against this goal. Beginning with a definition of harm similar to the IHI’s (though limited to a slightly more severe subset: cases that require or prolong a hospitalization or that result in permanent disability or death), they determine the preventability of each identified case of harm based on two tests:

  1. Did the injury result from a failure to provide care to the existing institutional standard? OR
  2. Could the existing standards have been reasonably changed in a way that would be expected to decrease the risk of future injuries by the same mechanism?
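
For the programmers in the audience, the decision rule boils down to a simple OR of the two tests. This is my own sketch of the logic as Ken described it, not BIDMC’s actual system:

```python
# An illustrative sketch of BIDMC's two-test preventability rule
# (my own rendering, not their actual tooling).

def is_preventable(care_fell_short_of_standard: bool,
                   standard_could_reasonably_be_tightened: bool) -> bool:
    """A harm case is deemed preventable if EITHER test is met.

    Test 1: the injury followed a failure to provide care to the existing
            institutional standard (applied conservatively: any documented
            lapse counts, even one unlikely to have caused the harm).
    Test 2: the existing standard could reasonably have been changed in a
            way expected to reduce the risk of future injuries by the
            same mechanism.
    """
    return care_fell_short_of_standard or standard_could_reasonably_be_tightened

# The two cases described below:
print(is_preventable(True, False))   # VAP after a missed oral-hygiene shift -> preventable
print(is_preventable(False, True))   # wrong-level spine surgery despite met standards -> preventable
```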

BIDMC’s interpretation of these tests is impressively conservative. For the first criterion, they’ll call a harm case preventable even if the lapse seems unlikely to have caused the harm: if a patient developed a ventilator-associated pneumonia after a week-long ICU stay, and they identify a single nursing shift during which oral hygiene or bed elevation was not documented, they’ll deem that case preventable (even though it is quite unlikely that the single omission led to the infection).

As for the second criterion, Ken described a wrong-level spinal surgery case in which the OR team followed the existing standards for a surgical time-out and site marking. When BIDMC analyzed the case, they found that each of their neurosurgeons was using a different system to prevent wrong-level spine surgery, and that the existing standards, which work reasonably well in preventing wrong-site surgery (e.g., right vs. left leg), are insufficient for wrong-level spine surgery (we’ve seen the same thing at UCSF). After their root cause analysis, they developed a new, tighter process. Because of this enhancement, they labeled the case as representing preventable harm, per their second criterion.

The advantages of using preventable harm instead of all harm are that the former is actionable, and that the resulting number is large enough to be galvanizing but not so overwhelming as to be demoralizing and paralyzing. Ken described an impressive program at BIDMC to attack the underlying causes of preventable harm. His quality department reports BIDMC’s preventable harm on a public website, and presents more granular case-based data and stories on the hospital’s intranet. While an effort to attack overall cases of harm would elicit a ho-hum response, avoiding preventable harm has become a rallying cry on many units of the hospital. Although BIDMC won’t meet the board’s audacious goal of total elimination of preventable harm by 2012, they’ve made great progress on a number of fronts.

Finally, there is the matter of errors. BIDMC explicitly chose to focus on preventable harm rather than errors, and I think that’s the right call. But I hope that institutions reserve a small amount of their patient safety bandwidth for errors that don’t lead to harm. One of the problems of coming at patient safety by working backwards from harmed patients (such as the ones identified by the Global Trigger Tool) is that you overlook many cases involving near misses, which sometimes have great value in understanding an organization’s vulnerabilities (see Albert Wu’s excellent new book on close calls for more on this).

Why is preventability so important? If all harm is deemed preventable, then a study like this week’s (showing one patient harmed in every three admissions) nearly begs for armies of regulators and accreditors (not to mention malpractice attorneys) to storm America’s hospitals, turning up the heat. And if they’re all preventable, then perhaps they should all be publicly reported and hospitals should have their reimbursements slashed for high rates of harm. To me, this seems wasteful, overly depressing, and manifestly unfair.

On the other hand, by focusing on preventable harm, you end up with a manageable target to shoot at (about 150 cases a year at BIDMC). Not a small one, mind you – now instead of one in three hospitalizations with harm, you’d be analyzing preventable harm in one in 7-8 admissions – but one that you can get your arms around. And you’d be on firm footing using all of our available tools to promote improvement, such as local rate tracking and investigations, as well as public reporting or payment changes. Finally, you’d catalyze research designed to increase the fraction of all harm that is, in fact, preventable.
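
The arithmetic here is rough but easy to check. This back-of-envelope is my own, not a calculation from the study:

```python
# Back-of-envelope (my own arithmetic, not from the Health Affairs paper):
# if one in three admissions involves harm and roughly 40-50% of harm is
# preventable, the preventable-harm rate lands near one in six to eight.
harm_rate = 1 / 3
for preventable_fraction in (0.40, 0.45, 0.50):
    rate = harm_rate * preventable_fraction
    print(f"{preventable_fraction:.0%} preventable -> 1 in {1 / rate:.1f} admissions")
```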

In other words, you’d be in a far better place, which is why I believe that preventable harm – augmented by a system to detect at least some of the near misses – is the right target for institutional safety programs and for state and national initiatives focused on improving safety.

One more word about the Health Affairs study. I believe the paper moves the ball forward by demonstrating just how much better chart review is at finding cases of harm or preventable harm (particularly when it’s made more efficient through the use of the Global Trigger Tool) than local incident reporting or the AHRQ Patient Safety Indicators. (I’ll return to this subject in my next post.) But the paper is also a cautionary tale: studies that report rates of all-cause harm without using robust methods to judge preventability are virtually guaranteed to be misinterpreted by patients, the media, and policymakers.


Comments

  1. Michael Rie, April 11, 2011 at 5:35 pm

    Dear Bob

    This is a very important area you are plowing. Distinguishing preventable from non-preventable harms will become mired in comparative public-reporting surveillance of hospitals, and of units within hospitals, while providers continue to react to the public reporting. In the February issue of Critical Care Medicine, Teplick provides an in-depth critique of catheter-associated bloodstream infection rates and distinguishes CLABSI from CRBSI (catheter-related bloodstream infection). The complexity of temporal blood culture timing and other clinical contextual issues will always be beyond the public’s understanding of epidemiologic surveillance data. As Teplick points out, the surveillance function of comparison may be useful but will have problems in applying bonuses or non-payments by such criteria. This does not diminish the utility of some narrow, specific core measures, like the timing of preoperative prophylactic antibiotics.
    Michael Rie

  2. Jan Krouwer, April 11, 2011 at 11:26 pm

    You equate “no overt error identified” with no error. But an error can be covert – hidden or not observed. Not finding the cause of an error is not the same as no error.

  3. Noel E, April 13, 2011 at 2:47 am

    One other thing about the paper in Health Affairs: the charts reviewed were from 2004, six and a half years ago. It’s not clear how relevant the findings are to the current reality… a lot has changed in patient safety since then.

  4. Brian Clay, MD, April 13, 2011 at 3:01 am

    Bob —

    Although BIDMC has set themselves up with an audacious definition of preventable harm, I am concerned that they will run afoul of the “post hoc ergo propter hoc” fallacy — defining a harm event as preventable if (1) the probability of the deviation from standard procedure contributing to the harm is not zero, and (2) the deviation occurred prior to the harm.

    To drive for perfect adherence to medical center policy and procedure is indeed a stretch goal. I’ve got to give Paul Levy and the folks at BIDMC credit for setting their sights high.

  5. […] assumption was effectively disabused by Dr. Bob Wachter in his blog article, “The Science and Religion of Patient Safety: Harm, Preventable Harm, and Trigger Tools (Part I),” largely a review of a Global Trigger paper published in Health Affairs. Wachter concludes, […]


About the Author: Bob Wachter

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.
