Earlier this month, the National Quality Forum released its revised list of “Serious Reportable Events in Healthcare, 2011,” with four new events added to the list. While the NQF no longer refers to this list as “Never Events,” it doesn’t really matter, since everyone else does. And this shorthand has helped make this list, which will soon mark its tenth anniversary, a dominant force in the patient safety field.
The NQF was founded in 1999 at the recommendation of Al Gore’s Presidential Advisory Commission on healthcare quality. For its founding chair, the organization selected Ken Kizer, a no-nonsense, seasoned physician-administrator who had just done a spectacular job of transforming the VA system from the subject of scathing articles and movies into a model of high-quality healthcare, a veritable star in the patient safety galaxy.
Kizer’s original charge at the NQF was to develop a Good Housekeeping seal-equivalent for quality measures (“NQF-endorsed measures”). But soon after he arrived, Kizer added another item to the NQF’s wish list: the creation of a list of medical errors and harm that might ultimately be the subject of a nationwide state-based reporting system. As Kizer said at the time,
This is intended to be a list of things that just should not happen in health care today. For example, operating on the wrong body part [or] a mother dying during childbirth. That’s such a rare event today that it’s generally viewed as something that just shouldn’t happen. Now, there’s probably going to be an occasion now and then when it happens and everything was done right, but it’s so infrequent that it means you have to investigate it every time it occurs. So “never” has quotes around it in this case. Now, wrong-site surgery is a different story—that should never happen. There’s no way that you should take off the right leg when you’re supposed to do the left one. So in this case, never really means never.
Unsurprisingly, the items on the list quickly became known as “Never Events.” Twenty-seven of them were announced in 2002, and the list was expanded and revised four years later. (This primer, written by my colleague Sumant Ranji for our patient safety website, AHRQ Patient Safety Network, is the best description of the list and some of its policy implications.)
While the publication of a list of serious safety snafus was relatively uncontroversial, things grew more interesting after the list became the core of several other efforts to promote safety and quality. For example:
- The majority of states (27) now require reporting of serious adverse events – either the NQF list itself or a slightly amended version known as “Serious Adverse Events.” In some of these states, these reports are made public, often in a yearly report (such as this widely cited one from Minnesota). In others (including California), hospitals must promptly perform Root Cause Analyses (RCAs) after they experience events on the list. Health departments in most of the 27 states can swoop in to investigate hospitals after such reports and levy penalties, including fines. I blogged previously about how California’s 2007 reporting requirement led my hospital to completely overhaul its RCA process, for the better.
- Since 2005, the Joint Commission (TJC) has required reporting of “sentinel events,” defined as “an unexpected occurrence involving death or serious physiological or psychological injury, or the risks thereof.” The NQF list sits at the heart of TJC’s sentinel event list.
- Finally, in 2007 Medicare stopped paying hospitals the extra costs associated with Never Events (I’ve written about this policy here). While the money being withheld is relatively small (in the tens of millions of dollars nationally, because hospitals still receive their extra payments if they document additional complications beyond those on the list, and most times they do), the policy has captured the attention of administrators and providers everywhere.
Kizer’s decision to create the Never Events list was a stroke of genius. Remember that in the early days of the safety field, we didn’t know which errors were important and how best to structure reporting programs. Some governmental bodies responded to this uncertainty by casting a humongous net: some states (most prominently Pennsylvania) or countries (the UK) required that all errors be reported to a central authority. While Pennsylvania has managed to turn lemons into dilute lemonade, putting out periodic reports on trends in reported events (these reports are well done, but I still don’t think the effort is worth the time and money), the UK pulled the plug on its National Patient Safety Agency – set up to manage this reporting system – last year, mostly because the bang of such large-scale reporting wasn’t worth the considerable bucks, er, pounds.
In contrast to these “report everything” systems, the Never Events list lent some structure to reporting systems by putting reasonable boundaries around them. Reporting a small subset of errors or instances of serious harm can allow states or accreditors to gain a better understanding of the problems out there, and to use such data to populate public reporting and reimbursement programs designed to promote safety. Over the past few years, that’s precisely what they’ve done.
So what are the problems with the Never Events list? First of all, many of the events on the list lack standard definitions, meaning that whether an event falls within or outside the list is subject to interpretation. This isn’t a huge deal when the list is used internally, but it’s a major flaw when the list is fodder for public reporting or “no pay for errors” programs.
For example, several years ago I was called by a reporter from Indiana who wanted my opinion on the results of his state’s first published reports of its hospitals’ Never Events. One large hospital system, he told me, had 15 reported events in the past year; another had none. I told him I wouldn’t be caught dead in the one with zero (or perhaps I would!): either the folks at that facility were liars or they didn’t know what was happening on their watch. This vividly illustrated the problem: with relatively ambiguous definitions for many of the events, there is a risk that hospitals with more aggressive surveillance systems or more honest reporting might look worse than their competitors, while actually being better.
I also continue to worry about unanticipated consequences. As Sharon Inouye wrote a couple of years ago in the NEJM, one way to prevent all patient falls (Never Event #16) is to tether patients to their beds, thereby foregoing the demonstrated benefit of early ambulation. To further illustrate the point, last year my UCSF colleague Som Mookherjee reported a clever study in which he asked our residents about how they would respond in a variety of clinical scenarios related to potential Never Events. Residents randomized to receive additional information about the “no pay for errors” policy were significantly more likely to favor care that, while resulting in better reimbursement, was clinically inappropriate.
Medicare’s “no pay for errors” policy raises other questions. While the program has resulted in relatively puny reimbursement cuts to date, as other insurers get into the act – or as CMS feels more and more pressure to cut its costs – what began as a sensible promoter of safety is likely to morph into a budget axe cloaked in the reassuring garb of patient safety. For example, I know of several instances in which private insurers have tried to suspend all payments to hospitals after the occurrence of a Never Event.
That might not seem unreasonable until you take a closer look at what it means. One case involved an elderly patient with multi-organ system failure in a community hospital’s ICU. The hospital was being paid on a per diem basis by the private insurer. On day 8 of what was to be a 95-day (and several-hundred-thousand-dollar) hospitalization, the patient was noted to have a new decubitus ulcer, a common complication in sick, malnourished ICU patients, and one that sometimes occurs despite perfect care. The private insurer moved to withhold payments to the hospital… for days 9 to 95! After all, the insurer’s representatives pleaded quite innocently, “this was a Never Event – something that never should have happened.” Luckily, the hospital fought back and won, and I haven’t heard of such policies being enacted recently. But I don’t think we’ve heard the last of this kind of egregious misappropriation of the concept.
There is also the usual matter of “what you measure matters.” The Never Events list doesn’t capture certain types of errors, such as diagnostic errors and errors of overuse (unnecessary CT scans, for example). To the extent that the list becomes the focal point of policy efforts to promote safety, we risk further skewing the field toward the events it contains and away from equally important events that it misses. For example, just last week a study of settled malpractice claims found that ambulatory errors made up nearly half of all claims; of these, diagnostic errors were the most common type. None would have been captured by the Never Events list, nor addressed through any public reporting or payment policies that flowed from it.
Finally, there remains the issue of preventability. As the list has expanded, more and more items on it – while unambiguously being “serious adverse events” – are not known to be fully preventable. If they were 90 percent preventable, let’s say, I could live with the unfairness of having the events be publicly reported or the subject of “no pay” policies. But some of them are no more than 50 percent preventable, at least according to today’s science. To penalize a hospital (by fining them or by cutting their reimbursement) when they did everything right seems manifestly unfair.
What to do? I like the proposal, first advanced by Pronovost and Colantuoni in 2009, to report the outcome alone (i.e., the occurrence of an event on the list) when the harm is known to be almost fully preventable, such as in the cases of central line-associated bloodstream infections or wrong-site surgery. But, when the adverse event is only partly preventable (such as with post-op DVT or ventilator-associated pneumonia), they suggest the use of a linked outcome-process measure. Under such a system, a partly preventable adverse event would trigger a chart review looking for evidence of appropriate processes of care. If such evidence was absent (a post-op DVT with no documented DVT prophylaxis; a case of VAP with no documented mouth cleansing or head-of-bed elevation), then one could deem the harm a “preventable adverse event,” subject to public reporting, “no pay” programs, or, if sufficiently egregious, even fines. If the hospital or system did everything right, on the other hand, then any implication that it was a preventable event is unjust.
Moreover, at a more macro/policy level, if the best literature says that only 30-50 percent of decubitus ulcers or serious falls can be prevented despite perfect care, the assumption of 100% preventability – a logical assumption if one bought the term “Never Event” – is grossly misleading.
As we approach the 10-year anniversary of the National Quality Forum’s list of Serious Reportable Events in Health Care, let’s take a moment to applaud the list, and Ken Kizer’s visionary leadership in creating it. There is no question in my mind that the list has been the “Intel Inside” for several policy initiatives that have propelled the patient safety field forward.
But we must be thoughtful with this and similar lists of adverse events, taking care to modify them as new evidence emerges, and paying particular attention to preventability. If healthcare organizations and providers are to be penalized for adverse events through public reports, accreditation or regulatory actions, or payment cuts, we must ensure that these penalties are triggered by the failure to adhere to evidence-based processes of care, or by such high rates of events that they cannot be attributed to random chance alone.
The term “Never Events” was an eye-catching bit of spin that helped the NQF list capture its rightful place in the pantheon of patient safety and quality measures. While the term remains appropriate for the egregious events of the type that Ken Kizer envisioned a decade ago, for the rest of the NQF’s and similar lists of serious adverse events – the decubitus ulcer in the elderly patient who has received perfect care, the deep venous thrombosis in the post-op patient who received evidence-based prophylaxis, the healthcare-acquired infection in the patient who received all parts of the correct “bundle” – it seems a good time to say, never say never (event).