Last year, I (with Peter Pronovost) wrote the toughest paper of my life – one that critiqued the Institute for Healthcare Improvement’s 100,000 Lives Campaign. This is the healthcare equivalent of criticizing both Mother Teresa and your local food bank in a single sitting (you can also read Don Berwick and his team’s response here). Although some of our concerns were over IHI’s methodologically suspect “122,300 Lives Saved” estimate, we also criticized the Campaign’s decision to include the establishment of a Rapid Response Team as a national standard.
Don’t get me wrong. The concept of a Rapid Response Team is attractive. It isn’t hard to find patients who die in hospitals or require emergent transfer to the ICU in whom evidence of deterioration was present for hours – sometimes days – before the crash. In some of these cases, nurses failed to recognize the early signs of deterioration (tachycardia, tachypnea), or, even more troubling, did recognize these signs but were unable to summon the cavalry. How could this happen? Sometimes, there simply was no one around to help. Other times, busy docs didn’t want to be bothered, or failed to recognize their own limitations, or rigidly adhered to an unyielding hierarchy (as when the intern doesn’t want to demonstrate his “weakness” by calling the senior resident or attending).
So who could argue against the concept of a Rapid Response Team, sort of a Code Blue Team in drag? The first reports of these Teams appeared in the literature in about 2004, and anecdotal evidence began emerging from early adopters that RRTs were saving lives left and right, swooping in to catch an early deterioration and either fix the problem or safely and calmly transfer the patient to the ICU.
So far, no problema. From my vantage point, RRTs were a reasonable idea designed to address an important problem. It was great that hospitals were experimenting with them, and terrific that some respected academic leaders (in the U.S., most notably Pitt’s Michael DeVita) embraced the concept, hosting conferences, catalyzing research, and even sharing their bias that this was probably a better mousetrap. Even better, researchers in Australia chose to organize a multi-site trial of RRTs to see whether the concept truly lived up to its hype.
But here’s where the ball left the fairway. RRTs emerged at a time of increasing pressure on hospitals to implement a variety of safety and quality practices, some with scanty evidence of benefit and the potential for significant disruption. Given the only-anecdotal evidence that RRTs help, and the results of the Australian multi-center trial – which showed absolutely no benefit for the practice – here was an idea that was truly not ready for prime time, if by prime time we mean a push for universal adoption. An important paper, written by the Hopkins group, made this point eloquently, under the headline “Walk, Don’t Run.” And a recent systematic review and consensus conference confirmed the overall dearth of strong supportive evidence.
That’s where our 100K Lives critique came in. I am a big fan of IHI, and Don Berwick is one of my heroes. Don is brilliant, charismatic, and visionary, and IHI has been an important force for good. But I believe that IHI’s decision to include RRTs as one “plank” in their “100,000 Lives Campaign” was a mistake.
The campaign’s unstoppable momentum created an environment in which many hospitals felt that they literally had no choice but to adopt RRTs. Some of them probably saved some lives. But others, I’m convinced, failed, in part because we knew so little about how to organize these Teams effectively. Moreover, even if they do work, we have no idea whether the money spent on RRTs was a good investment compared with alternative uses, such as on computerization, teamwork training, or more nurses or pharmacists. (There was some magical thinking in the early days about how an RRT could be implemented for nothing. I’d love somebody to explain to me how a trained nurse or doctor can leave his or her post for several hours a day at no cost to the system – what exactly were these people supposed to be doing when the RRT call came if leaving their primary job was inconsequential?)
As Yale’s Harlan Krumholz, one of the nation’s top outcomes researchers (and, I’m pleased to say, a former UCSF resident) recently wrote, “To transform a guideline recommendation into a performance measure, the supporting evidence should be unassailable and the net benefit to patients should be clear…. Those who call physicians to account for their clinical decisions must use performance measures that are based on strong science. A crucial—but often overlooked—element of the scientific rationale is a proven link between guideline adherence and outcomes.” And my colleagues Andy Auerbach, Kaveh Shojania, and Seth Landefeld echoed this point in their very thoughtful NEJM essay on evidence-based standards for quality.
At my own institution (UCSF Medical Center), our original RRT structure was a disaster (one I take partial blame for, as a member of the RRT committee that cooked up the idea). We did it on the cheap, using a hospitalist-based team appended to our med consult service. The floor nurses were instructed to call the primary medical or surgical team first and wait a while before calling the RRT. Nobody ever did, and there was confusion galore over who the team was, what it did, and when one should call.
Earlier this year, we bit the bullet and created (at a cost of several hundred thousand dollars per year, in the form of 4.5 RN FTEs) an RRT staffed by dedicated ICU-trained nurses and RTs. My impression is that it is doing some good, but I still can’t tell whether it is worth the investment. An early peek at the data (mortality and out-of-ICU codes) does not demonstrate clear benefit. But the nurses clearly like it, as do many of the docs, so it is likely here to stay.
Which is probably a good thing. I applaud my institution for experimenting with different RRT models, and studying the results. This is what should be happening at hospitals everywhere. And I applaud the Joint Commission for its 2008 National Patient Safety Goal, which requires that hospitals have a system to identify deteriorating patients and deal with them effectively, a standard that will encourage hospitals to innovate in addressing an important problem. Many of these innovations, but not all, will include RRTs, of one flavor or another.
Finally, I applaud researchers for continuing to pursue the question of whether RRTs actually lead to benefit. Just this week, a major article in JAMA showed a stunning 18% mortality improvement associated with an RRT program at Packard Children’s Hospital, and two recent single-hospital studies (here and here) also found benefit. But, also this week, another article chronicled the challenges of implementing an obstetrical RRT. The plot is thickening more than my aunt’s Thanksgiving gravy.
Watching this literature closely over the past few years, I think the evidence (particularly the JAMA paper, though one wonders whether this Children’s Hospital study’s results are generalizable to adult cases) is slowly mounting that RRTs are a good idea. I hope they are. That said, it doesn’t change my feeling that we should be careful about making complex and expensive practices “must-haves” before we are reasonably certain of their benefits, their costs, and the best ways to implement them.