Could It Be That Patients Aren’t Any Safer?

By Bob Wachter | November 25, 2010

On the occasion of last year’s tenth anniversary of the IOM Report on medical mistakes, I was asked one question far more than any other: after all this effort, are patients any safer today than they were a decade ago?

Basing my answer more on gestalt than hard data, I gave our patient safety efforts a grade of B-, up a smidge from C+ five years earlier. Some commentators found that far too generous, blasting the safety field for the absence of measurable progress, their arguments bolstered by “data” demonstrating static or even increasing numbers of adverse events. I largely swatted that one away, noting that metrics such as self-reported incidents or patient safety indicators drawn from billing data were deeply flawed. Just look at all the new safety-oriented activity in the average U.S. hospital, I asked. How could we not be making patients safer?

I may have been overly charitable. Today, in an echo of the Harvard Medical Practice Study (the source of the 44,000-98,000 deaths/year from medical mistakes estimate, which launched the safety movement), a different group of Harvard investigators, led by pediatric hospitalist and work-hours guru Chris Landrigan, published a depressing study in the New England Journal of Medicine. The study used the Institute for Healthcare Improvement’s Global Trigger Tool, which looks for signals that an error or adverse event may have occurred, such as the use of an antidote for an overdose of narcotics or blood thinners. Following each trigger, a detailed chart review is performed to confirm the presence of an error, and to assess the degree of patient harm and the level of preventability. While the tool isn’t perfect, prior studies (such as this and this) have shown that it is a reasonably accurate way to search for errors and harm – better than voluntary reports by providers, malpractice cases, or methods that rely on administrative data.
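For readers who like to see the logic concretely, here is a minimal sketch (in Python, with invented record fields, trigger names, and numbers) of the two-stage approach the paragraph above describes: an automated pass flags charts that contain a trigger, and only flagged charts go on to manual review, where harm and preventability are confirmed. It illustrates the idea, not the IHI instrument itself.

```python
# Toy two-stage screen in the spirit of a trigger tool (illustrative only; all
# fields, trigger names, and numbers below are invented for this example).

from dataclasses import dataclass

# Hypothetical "triggers": medications whose administration often signals that an
# adverse event (e.g., oversedation or over-anticoagulation) may have occurred.
TRIGGER_MEDS = {"naloxone", "vitamin K", "flumazenil"}


@dataclass
class Chart:
    patient_id: str
    meds_given: set
    patient_days: int
    confirmed_harm: bool = False  # set during the (simulated) manual review


def screen_for_triggers(charts):
    """Stage 1: cheap automated screen; flag any chart containing a trigger med."""
    return [c for c in charts if c.meds_given & TRIGGER_MEDS]


def harms_per_1000_patient_days(charts):
    """Stage 2 output: confirmed harms expressed per 1,000 patient-days."""
    harms = sum(1 for c in charts if c.confirmed_harm)
    days = sum(c.patient_days for c in charts)
    return 1000 * harms / days if days else 0.0


if __name__ == "__main__":
    cohort = [
        Chart("A", {"metoprolol"}, 3),
        Chart("B", {"warfarin", "vitamin K"}, 7),   # trigger present
        Chart("C", {"morphine", "naloxone"}, 5),    # trigger present
    ]
    flagged = screen_for_triggers(cohort)
    flagged[1].confirmed_harm = True  # pretend a reviewer confirmed one harm
    print(f"{len(flagged)} of {len(cohort)} charts flagged for review")
    print(f"{harms_per_1000_patient_days(cohort):.1f} confirmed harms per 1,000 patient-days")
```

The point of the design is that the expensive step, detailed chart review by clinicians, is reserved for the small subset of charts the automated screen flags.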

Using this method in a stratified random sample of ten North Carolina hospitals, the authors found no evidence of improved safety over the period from 2002 to 2007.

Before taking out the defibrillator paddles and placing them on our collective temples, it’s worth considering the possibility that the findings are wrong. We know that the Trigger Tool misses certain types of errors (such as diagnostic or handoff glitches; it’s worth looking at this recent paper by Kaveh Shojania, which emphasizes the importance of using multiple methods to get a complete picture of an organization’s safety), and perhaps the study overlooked major improvements in these blind spot areas. That said, the tool does capture a sizable swath of safety activities – and the lack of improvement in those areas is still disappointing.

I guess it’s also possible that these ten North Carolina hospitals are unrepresentative laggards. But North Carolina has been relatively proactive in the safety world, and these hospitals volunteered to participate in the study, an indication that they were proud of their safety efforts. While I would have liked a bit more information about the state of the safety enterprise at each hospital (did they have computerized order entry during the period in question, for example), I think the findings are generalizable.   

Another slight caveat surrounds measurement and ascertainment bias. Because safety is far harder to measure than quality (the latter can be captured with measures like door-to-balloon time and aspirin administration after MI – and, as Joint Commission CEO Mark Chassin notes in Denise Grady’s article in today’s NY Times reviewing the Landrigan piece, these types of publicly reported quality measures have been improving briskly), there is always the risk that things will look worse when people begin looking for harms more closely… which, of course, they must do to make progress. This is the fatal flaw when we think about using provider-supplied incident reports to measure safety. While the Trigger Tool is more resistant to this concern, it is not completely immune. For example, the hospital that is more attuned to preventing decubitus ulcers will undoubtedly examine patients more carefully during their hospitalization for signs of early bedsores. The Trigger Tool might mistakenly read these “extra cases” as evidence of declining safety. The same holds for falls: our new attention to fall prevention may cause us to chronicle patient falls more carefully in the chart. But such issues only raise concerns for a minority of the triggers; I can’t see how measuring administration of antidotes for oversedation and overanticoagulation, or 30-day readmission or return-to-OR rates, should be biased by a hospital’s greater focus on safety.

So, despite my best efforts at nitpicking, I’m left largely believing the results of the Landrigan study. Lots of good people and institutions have spent countless hours and dollars trying to improve safety. Why isn’t it working better?

I think the study tells us something we’ve already figured out: that improving safety is damn hard. Sure, we can ask patients their names before an invasive procedure, or require a time out before surgery. But we’re coming to understand that to make a real, enduring difference in safety, we have to transform the culture of our healthcare world – to get providers to develop new ways of talking to each other and new instincts when they spot errors and unsafe conditions. They, and healthcare leaders, need to instinctively think “system” when they see an adverse event, and favor openness over secrecy, even when that’s hard to do. Organizations need to learn the right mix of sharing stories and sharing data. They need to embrace evidence-based improvement practices, while being skeptical of practices that seem like good ideas but haven’t been fully tested. And policymakers and payers need to create an environment that promotes all of this work – policies that don’t tolerate the status quo but steer clear of overly burdensome regulations that strangle innovation and enthusiasm.

In other words, the fact that we haven’t sorted all this out only seven years after the launch of the Good Ship Safety shouldn’t be too surprising. And my sense – although I can’t prove it – is that things are starting to improve more rapidly. Remember that the observation period in the North Carolina study ended in 2007. The first several years of the safety field involved skill building and paradigm changing. Some of the big advances in safety – the embrace of checklists, more widespread implementation of less clunky IT systems, mandatory reporting of certain errors to states, widespread use of root cause analysis to investigate errors – all began in the 2005-2008 period (and some, like IT, are really only cresting now). It will be crucial to follow up this study over time to see if there are signs of progress. I suspect the results will be more heartening.

What now? As I’ve noted many times before, I worry that a harmful orthodoxy has crept into the safety field. We need to figure out ways to ensure that we do the things that we know work, like checklists to prevent central line infections and surgical errors, fall reduction programs, and teamwork training. We need to develop new models for those areas that haven’t worked as well as we’d hoped, like widespread incident reporting and CPOE. We must do the courageous and nuanced work of blending our “no blame” model with accountability when caregivers don’t clean their hands or perform a pre-op time out. And we must allocate the resources, at the institutional and federal level, to do these things and study them to be sure they’re working.

The study by Landrigan and colleagues is a wake-up call. Let’s figure out what’s working, and do more of it. Let’s figure out what’s not working, and do something different. And let’s not stop until we can prove that we have made our patients safer.

Happy Thanksgiving to you and yours.


9 Comments

  1. Menoalittle November 25, 2010 at 5:06 am

    Bob,

    Your usual articulate style is a pleasure to read.

    I find the safety analysis of these complex systems superficial and insufficient.

    With all of this CPOE and EMR kool aid being served, the Landrigan study was particularly deficient in its failure to distinguish between wired and unwired hospitals.

    Though Landrigan laments, in the NY Times article, that only “17 percent of hospitals have such systems” (ie CPOE devices), if these care controlling devices are so superb, why did he not give them a boost by using his study to show how much they cut the errors on which he reports? It is highly likely he would have, if they did.

    Unfortunately, decades of highly evolved safety resilience were switched off with the flip of the CPOE switch.

    Even Leapfrog has backed off of its delusions of the CPOE safety panacea. But you have not, Bob. Why not? What is the proof that wiring an entire medical care system provides better outcomes and reduces hip fracturing falls?

    Getting back to the Landrigan study, was it that the unwired hospitals had nurses and physicians who had more time to tend to the real patients in bed, rather than tending to the idiosyncratic, meaningfully unusable, cognition depleting, care delaying, communication disrupting CPOE devices that have become a de facto patient, requiring sophisticated IT problem solving skills and training in order to reconcile, retrieve, and clinically correlate the data for the patient neglected in bed? Quite a mouthful, no?

    The mistakes we have seen are legion. How did you like the blank screens in the Chicago hospitals recently? Those patients got a lot of timely care, correct?

    Pathetically, there is neither after market surveillance of EMR and CPOE devices, nor pre-market oversight. What do you expect, a safety miracle when the vendors use a billing platform for clinical care and a vendor originated certification process that ignores safety and efficacy?

    Doctors would be wise not to compromise their patient care, and to put the purchase of such devices on hold until we know the truth about their safety.

    Best regards,

    Menoalittle

  2. Hilda Simpson, MD November 25, 2010 at 6:02 pm

    “…like checklists to prevent central line infections and surgical errors, fall reduction programs, and teamwork training. We need to develop new models for those areas that haven’t worked as well as we’d hoped, like widespread incident reporting and CPOE.”

    This expresses naivety as to the root causes of unexpected and “never” errors, and most HIT systems, not being team players, will be proven to be a big fat negative on the overall safety of the hospital environment. Thus far, the HIT sellers have done an excellent job with their HIMSS and CCHIT and CHIME to sell vapourware under the illusion that it improves safety. Where is the unbiased evidence?

  3. Gawfyxkj November 26, 2010 at 7:09 am

    nice post!!

  4. rsm2800 November 28, 2010 at 3:14 pm

    I agree with some but not all of Menoalittle’s comments about the limitations of EHRs and CPOE, particularly the risks arising from the lack of “after market surveillance” of EHRs. The American Medical Informatics Association’s recent position paper on ethics and HIT vendor contracts raises this very point and calls for the HIT community to “re-examine whether and how regulation of electronic health applications could foster improved care, public health, and patient safety.” See this abstract. However, I disagree with Menoalittle’s assertion that the “vendor originated certification process…ignores safety and efficacy” – assuming this is a veiled reference to the Certification Commission for Health Information Technology (CCHIT). The Oncology Workgroup of CCHIT recently published its draft criteria for ambulatory oncology add-on EHRs, here.

    By my count 27 of the 47 proposed criteria deal directly with patient safety, and probably others do as well. Perhaps some may feel it is a case of too little too late, but given the inherent risks associated with chemotherapy administration, it seems a reasonable initial step at this time.

    The criteria are open for public comment until 12/10/10 at this site.

  5. Maggie Mahar November 28, 2010 at 7:46 pm

    This, together with your post on the recent NEJM article explaining the many factors that conspired to create a “breathtaking error,” underlines the need for hospitals and reformers to focus on creating “systems” that protect doctors, nurses, and other hospital workers against their own inevitable fallibility.

    (I’ve written about both of these posts on HealthBeat.)

    Eventually, healthcare IT will play a role, but at the moment, the state of the art leaves much to be desired. Too many vendors are selling, and selling hard, without adequate knowledge of how hospitals actually operate, and without enough support after making the sale.

    We need to study how the VA, Kaiser and others have made IT work to reduce errors.

  6. Kerry OConnell November 29, 2010 at 6:47 pm

    Hello Bob
    It sounds like you have changed your outlook a bit since we spoke in Keystone. The spate of studies this year showing no safety improvement is certainly not a surprise to patients who have been harmed, nor do I believe it is a revelation to our physician community, who witness the harm every day. A better question might be why we would expect patients to be any safer when physicians and nurses still work utterly insane hours. After 10 years we still cannot agree on the really simple things, like the color and type of connectors IV tubes should have or what a standardized patient chart should contain. Perhaps during the past 10 years our system has been treating the symptoms of harm instead of digging deeper to find the root causes of our system failures.
    I would start the digging by linking several dozen claims databases from our largest med mal insurers to create a very comprehensive picture of nationwide harm. We could then identify the procedures, specialties, and physicians that cause the most harm. Step 2 would be to create a nationwide database of RCAs to dig even deeper into the weakest system links. The process will undoubtedly find an abundance of issues which, as you say, may be really hard to fix.
    Personally I feel that one of our biggest root causes is that half of healthcare providers do not take patient safety seriously, specifically the male half. Go to any patient safety conference and you will find 80% of the attendees to be women. Go to an infection control conference and you will find 90% of the participants to be women. Even among patient advocates the vast majority are women. Women cannot make this system safe all by themselves. Male provider/leaders must find the courage to step up and take responsibility for the failed outcomes and flawed systems which are far too common. Perhaps the current generation will never understand, but then we must find a way to instill in the 22-year-olds in medical school that sharing their failures is far more important than maintaining some mythical reputation. We have to build a generation that truly believes, from the bottom of their hearts, that “just because medical errors happen does not mean that they must happen”!

  7. weakanddizzy December 8, 2010 at 2:00 am

    Kerry OConnell
    Do I detect a bit of gender bias in your post? As a male physician who cares deeply about the harms our medical system inflicts on patients, I think you correctly surmise that many physicians do not take safety issues seriously. We need to build a system that rewards physician behaviors that promote patient safety. Unfortunately, our current system rewards procedural volume, and so that is what we get.

  8. ndmd11 December 20, 2010 at 2:04 pm

    Thank-you Menoalittle… amen! I do not believe the proliferation of checklists and “quality”-based initiatives has necessarily improved patient care. The main issue that I see (as a front-line hospitalist) is that the patient chart is already a foot thick before the patient even enters the hospital, and time spent checking boxes and responding to third parties not directly involved in my patients’ care robs time I could spend actually CARING for patients. The various protocols are well intentioned and sometimes effective, but they steal a lot of my valuable time: time that could be spent extracting histories, talking to patients, and actually laying hands on people and interacting with families. In effect, I spend more time justifying my care and documenting what I do than actually doing. This is a disconnect that needs to be reconciled. My anecdotal experience (isn’t all life experience anecdotal? but I digress… too human and not statistically relevant, I guess) is that an integral component to quality care is actually caring for the patient and having the TIME to do it. Unhurried, unfettered and good ol’ fashioned human interaction needs to be reinforced, encouraged and celebrated. The CPOE, protocols, checklists, etc. need to be designed to make this more likely to happen and help restore time spent with patients! I believe quality of patient care will improve on many fronts when this happens.
    That’s my two cents.

  9. autumn November 21, 2012 at 1:52 pm

    I agree with ndmd11 that “Unhurried, unfettered and good ol’ fashioned human interaction needs to be reinforced, encouraged and celebrated”. Unfortunately, many practitioners “examine” the chart versus examining the patient. How many times have you seen colleagues document (care provided) in the patient chart without walking into the patient room?

    So many root cause analyses in healthcare come down to a lack of communication. Talk to the family, the patient, the disciplines involved, nursing, and consultants. Sometimes even more important is listening. This collaboration is by far best practice.

    CPOE and EHRs, in my opinion, are long overdue. We live in the 21st century. Put down the old-fashioned pen, please. No one can read your handwriting anyway.


About the Author: Bob Wachter

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.
