How UCSF’s Root Cause Analysis Process Became Our Most Useful Patient Safety Activity

By Bob Wachter  |  December 17, 2009

Hospitals face so many urgent tasks in safety – computerize, promote teamwork, implement evidence-based safety practices, discover unsafe conditions – that it’s hard to know where to start. If you’re struggling, I recommend that you put your Root Cause Analysis enterprise on steroids. This is what we did at UCSF Medical Center, and it was the most important change we’ve made in our safety journey. Here’s the story, a case of function following form:

RCA, like many of our approaches to patient safety, is familiar to other industries (such as engineering and aviation) but, until recently, alien to medicine. It involves the dissection of an error or a near-miss with an eye toward getting at “root causes” – the underlying system flaws (“latent conditions”) that set up the individual caregivers to cause harm. It doesn’t deny the possibility of human error, but recognizes that while a knee-jerk response that focuses on the person with the smoking gun may be satisfying, it leaves the underlying flaws in the system unaddressed – a setup for a repeat performance.

I first became aware of the power of the RCA when we launched our Quality Grand Rounds series in the Annals of Internal Medicine in 2002. Our first case in the series involved a breathtaking error: a patient received an invasive cardiac procedure intended for another patient with a similar last name. In their masterful analysis of this case, Drs. Mark Chassin (now President of the Joint Commission) and Elise Becher highlighted 17 individual problems that set up the hospital for this mishap. They wrote:

Even though many individuals made errors, none was egregious or causative by itself. Instead, the systemic problems of poor communication, dysfunctional teams, and the absence of meticulously designed and implemented identity verification procedures permitted these errors to do harm. Just as we screen asymptomatic patients for hypertension, all health care systems should assess how well communication, teamwork, and protocols are functioning. Just as treating hypertension effectively prevents strokes, addressing underlying system flaws will greatly increase the likelihood that the inevitable errors of individuals will be intercepted and prevented from causing harm.

I immediately became an RCA fan. But, in our first several years of conducting them, something seemed amiss – the exercise was not producing the impact we expected.

But why? After all, we were doing what everybody else was (and mostly is) doing. When a bad error occurred, we threw together a group of experts and senior administrators to analyze it. The group consisted predominantly of members of our Risk Management Committee, which cast a legalistic shadow over the proceedings. We then identified the front-line participants in the case, and added them to the invite list. Needless to say, trying to get this gaggle of individuals (committee members and caregivers) in the same room was challenging, particularly when the agenda (for the caregivers) involved a painful rehash of a terrible error. (Think about how enthusiastic you are when scheduling an elective root canal and you’ll get the picture.)

Once we found a workable time slot, often more than a month after the event, the conduct of the RCA was a bit haphazard, with little systematic development of an action plan or follow-up. Although some meaningful changes flowed from these exercises, they were clearly not reaching their potential.

As so often happens in safety and quality, it takes a combination of a strong leader and external pressures to overcome inertia. I’ve introduced you previously to our Chief Medical Officer at the time, Ernie Ring, a passionate guy who wasn’t afraid to break an egg or two. Ernie was also uncomfortable with our RCA process. In mid-2007, the State of California implemented a requirement that hospitals report every serious adverse event (basically the NQF “Never Events” list) to state authorities within 30 days. Ernie sprang into action.

“I don’t think our RCA process gets us where we need to be,” I recall him saying. “We need to approach it differently.” Ernie’s a smart guy, but I’m not sure even he recognized how transformative his solution would become.

First, Ernie announced that the RCA committee (euphemistically called the “Clinical Events Oversight Committee”) would have a standing two-hour meeting each week, every Wednesday from 9-11. “Are you crazy, Ernie? Who has the time for that?” skeptics asked.

OK, I asked.

I was wrong. The decision to have a standing weekly RCA committee meeting was essential. After describing what happens at the meeting, I’ll tell you why.

Now, when we learn of an error (through informal channels or via our incident reporting system), our patient safety manager Kathy Radics does some preliminary investigation, putting together the basics of “what happened” and “who was involved.” On Thursday or Friday of the same week, an email goes out from the Chief Medical Officer’s office to all the involved participants – ranging from a ward clerk to the chair of surgery. Basically, it says that we’ll be discussing your case next Wednesday at 9. Be there. Aloha.

This is a big deal – invariably, some of the docs have surgeries or clinic visits scheduled. But the organization is signaling something vital: learning from our mistakes takes priority over virtually everything else. Yes, it may inconvenience a patient or two, but I’d argue that this prioritization is more “patient-centric” than analyzing the error after all the detailed memories of the incident have faded and dozens more patients have been subjected to unfixed system risks.

“But how do you know you’re going to have a terrible error every week?” you might logically ask. We don’t (and, in fact, we don’t have a terrible error every week, thank goodness). But this is an example of an “unpredictably predictable” phenomenon, and it needs to be addressed accordingly. By way of analogy, for the first 8 years of my hospitalist group’s existence, we had no formal plans to deal with maternity leaves, other than knowing that we needed to cover for them. So I’d get a call from a female faculty member, asking, “Hey Bob, can I talk to you for a minute this afternoon?” (I was generally the second person to know after the spouse). Moments later, we’d begin informing a bunch of people that they were now unexpectedly scheduled for additional clinical service. Their joy over the blessed event was always tinged by the pain of the additional coverage obligation. About 3 years ago, it dawned on me (yes, I’m that thick) that, in a group of 50 young physicians (about 30 of whom are women), while one couldn’t predict which person would get pregnant and when, one could predict that 2-4 people would be pregnant every year. And so we developed a maternity leave “jeopardy” schedule, essentially hiring 1-2 additional FTEs and creating a clear and predictable coverage schedule. When folks on “macro-jeopardy” are tapped for coverage, they are no longer surprised or disappointed. Everybody is happier.

Similarly, we cannot predict which errors will require RCAs, but we can predict that a 600-bed hospital will have 30-40 such errors yearly. Given this, systematizing the schedule to analyze them is essential. Knowing that the committee is available from 9-11am each Wednesday also lowers the threshold for employing the RCA technique for scary near-misses or other issues that might benefit from this kind of scrutiny. Yes, we cancel the first hour of our two-hour meeting (the hour we reserve to review new cases) every now and then… but not very often. Last year, we conducted 40 RCAs.
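
For readers who like to see the arithmetic, here is a minimal sketch of that capacity logic in Python. The annual volumes (30-40 RCA-worthy events for a 600-bed hospital; 2-4 maternity leaves in a group with about 30 women) come from the post; the Poisson arrival assumption, the 52-week year, and the function name are my illustrative choices, not a description of how UCSF actually plans.

```python
# A sketch of the "unpredictably predictable" arithmetic described above.
# Annual volumes are taken from the post; the Poisson model, the 52-week
# year, and the function name are illustrative assumptions only.
import math

def share_of_weeks_with_new_case(events_per_year: float, weeks: int = 52) -> float:
    """Fraction of standing weekly slots expected to have at least one new
    case to review, assuming events arrive independently at a steady rate."""
    rate_per_week = events_per_year / weeks
    return 1 - math.exp(-rate_per_week)

if __name__ == "__main__":
    for annual in (30, 40):
        p = share_of_weeks_with_new_case(annual)
        print(f"{annual} events/year -> about {p:.0%} of weekly slots see a new case")
    # The same logic underpins the maternity "jeopardy" schedule: you cannot
    # say who will be out or when, but 2-4 leaves per year is close to a
    # certainty, so standing coverage beats ad hoc scrambling.
```

The point of the sketch is simply that a standing weekly slot will be busy most weeks, and, as noted above, near-misses and other lower-threshold reviews can fill the rest.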

The committee is ably led by my hospitalist colleague and Associate CMO Adrienne Green, and includes many members of the “C-Suite” (COO, CIO, CMO, CNO), several senior faculty, a few experts in patient safety, and a couple of front-line nurses and docs. During that first hour (9-10am), all the involved caregivers are there as well. Adrienne begins the meeting with introductions, then lays out the plan and the purpose: this is a “no blame” and confidential forum; our goal is to fix system problems; we can only learn about them from front-line providers; and so on. The effect is as calming as it can be, given the meeting’s agenda.

Then we spend about 30 minutes hearing about what happened and dissecting the error, trying to understand the underlying systems factors. This is pretty fluid – although some organizations use a formal structure to ensure that no stone is left unturned, I personally find these things a bit too formulaic and prefer a more conversational forum, as long as the members of the committee are well schooled in safety science and ultimately explore all the key issues.

After coming to a shared understanding of what happened and why, we begin brainstorming fixes. We cover the usual suspects: Was this a staffing issue? Were there cultural issues such as poor communication or steep hierarchies? Does a process need to be standardized or simplified? Would a checklist help? Is there a feasible IT solution? Over the years, we have learned to be skeptical of fixes that involve “increased awareness” or “education”, instinctively favoring solutions that change the process of care or build in forcing functions. We also know that it is easy to suggest solutions while sipping a café mocha in a conference room; the true test will be whether they actually work at the point of care. After we’ve hammered out a tentative plan, Adrienne will often ask the caregivers, “Tell us why this solution, which seems right to us, won’t work in your world.” Better to hear the answer there than inflict the wrong fix.

The last 10 minutes of the hour are devoted to creating a formal action plan. What are we going to do? Who is going to do it? Do they need any new resources? (Last year, we implemented 144 action items from our 40 reviews.) A few specifics about the case are also addressed: Has the patient been informed about the error? (If not, we create a plan to do so.) Should we waive the patient’s bill? (If it was truly our fault, we do.) Does this error need to be reported to the state? (Even when it is a close call, we do that too.)
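
To make the bookkeeping concrete, here is a purely hypothetical sketch of the record such an action plan might produce. The fields simply mirror the questions listed above (what, who, resources, disclosure, billing, state reporting, and a report-back date); the class and field names are my invention and do not describe any actual UCSF system.

```python
# Hypothetical sketch only: a minimal record mirroring the questions the
# committee answers in the last 10 minutes. Names are illustrative.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ActionItem:
    description: str                         # what are we going to do?
    owner: str                               # who is going to do it?
    resources_needed: Optional[str] = None   # any new resources required?

@dataclass
class RCAActionPlan:
    case_id: str
    action_items: List[ActionItem] = field(default_factory=list)
    patient_informed: bool = False        # if not, a disclosure plan is created
    bill_waived: bool = False             # waived when the error was truly ours
    reported_to_state: bool = False       # reported even when it is a close call
    follow_up_due: Optional[date] = None  # report back to the committee in 1-2 months
```

A record like this also makes the follow-up loop described next easy to track: the owner and the report-back date are exactly what the committee checks in the second hour.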

The accountable individuals are then charged with completing any further investigation and implementing the action plan, assisted by Dr. Green, Ms. Radics, and other members of our quality or risk management departments. Although they are clearly motivated by ethics and professionalism, they have another incentive: the responsible individual is scheduled to return to the committee in 1-2 months to report what actually happened.

The second hour of the committee meeting (from 10-11am) is devoted to hearing these reports, usually in 15-minute intervals. The committee members are all still there, so that the same individuals who heard the RCA of the dramatic error a few weeks earlier also review the follow-up report. When a report comes back with, “we stalled out because we encountered political problem X” or “we need to buy a $50,000 piece of equipment,” the senior administrators, who control the resources, are there – and, since they heard the case, they are much more inclined to make hard choices than they would have been had they read a bloodless report prepared by an underling. The RCA committee accepts some follow-up reports, while others are deemed incomplete (with plans for a return report after new steps are taken).

Why is this process transformative? First, the most senior leaders of the organization are devoting two hours a week to error analysis and problem solving. Their time and energy are our most precious, and scarcest, resources. Moreover, my concern that busy people wouldn’t be able to find this much time has been allayed – compared with all of the stultifying meetings those of us with administrative responsibilities attend, this meeting has a unique sense of drama, immediacy, and results-orientation. People, even extraordinarily busy ones, make the effort to be there.

Second, we, the RCA committee members, are improving with experience. We have learned how to function more effectively as a committee, which solutions tend to work (and which don’t), and how to sniff out themes. Today, we might hear a terrible ER error and say, “you know, the underlying issues in that case are very similar to the ones in last month’s case from the endoscopy suite.” And that changes the priority we give to the problem and its solutions.

The meeting also more broadly benefits the organization’s safety culture. Every week, nearly a dozen front-line caregivers are brought face-to-face with senior hospital leaders to discuss a terribly upsetting error. The providers wouldn’t be human if they weren’t anxious about this process, but, in my experience, most leave the meeting feeling that a) the organization is serious about safety; b) I now understand a bit more about that “systems thinking” thing; c) the senior leaders aren’t so bad after all. The effect is to improve the safety culture and problem-solving capacity of our institution, a little bit each week. The participants become mini-ambassadors for safety when they return to their clinical homes.

The process isn’t perfect. Some organizations have included on their committees patient representatives or non-clinical experts in areas like human factors, and we haven’t done that. I recently learned of an organization that conducts some of its RCAs at the actual site of the error (the OR, the L&D suite), finding that an appreciation of the physical setting is often crucial to understanding the problem. We sometimes fix an error at its source but lack the resources to generalize the fix to other clinical areas that carry similar risks.

But I’m quite confident that our RCA transformation has been the most powerful thing we’ve done to improve safety. While some safety fixes (such as computerization and incident reporting systems) have been less useful than I would have expected, our RCA process has been more effective – because it really is much more than an error-analysis exercise. It is an organizational-messaging, culture-changing, capacity-building process.

That sure seems like it’s worth two hours a week.

Comments

  1. SandralpsRN December 18, 2009 at 1:58 pm - Reply

    Our RCA system is very similar, although we are still evolving the process. We meet for an hour every Thursday with the CMO, CNO, Quality Director, Quality team members who have cases, Risk Manager, and Customer Service Director. We’ve recently added customer service complaints to our agenda as we’ve discovered many patient safety issues are reported via Customer Service. However, we assign RCAs and approve action plans at that meeting only. Our RCAs still tend to be 3-4 weeks after the event. I like the idea of a standing RCA but am unsure if we would have the time, as we review many events which could have caused serious harm but did not. I plan to share this post with my boss.

  2. Mark Rogers December 18, 2009 at 4:41 pm - Reply

    I applaud your efforts. As a patient safety guy/risk manager for over 12 years, I have seen many versions of the RCA process used. I am a traditionalist and stick to the Joint Commission template – but I find that, sometimes, this is just not “deep enough.”

    I have seen other systems (though I have never tried them) like the Taproot system and the VA’s system.

    The idea of using a “Care Oversight” committee is a good one and at the hospital I am working at now, we are beginning to involve our existing committee into the RCA process a little more. However, I do have a concern/question.

    Do you think that using physicians, who may or may not be trained in the whole “systems thinking” process, yields the real root cause? My experience has been that some docs look at an issue much like they do a patient: here are the symptoms; here’s the cure. Yet the symptom is just that: a symptom – not the real, underlying problem.

    I would be interested to hear more.

    Thanks for sharing your expertise!

  3. Bob Wachter December 19, 2009 at 7:14 am - Reply

    It has been interesting watching the thinking of the physicians on the committee evolve. Early on, our instinct tended to be to focus too much on the individual — after all, that’s how we were trained and socialized. Later, many of the docs (and others) tended to focus more on education as the best fix. Then we entered a stage in which checklists or double checks tended to be the most common solution. I think we’re now at a point in which we explore the full range of solutions, recognizing that our providers are experiencing a bit of double-check/checklist overload. As our IT scaffolding gets better, I suspect that there will be a reflex to employ computerized solutions.

    In the end, if you are too religious about any one fix, you won’t do patient safety very well. It’s important to explore the full range of solutions and try to fit the best one (or ones) to the circumstances. At this point, I believe the docs on the committee are as adept at this as any of the other members.

  4. jb December 21, 2009 at 4:11 am - Reply

    I’m a doc in the trenches (surgeon) with a healthy skepticism of everything that goes on in the C-suite. (I have been involved in things as chief of surgery and a member of the MEC at 2 medium-sized institutions.) I think that just about everyone involved in these types of activities is well meaning, but so much of it seems like rent-seeking behavior, or stuff done just to satisfy dictates of JCAHO, state regulators, or other folks whose primary task is to justify their jobs.

    My experience, from the bottom up rather than from the top down, is that most screw-ups in hospitals result from someone just not paying attention to the task at hand. The sentinel-type events in 200 bed hospitals are fortunately quite rare, but when they occur, it’s invariably human error, or to put it in the most familiar terms, someone just not paying attention.

    We spend a minute or so at the start of every operation going through the meaningless and counterproductive “time-out.” Is there any documentation that this has ever prevented the problems that it is intended to prevent? In the two hospitals in 2 different states that I have been associated with since it became mandatory, it has been a rote reading of the op permit- I have never seen any nurse go under the drape to recheck the wristband, which should be part of the deal, no? It’s as meaningful as the Pledge of Allegiance in 2nd grade. More effort is put into documenting that it was done than actually checking that we are doing it right- it’s apparently mission-critical that both nursing and anesthesia record that it was done the same minute, but again nobody checks to make sure that we are not doing Mr. Jones’ operation on Mr. Smith. If that does happen, it will be because the nurse was not paying attention.
    The nurse will get in trouble, big trouble, if the time-out is not documented correctly. If I have to interrupt my operation because she forgot to attach the bovie grounding pad (a once per month occurrence), or the sutures I need are not available (once per week at least), she may or may not get in trouble, and I may fuss a bit, but that is not a concern of management.
    Another example: we had a “surprise” JCAHO visit last year. The C-suite went into a frenzy. Six-figure administrators were literally running around the hospital with unsigned charts, waiting outside the OR for me and my colleagues to appear. They were not there to encourage me to close my wounds more securely, or to auscultate my patients’ hearts more carefully, or prescribe medications appropriately. No, it seems that I, and my colleagues, already do that stuff very well. The problem was that there were some orders on charts of patients that had long ago left the hospital (one way or the other), and some of the orders were dated but not timed, or dated and timed but not signed. Again, nobody questioned whether the orders were appropriate for the clinical condition.
    We spend an inordinate amount of time and energy documenting things just because we can, and we pretend that it helps. This takes precious time away from a conscientious nurse or tech or doc actually paying attention to the task at hand. It occurs on our wards and in our clinics where the charts get filled with irrelevant crap, making it harder to find the useful information. In the OR, the circulating RN hunched over the keyboard instead of coordinating the operation is a cliche that surgeons no longer laugh about or protest over- we have been beaten down by forces greater than us, and resistance is truly futile.
    It’s too late to get back to concepts like professionalism, pride in doing the job right, and just paying attention to the job, and it’s impossible to even consider going back to a system where common sense would rule when utilization reviewers think it’s appropriate to interrupt surgery to tell me that I have to change someone’s status from observation to inpatient. Let’s just stop deluding ourselves into thinking that we are doing any good. Most of what we do is just to do it.

  5. PJ December 23, 2009 at 4:05 pm - Reply

    jb, I once had a close call with a wrong-site surgery. If the OR staff hadn’t checked the site and I hadn’t spoken up, it would not have been caught until it was too late. In fact, the confusion should never have been allowed to progress almost to the point of anesthesia. I can understand how the wrong site would have been marked on the X-ray, but clearly the process failed at multiple points. Is this a systems problem or is it inattention to detail, or is it just sloppiness? As far as I know, no one ever went back and reviewed the sequence of events to see if procedures could be tightened up or improved.

    I’m not gonna argue that the executive suite doesn’t set the tone for the whole organization. They need to be as committed as everyone else. But at some point we have to stop using management as the scapegoat/excuse.

    If you check with the Minnesota Department of Health, you’ll find they’ve done some very good work and some observational studies on getting the surgical site right. One of the things they’ve learned is that rote reading is not the way to do it. It frankly bothers me deeply to hear a surgeon saying this is “meaningless and counterproductive.” I don’t want to be the patient who is on the receiving end of this attitude.

  6. jb December 24, 2009 at 4:21 am - Reply

    PJ, it’s meaningless and counterproductive if getting through the time out and documenting that it has been done is more important than actually checking to see that the correct operation is being done on the correct part of the correct patient. That is actually what once occurred in the hospitals and ORs of this country, when professionalism and pride in doing the job right were more important than going through a checklist and making sure that the nurse and anesthesiologist have identical times documented for the “time-out.” Surely you don’t mean to state that absent the mandated time-out procedure, the OR staff would not have checked the site, or that you would not have spoken up before proceeding with the wrong site surgery. I would be delighted to receive data to the contrary, and will expeditiously change my position if I do, but thus far there is no evidence that this meaningless and counterproductive exercise has done anything to decrease the fortunately very low incidence of such disasters. This is very similar to the debate over work hour limitations for residents. Of course nobody wants a physician who is tired. What is provided today is a physician who knows nothing about you except what she was hurriedly told during checkout rounds, and has the attitude that her responsibility for your welfare ends in a few hours. This is the way residents are trained today; the predicted benefits of improved care have not materialized, but poor work habits that are currently being ingrained in our trainees will be carried through their professional lifetimes. I’ll take the tired but conscientious physician over the perky but clueless one, please.
    I completely agree that rote reading is not the way to do it, but my observation is that rote reading is the way that it is done, at least in the 3 hospitals that I have worked in since time-out became the law of the land. If you think about what I wrote, you will likely agree with me – rote reading is actually meaningless and counterproductive, but it satisfies what the OR personnel are required to do. Nobody wants to be on the receiving end of any medical mishap – we are on the same side here. It has been my impression that the time-out exercise is not decreasing and cannot decrease the incidence of such mishaps. Like the expensive and time-consuming Wednesday morning meetings that Dr. Wachter has convened, it makes people feel all warm and fuzzy to believe that they are doing something useful, but evidence that this actually occurs is not available. As Dr. Stead said, “the secret of the care of the patient is in caring for the patient.” Going to meetings and spending time documenting how righteous you are does not benefit the patient. Let’s concentrate on the critical mission: caring for the patient.

  7. Jan S. Krouwer December 24, 2009 at 11:12 am - Reply

    jb,

    Slips (non-cognitive errors) are made by everyone, including people who are paying attention, and are mitigated with double checks (time outs). Most double checks don’t detect an error because none was made, but that does not detract from their value.

    Regulatory bodies focus too much on documentation, perhaps because checking documentation is easier than actually measuring error rates.

  8. PJ December 24, 2009 at 4:16 pm - Reply

    I totally agree that documentation for documentation’s sake does not make sense. Unfortunately that is how many of the accountability systems are designed – to document everything, because if you don’t, then it “didn’t happen.” I think to really make health care safer, what we need is genuine mindfulness of what we’re doing. But I have no idea how to get people on board with this, and the frenetic, hurry-up, understaffed environment surely does not help.

    Re the pre-surgery time out: I think we’re still trying to figure out what is the best and most effective way of doing this. If the final time-out is not decreasing the incidence of wrong-site surgeries, does this mean the time-out doesn’t work? Or is the real issue with how the time-out is being conducted? The MN Dept. of Health came up with several recommendations after doing a series of observational studies in hospital ORs around the state. The findings are posted on their Web site.

    Yes, I do mean to state I would actually have had a wrong-site surgery if it hadn’t been for a final double-check. The mistake apparently happened upstream – the wrong site was marked on an X-ray which was forwarded to the surgeon. The techs had no way of knowing it was wrong and in fact they did not know until I told them. The surgeon wasn’t even present in the room when the final check was conducted. So the entire process, at least in this case, came down to me and whether I was able to catch this myself and say something before it was too late. I find that rather frightening.

    I think this is all a long and arduous process. It doesn’t change overnight, and health care professionals can’t be expected to do this alone.

  9. Jan S. Krouwer December 29, 2009 at 12:18 pm - Reply

    Time outs work but they won’t prevent all wrong site surgeries, see: http://archsurg.ama-assn.org/cgi/reprint/141/4/353.

  10. JOanie guy January 2, 2010 at 9:49 am - Reply

    This site is really interesting to read. The surgeon talking about the lack of value added by the rote time out is speaking from the front line. I do sense that many surgical teams are just going through the motions of a time out and not focusing on the content. However, surgeons are supposed to be the captain of the ship. Last I heard, the captain steers the ship, not the crew. If time outs are going to be meaningful, perhaps the surgeons need education and support in how to steer a team towards a successful time out process.

  12. bev M.D. January 20, 2010 at 1:49 pm - Reply

    OK, I’m late to the party, but my comment to the surgeon jb is, why don’t you take leadership of the time out process and MAKE it meaningful? Why are you just sitting there letting it be rote, or letting the nurse not look under the drape? It’s because you don’t believe in the process yourself, isn’t it?

    I am a pathologist who has seen many, many errors which could have been prevented with better systems thinking. Human error is indeed inevitable (that which you call “not paying attention” will happen to you someday).

    My old boss used to say, there are M.D.’s who have been sued and there are M.D.’s who will be sued. Don’t let it be because you let the process make you cynical about the result.

  13. Frontline RN January 23, 2010 at 1:34 pm - Reply

    Jb, professionals with good work ethics who are passionate about patient safety and have stellar performance still make mistakes. The elements of a timeout in my OR take less than 30 seconds and, in my OR, are led by the surgeons. They start by saying the name of the patient they think they’re operating on, and the nurse checks the armband and reads the name – the anesthetist hollers “ok” or “right on” or some other catchy affirmation that it’s correct. The surgeon calls out the procedure, the nurse reads the procedure off the permit, and the witty anesthetist confirms they match again in his own way. My point is we all know it’s important and we get it done, but instead of making us irritated about the whole thing, the docs figured out a way to make it fun and still retain the meaning. We have the best group of gas guys around! One surgeon and gas guy rap the whole timeout! It’s hysterical but still effective! (But two minutes for a timeout?? That’d never happen in our OR – try lightening up a bit, dude.. life is too short.. just make sure the life that’s too short isn’t your patient on the table! Oh, and one more payoff to a “team” working relationship with the nurses: when we can quit being so tense that you’re gonna lose your head and try to get me fired, I can relax and catch my mistakes, like failing to ground the Bovie.)

  14. Patrice Spath January 25, 2010 at 1:18 am - Reply

    Getting back to the original post and the RCA process used at UCSF. I’m wondering how “lessons learned” from the RCAs get disseminated throughout the organization. Often similar processes are in need of fixing, but in many organizations (especially large teaching hospitals like UCSF) the word doesn’t get out and eventually a patient is harmed by the problematic process located in a different unit.

  15. Bob Wachter January 25, 2010 at 4:16 am - Reply

    Yes, Patrice — I see this as the major challenge. The process is quite good at identifying and fixing the root causes of the adverse event itself, but not nearly as good at identifying and fixing similar error-prone areas scattered around the 600-bed academic medical center, nor in “getting the word out.” I’d be interested to see if any readers are at an organization that has figured this one out — I think it is really tough.

    • Bob Latino July 3, 2012 at 12:28 pm - Reply

      I am interested to know whether UCSF uses its RCA process proactively. In my experience in healthcare, most RCA efforts are fueled strictly by reactive, regulatory drivers (meeting minimal requirements). This conditions organizations to use RCA only after bad things happen.

      As a career RCA investigator, I find the greater benefit of RCA is when it is used proactively in conjunction with reactive efforts. For example, why couldn’t/shouldn’t RCA be applied to legitimate FMEA results, so we learn why certain identified risks are unacceptable? Why couldn’t/shouldn’t we look at the chronic failures that happen so often (though they do not rise to the level of a Sentinel Event) that we develop workarounds to compensate for them? I find that these chronic failures, when left unattended, become contributing factors to the more severe sporadic or acute events.

      With respect to the dissemination of RCA results across a system for lessons learned, there are RCA knowledge management systems that tag certain individuals to receive a summary RCA report on any incident type that may affect their working environment. Also, educators could be included in this distribution list to aid in their internal educational programs.

      Lastly, I have found that creating a short library of summary RCA case study videos (keeping each video to 7-9 minutes) provides a more interesting format for delivering lessons learned to individuals, as well as for use in educational programs. Standard MS tools exist to do this with no additional expense, just the time of an individual to narrate the case using their desktop and a microphone.

      Is anyone else using RCA proactively or disseminating their RCA results in this manner?

  17. Blogfan February 10, 2011 at 6:27 am - Reply

    What is happening with the Immunogenetics Lab investigation? They are still using unlicensed personnel to perform the high-complexity work, which is against state law.

About the Author: Bob Wachter

Robert M. Wachter, MD, is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.
