The Crash of Air France 447: Lessons for Patient Safety

By Bob Wachter | December 31, 2011

From the start of the patient safety movement, the field of commercial aviation has been our true north, and rightly so. God willing, 2011 will go down tomorrow as yet another year in which none of the 10 million trips flown by US commercial airlines ended in a fatal crash. In the galaxy of so-called “high reliability organizations,” none shines as brightly as aviation.

How do the airlines achieve this miraculous record? The answer: a mix of dazzling technology, highly trained personnel, widespread standardization, rigorous use of checklists, strict work-hours regulations, and well-functioning systems designed to help the cockpit crew and the industry learn from errors and near misses.

In healthcare, we’ve made some progress in replicating these practices. Thousands of caregivers have been schooled in aviation-style crew resource management, learning to communicate more clearly in crises and tamp down overly steep hierarchies. Many have also gone through simulation training. The use of checklists is increasingly popular. Some hospitals have standardized their ORs and hospital rooms, and new technologies are beginning to catch some errors before they happen. While no one would claim that healthcare is even close to aviation in its approach to (or results in) safety, an optimist can envision a day when it might be.

The tragic story of Air France flight 447 teaches us that even ultra-safe industries are still capable of breathtaking errors, and that the work of learning from mistakes and near misses is never done.

Air France 447 was the Rio de Janeiro to Paris flight that disappeared over the Atlantic Ocean on June 1, 2009. Because the “black box” was not recovered during the initial searches, the only clues as to how an Airbus A330 could plummet into the sea were 24 automatic messages sent by the plane’s flight computer to a computer system in Paris used for aircraft maintenance. The messages showed that the plane’s airspeed sensor had malfunctioned and that the autopilot had disengaged. With the black box seemingly unrecoverable (its acoustic pinger stopped transmitting after a few months, and the seabed near the crash site was more than two miles deep), the aviation industry steeled itself against the likelihood that the crash would remain a mystery forever.

Miraculously, in April 2011, a salvage boat recovered the plane’s black boxes, and their contents reveal precisely what happened on the night Flight 447 vanished, killing all 228 people on board. The most gripping article I read this year – in Popular Mechanics, which seemed like an unlikely place for high drama – reconstructs the events, with most of the narrative coming from the pilots themselves.

We now know that AF 447 was doomed by a series of events and decisions that hew perfectly to James Reason’s famous “Swiss cheese” model of error causation, in which no single mistake is enough to cause a catastrophic failure. Rather, multiple errors penetrate a series of relatively weak protections (the “layers of Swiss cheese”), ultimately causing terrible harm.

In a nutshell, the first problem – which began the tragic chain of errors – was the crew’s decision to fly straight into a mammoth thunderstorm, in an equatorial area known as the “Intertropical Convergence Zone,” where such storms are common. This may have been an example of what safety expert Edward Tenner calls a “revenge effect”: safer systems cause people to become complacent about risks and engage in more dangerous acts (the usual example is that safer cars lead people to drive faster). (My interview with Tenner is here.) We’ll never know why the pilots chose that route, but we do know that several other planes chose to fly around the worst of the storm that night.

Since commercial pilots are not permitted to fly more than eight hours consecutively, on this 13-hour flight senior pilot Marc Dubois left the cockpit for a nap about two hours into the flight. (The most chilling passage of the Popular Mechanics article: “At 2:02 am, the captain leaves the flight deck to take a nap. Within 15 minutes, everyone aboard the plane will be dead.”) This left the plane in the hands of the two co-pilots, David Robert, 37, and Pierre-Cédric Bonin, 32. Bonin, the least experienced of the three, took Dubois’ seat, which put him in control of the flight.

Now in the middle of the thunderstorm, the pitot tube, a four-inch gizmo that sits outside the plane underneath the cockpit and monitors airspeed, froze over, which caused the airspeed gauge to go blank. Even worse, robbed of its usual inputs, the plane’s autopilot disengaged, leaving the co-pilots to fly the old-fashioned way, but in the dark and without airspeed information.

Moments later, Bonin made a terrible – and, to many experts, inexplicable – decision: to pull up on the controls, lifting the plane’s nose and causing it to stall out in the thin air six miles up. (Note that the term “stall out” is misleading. As James Fallows points out in the Atlantic, the Airbus’s engines continued to work just fine; Bonin’s inappropriate ascent created an “aerodynamic stall,” in which the angle of the wings to the wind created insufficient lift to keep the plane airborne.) It will never be known exactly why Bonin did this, or why he continued to do it until it was too late; experts speculate that he may have been overwhelmed by the storm, the sound of ice crystals forming on the fuselage, and the two-second alarm that signaled the disengagement of the autopilot, leaving him to do something that today’s pilots don’t do very much: fly a plane by themselves outside of takeoff and landing.

The technology on modern planes is so sophisticated that these aircraft have become virtually crash-proof – assuming, that is, that the pilots don’t mess things up. There’s even a joke that says that a modern aircraft should have a pilot and a dog in the cockpit: the pilot to watch the controls and the dog to bite the pilot if he tries to touch the controls. While today’s jetliners can nearly fly themselves, these sophisticated technologies can have unintended consequences, just as they do in healthcare. As the Popular Mechanics and Atlantic pieces both explain, Air France 447 had several of them, each a layer of Swiss cheese.

First, the pilots may have assumed that 447 could not stall, because the Airbus’s computers are designed to prevent this from happening. The crew may not have realized that most of the built-in protections were bypassed when the plane flipped out of autopilot.

Second, on most commercial airliners, the right and left seat controls are linked; on such a plane, Robert would have been able to detect Bonin’s mistaken decision to lift the plane’s nose and correct it. For unclear reasons, the Airbus designers delinked the A330’s controls, which made it possible for Robert, and later the captain, Dubois (who returned to the cockpit as the plane was falling), to remain unaware of Bonin’s error until it was too late to fix.

Third, the technological sophistication of modern aircraft means that new pilots are no longer well trained in flying without the assistance of modern gadgetry. When the computers break down, many young pilots are at a loss. “Some people have a messianic view of software, meaning that it will save us from all our problems,” aviation safety expert Michael Holloway told PBS’s NOVA. “And that’s not rational, or at least it’s not supported by existing evidence.” Many older commercial airline pilots first earned their wings in the military, where they gained experience in flying manually, sometimes without power or while dodging hazards like mountains and missiles. Bonin may have erred because he hadn’t received sufficient training to ensure the correct response.

Even these problems might not have been enough to allow an intact modern jetliner to fly into the ocean. The interaction between the two co-pilots during the moments of crisis demonstrates remarkably poor communication, despite their training in crew resource management. Moreover, there was a marked lack of situational awareness, with everyone focusing on a few small details while ignoring a blaring cockpit alarm, which repeated the word “stall” 75 times before the plane crashed.

Bob Helmreich of the University of Texas is probably the leading expert on translating aviation safety practices to healthcare. He says, “We’ve seen accidents where people were actually too busy trying to reprogram the computer when they should have been looking out the window or doing other things.”

You can bet that since the discovery of the black boxes every commercial airline pilot in the world now knows what happened to Flight 447, and airlines and regulators such as the FAA have instituted new mandatory training requirements. A worldwide directive to replace the pitot tubes with more reliable sensors was quickly issued, and other technological fixes will be put in place as well. Aviation crashes are now so rare that those that do occur lead to rapid analysis and mandatory changes in procedures, technologies and training. Thankfully, this will make a repeat of AF 447 unlikely.

What are the lessons from this terrible tragedy for healthcare? Well, it certainly doesn’t mean that we should abandon aviation as a safety model. But the crash is a cautionary tale of the highest order. We need to ensure that our personnel have the skills to manage crises caused by the malfunction of technologies that they’ve come to rely on. We should continue to push crew resource management training and work on strategies to bolster situational awareness (I haven’t found anything better than the old House of God rule: “In a Code Blue, the first procedure is to take your own pulse.”). We need to redouble our efforts to promote realistic simulation training, and to build systems that allow us to learn from our mistakes and near misses so we don’t repeat them.

Those of us working in patient safety can only hope that one day our system approaches aviation’s safety record. When we do, we will congratulate ourselves for the lives we’ve saved, but the hard work will be far from over. James Reason, who calls safety a “dynamic non-event,” has pointed to the risks of complacency even in very safe systems. “If eternal vigilance is the price of liberty,” writes Reason, “then chronic unease is the price of safety.” The tragedy of Air France 447 teaches us that the quest for safety is never ending.


17 Comments

  1. Mark MacG December 31, 2011 at 11:55 am - Reply

    Another great blog, and much to learn from other industries. However, some of the comparisons with aviation seem a little unfair, and can be demoralising for healthcare staff. The equivalent event to a plane going down would be a hospital fire killing the majority of staff and patients – thankfully a rare event, due to fire safety.

    A truly fair comparison would include military and private aviation where crashes are more common (is that because of poorer safety management or more dangerous environments?) – just as we can’t only look at safety in our planned care patients.

    It would also include 30-day mortality, just as we do for the Hospital Standardised Mortality Ratio. How many people die from their PTE after flying, and do the airlines ever know about these cases?

    At the end of the day, airlines deal with a low risk population, and indeed screen out sick people from travelling. Straight comparisons will always make healthcare look poor. Which is not to say that we can’t learn from some of their approaches!

  2. Joel December 31, 2011 at 3:20 pm - Reply

    You didn’t consider the primary problem. Doctors and nurses don’t report the medical equivalent of plane crashes 93% of the time according to HHS and others (see http://www.patient-safety.com/Medical.Reporting.htm). Two thirds of the rest of the time they don’t report it accurately. Only 2% of adverse events are reported accurately in medicine. Whenever there is discussion of medicine learning from aviation, this most important and fundamental problem is ignored. So improvement is insubstantial if it exists at all, as the last decade has shown. Doctors and nurses subjectively interpret the evidence of their senses in self-serving ways and find nothing to report (ask any patient with iatrogenic injuries what was in the record about it). The lessons of aviation are of only cosmetic value until a way is found to get honest information about what happens in medicine. Until then, the Root Cause Analysis of aviation will save too few lives in medicine to measure no matter how much time and money is spent on it.

    Medicine is a jumbo jet without a black box disappearing every day without anyone knowing why.

  3. alan December 31, 2011 at 4:49 pm - Reply

    This is such an insightful presentation. The culture, I believe, is far more important than the technology, as is something that Bob did not mention: union representation.

    I used to be on staff of a hospital whose CEO controlled physicians and administrators through extraordinarily high salaries. This created a society of drones unwilling to question authority for fear of losing a salary that could not be replicated anywhere in the area, as well as a restrictive covenant that had been enforced stridently.

    In this case, the sociopathy of the CEO extended to the medical staff. There were numerous systems for quality assurance, but it never resulted in change, and was countermanded by the fiats of the CEO.

    Hopefully, this system will be investigated by the OIG and the local state board, but the JCAHO, despite numerous complaints from nurses who have witnessed horrors, has remained strangely passive.

    We are not even close to the aviation standard. It is time for us to exceed it.

  4. Alfredo Guarischi. MD December 31, 2011 at 6:08 pm - Reply

    Bob, this was not only a pilot slip. It was a systemic error. This is what happens in high reliability organizations most of the time. Flight AF447 had two “chief residents” but just one “senior staff” in the surgical block. That is not good in the OR or in the skies, but the industry calls it out. The shortest way to remove a tumor may damage the aorta, and if there is pressure, or many cases to be done, we may have big bleeding. Pilots are working harder than in the past. The BISS has a delay of at least 3 minutes, and the anesthesiologist must check other parameters. The new generation of pilots and doctors is not trained to use hands, ears, smell, and touch. We are training people to read numbers (not data). Pilots must do pilotage, clinicians have to use stethoscopes, and surgeons must at least know anatomy. Finally, the black boxes (we should have ours) were found not by miracle but because they were very important. Now we know that it was a pilot error. But we must remember that these people (at the sharp end) made mistakes and paid with their lives. There was no turbine failure or other industrial item, just normal people, now dead. The system may make more profit. Safety is a cultural issue. We cannot buy or rent it. We may err and have no damage. We may have damage and people look at it as a complication. Aviation is the benchmark in safety, but error is human. Technology is not a synonym for safety.

  5. John December 31, 2011 at 11:01 pm - Reply

    Bob as always you have great insight and a question is raised in my mind — how do we balance the need to learn to use the newest (and hopefully safest) technology against going “off autopilot” and being able to “fly manually” so to speak.

    The best example of this I can think of in healthcare is the placement of central lines. Today we have bedside ultrasound that allows much safer placement with fewer complications, yet is there not value in not only knowing the theory behind “manual” placement but also having done some of these placements during your training? I saw this change dramatically with the new surgical residents where I once worked — they always went for the portable ultrasound, while it was not uncommon for the senior resident to do a placement without the new technology (and sometimes look down on the junior for “relying” on technology too much). Now, in most circumstances, if I was the patient (or advocating for them) I would want the use of ultrasound.

    However, consider what would happen if the situation was such that an ultrasound was not feasible (e.g. equipment failure, natural disaster). If the person doing the procedure had no or little experience with “manual” placement, the tables are now turned — I would NOT want them doing the procedure. Yet how do we teach and expect a provider to be proficient in both? Is it fair to expect a patient to have to risk the complications of a central line placement without the use of ultrasound if the technology is readily available, just so the provider can gain experience in case they need to “fly manually,” so to speak?

    Follow-up questions then are how many manual placements should one do, and how should one maintain proficiency as the years pass from initial training? Would simulators alone be adequate or do they have to be done “live”? And what about the cost of maintaining proficiency in an older method that would likely be used infrequently, if ever — who pays for that?

    I think the question is where exactly the balance lies between learning and being able to use the latest technology and still being proficient enough to know what to do when you have to “fly manually,” while always keeping patient safety at the forefront.

  6. Brad F December 31, 2011 at 11:44 pm - Reply

    Bob
    Aside from the greater points as this post relates to healthcare, as the other commenters write, what most folks will overlook is how seamlessly you have woven your sources and compactly produced a wonderful and meaningful post.

    This narrative is experience and effort talking – yours – and you made it look easy. Thanks for the behind-the-scenes lifting – you compressed a lot, citations and all, and I hope this piece gets the eyes it deserves.

    Brad

  7. Rich Davis January 2, 2012 at 1:55 am - Reply

    Thank you for the post. Although not responsible for safety systems, as a hospitalist I am certainly involved with them and continuously on the periphery of improvement efforts. I have been very interested in systems, the assessment of systems, and improvement. The tragedy of AF447 captured my attention in 2009, as the preliminary data, later confirmed, indicated that the aircraft entered a high-altitude aerodynamic stall at approximately 38,000 ft, maintained a nose-up attitude, and fell for 4 min 23 sec until striking the water at a vertical speed of 10,912 ft/min. This is absolutely stunning, as the Airbus has safety systems unlike Boeing or any other aircraft. All modern aircraft have stall warning systems, but the A330 is far more sophisticated, having actual stall prevention systems that will override any pilot input that threatens the aircraft (the concept of ‘protection laws’ within the software). The activation of protection laws and NOT following pilot input has repeatedly saved lives (e.g.: “… but for the last few seconds of the glide, with Sullenberger’s stick fully back, the computers intervened and gently lowered the nose to keep the wings flying.”)1

    The stunning element in the AF447 loss is how an aircraft that is technologically almost incapable of entering an aerodynamic stall could actually enter this disastrous flight configuration and, even more stunning, how it is that the pilots never recovered. There are some hard lessons here about how humans interact with computers, automation and safety systems. There are also some hard lessons here about assumptions; assumptions made by operators and assumptions made by designers.

    The stall warning alarms are intended to warn pilots that they are approaching, or entering, a stall configuration. Although the AF447 stall warning was triggered for almost a full minute, once the forward speed of an Airbus falls below a certain threshold, the stall warning shuts off, as the speed interpretations are not reliable and an aircraft at or below this threshold is generally in the process of landing and no longer in need of an alarm. Unique to this event was that the pitot tubes temporarily iced up at high altitude. When this happened, speed assessment became impossible and the stall protection system was automatically disengaged, as it is not reliable without airspeed information. From this time on, the controls will do something they would never do under normal circumstances: they will FOLLOW pilot inputs without any override of potentially dangerous inputs. As with Sullenberger mentioned above, I have to wonder if the pilots had come to rely on the software protection system override and were therefore less concerned about extreme control inputs, being mistakenly confident that the computer would intervene.

    I have to wonder if the pilots had ever experienced a high-altitude disengagement of the stall protection system and a subsequent stall. I wonder if the pilots really knew, in a deep and concrete way, that if the plane ever entered a complete stall and lost sufficient forward speed, the stall alarms would SHUT OFF. I wonder whether, if the pilot maintained erroneous input (full back on the stick) to the point of loss of forward speed and thus silenced the stall alarm, this could reinforce the deadly behavior, as the pilot could now mistakenly think he’s no longer in a stall. Could it be that the pilots interpreted the intermittent absence of the stall alarm as an indication the plane was once again flying, when in fact it was falling? An engineering/design assumption I also wonder about is that pilots need to know when they are approaching a stall so they can respond accordingly, but once in a stall, an alarm is no longer needed as a stall is usually an obvious aerodynamic state . . . did the designers who made this implicit assumption clearly communicate it to the pilots in a solid and unshakably memorable way?

    The Airbus has side-mounted joysticks rather than mechanically interconnected dual yokes like other modern planes. This is likely something that evolved from single-seat fighter planes, where a side mount frees up central space and the concern of dual input is a non-issue. Although Airbus experimented with tactile feedback between the two sticks to increase situational awareness of unintended dual inputs from both pilots, they finally settled on an aural alert along with the illumination of a “dual input” warning light. I see this as a design problem, as mechanically interconnected yokes will self-resolve if one pilot pushes while the other pulls, because each pilot is forced into momentary awareness of what the other is doing. With the Airbus, the absence of “feel” means the pilot not flying never really knows, moment to moment, what the pilot flying is doing. The aural alert for dual input was inadequate in a setting of enormous stress, time compression, numerous other aural alarms and finite attentional resources; this (aural) cognitive input channel was simply maxed out, and a tactile input could have conveyed information via an alternative and less overwhelmed input channel to the brain of the other pilot.

    The captain left the cockpit to take a nap without ever clearly designating which copilot was responsible for the aircraft. I suspect this diffusion of responsibility only added to the chaos of those final minutes, as there wasn’t a clear declaration of who was in command, which led to both copilots frantically trying things in an uncoordinated way.

    To me, this four-and-a-half-minute nightmare most closely relates to the practice of hospital medicine in a code situation. There is much literature about physician decision-making, but very little about a crucial antecedent step: physician sense-making. In a code/arrest situation, the responder must first make sense of what’s happening, and only then can move on to the next step of making decisions about what to do. I believe simulation is crucial to developing assessment and decision-making skills under time pressure. A clear and unambiguous leader can declare an assessment of a patient’s current state, can declare a differential for this state, and can then begin a stepwise process of working the problem. In a broader sense, I am certain there are lessons here on the dangers of human-computer interaction, and I will be thinking more about this in the future.
    Thanks again for your efforts.
    1. See William Langewiesche, Fly by Wire: The Geese, the Glide, the ‘Miracle’ on the Hudson (2009).

  8. Menoalittle January 2, 2012 at 2:51 am - Reply

    Bob,

    Based on your superb analysis, applying it to highly EHR-wired hospitals: when one considers the meaningful complexity built into CPOE and CDS devices just to order an aspirin, an insulin drip, and the like, there will continue to be deaths facilitated by the very devices that are supposed to improve safety.

    The aftermarket surveillance of CPOE and CDS devices is nil and pre-market assessment is zero. When entire systems crash and the patients’ records disappear, what happens to the patients?

    No one is recording the outages, if even for 2 minutes. No one is talking except for comments emanating from the corner suites declaring that patient care was not affected by the crash, as reported here:

    http://www.post-gazette.com/pg/11358/1199140-53.stm?cmpid=localstate.xml

    As was seen in the Airbus crash, when the device malfunctioned due to weather (HIT devices also malfunction due to weather and random events), the users became distracted and their cognitive function impaired, which is one of many things that happen to EHR users in hospitals when the HIT infrastructure fails.

    Is there a place where researchers can obtain the data on all hospitals’ EHR outages? Even one minute is enough to lose orders and have to start over.

    Are the deaths and critical conditions of patients that occur within a week after an EHR crash investigated by those who are not financially conflicted to determine if the delays from the infrastructure failure are etiology? What are you doing in this regard at UCSF?

    Best regards,

    Menoalittle

  9. 999999999999999999% January 2, 2012 at 3:54 pm - Reply

    Putting the nose up resulted in the stall. Hmmm, administrators of hospitals, with their millions of dollars in compensation and sweetheart deals with HIT vendors, have their noses up, causing the stall in determining the true magnitude of the dangers and near misses of EMRs and their attachments.

  10. Geff McCarthy January 2, 2012 at 9:23 pm - Reply

    I’m an aviation safety expert, USAF pilot, MD, and retired hospital CEO. I have long pushed for inclusion of aviation culture in our hospitals, and have followed the AF447 accident with interest. A few brief responses to Bob’s excellent analysis and to the correspondents’ additions:
    1. Analogies to aviation safety reach only so far… There is a fundamental difference in human performance in hospitals and in the air: on the ground, the operator’s personal survival is not threatened; in the air it is. Decision paralysis is a real phenomenon that is experientially accepted, but cannot be experimentally demonstrated. When one’s life is truly in peril, cognition ceases. Cf. the Yerkes-Dodson performance curve: the endpoint is panic and paralysis.
    When training airline pilots in a simulator (I consult peripherally on this training), we attempt to introduce a “startle factor” to put additional stress on cognition and performance.
    2. The organizational preconditions are similar to the Chunnel train failures in 2009. Some of the Eurostars had snow filters installed, and performed as designed in the rare, fine, dry snow. Five of the rest failed in the Chunnel. Many of the Airbuses already had the FAA- and EASA-required heated angle of attack sensors installed; the accident aircraft did not. In both cases maintenance management procedures were inadequate.
    3. In the past, many airline pilots had prior military training. Now few do. In France there is a designated training school for airline pilots ab initio: no military training, no aerobatics, no unusual-attitude training. In a current experiment to teach stall recovery in a simulator, the lack of experience and training is alarming… many airline pilots cannot recover a simulator from a stall or roll at low altitude. But… all can be trained.
    4. I would respectfully disagree with Dr. Davis, whose commentary reveals considerable expertise, on the resuscitation situation. A clear leader is needed, one who is not compressing or defibrillating, etc., but who stands back and synthesizes the current state of the system. But… if that credible leader states definitively what his/her conclusion is, and directs others, CRM stops. Better that he/she should offer a hypothesis and challenge the team members to add/subtract their perceptions, then decide.
    5. EHR and CPOE failures are common. Early in the deployment of the VA VistA system, the potential for regression to paper, and for loss of situational awareness from a crash, was noted. A backup system was installed to capture current data in raw form. This raw data was easily convertible to orders, meds, etc. System disruption is minimal with this method.
    6. The comments appealing for manual-procedure proficiency are all valid. I have put in numerous central lines; I had no idea that an ultrasound could improve my performance. Good! Airline pilots, and I would argue anesthesiologists, do need occasional manual skill practice.
    7. Lastly, the Airbus tragedy followed a smaller, lethal version of the same: stall, and pull up instead of push down: Colgan Air in Buffalo, NY, a few years ago. Congress imposed minimum flight-time requirements for commuter airline crews as a result, but IMHO the time requirement is not nearly as good as precise demonstration of aircraft recovery skills, regardless of total flying time.

  11. Rober E Wilson January 3, 2012 at 1:00 pm - Reply

    After 40 years of flying, I still and always will subscribe to the #1 rule in aviation: first and foremost FLY THE AIRPLANE. Everything else –navigation, communication, etc. — is attended to after that.

    Modern aviation, from commercial operations to today’s computer-driven, all-glass instrument panels even in small aircraft, can become overwhelming. It’s called data diarrhea – an overload of information that can distract from that fundamental rule.

    Even a minor problem can quickly become a major problem. Just such a tragedy occurred on 29 Dec 1972, when the seasoned crew of an L-1011 on Eastern Airlines Flight 401 was distracted by a burned-out light bulb that should have indicated the nose gear was down and locked prior to landing. But they forgot to fly the airplane first and foremost, crashing into the Everglades near Miami with great loss of life.

    Presumably, that rule also will apply to OR environments and Code Blue procedures. Always focus on the first and foremost. Don’t be distracted by details. They come after that.

  12. JoanneConroyMD January 4, 2012 at 9:48 pm - Reply

    Great Post.
    Aside from the fact that there appear to be more questions than answers from an aviation perspective, there are several critical lessons for healthcare providers. I am an anesthesiologist… and vigilance was drilled into us beginning the first day of our training. Although our specialty has been described as 90% boredom and 10% horror, Roy Basch in the House of God was right… take your own pulse first. Although your neurons are firing rapidly in these situations, don’t move so fast that you miss critical pieces of information.

    I love the reference to lack of situational awareness, with everyone focusing on a few small details while ignoring a blaring alarm. I ran to a code once as a 4th-year student, and while three residents were looking at the monitor (who knows… figuring out an arrhythmia???), the Ambu bag lay on the pillow next to the patient’s slightly cyanotic facial features… unused.

    So healthcare has a lot of work to do… to begin to work as teams (discarding the role of physician as captain of the ship who is never challenged), to question ourselves and each other when things don’t seem right, and to become chronically non-complacent about how care is delivered to patients.

  13. Jan Krouwer January 5, 2012 at 1:54 pm - Reply

    This raises the question of recurrent training and testing. Even private pilots (who fly for recreation and can’t charge passengers) are required to undergo flight reviews and medical exams in order to keep flying. What recurrent training and testing are required for physicians?

  14. Chris Johnson January 11, 2012 at 6:51 pm - Reply

    I think the analogy to code situations is a good one. Simulations help, especially if one throws in mechanical and system failures. For example, the oxygen saturation monitor is unreliable, the monitoring leads don’t work, and the anesthesia/ventilation bag has a hole in it (or a broken seal of some sort). It is amazing how lost junior folks get when, as well as figuring out what is going on, they need to assess the patient with only a stethoscope, their hands, and their eyes.

  15. Matthias Maiwald August 28, 2013 at 12:39 am - Reply

    It is now often mentioned that medicine, in terms of patient safety, can learn a lot from the aviation industry. This is also echoed in the blog above, despite the Air France tragedy (which is a rare event). However, one issue makes me think. Among quite a number of differences between aviation and healthcare, there is a particular one that I find personally interesting. The aviation industry does not evaluate the usefulness of, and need for, particular safety measures by way of randomized trials — e.g. by planes being randomized and then exposed to one or another treatment or measure — in which one of the measured outcomes is whether planes are dropping from the skies. Yet the equivalent of exactly this kind of experiment constitutes the core of the belief system of modern evidence-based medicine.

    So how does the aviation industry determine what is useful and necessary? I don’t know exactly, since I am no aviation expert, but from what I can see, they seem to do this by extracting information from (multiple) empirical observations and accident analyses, by making reasonable, thoughtful inferences from scientific principles, including the principles of physics, and by applying common sense. If one attempted to do this — or admitted to doing it — in medicine, one would immediately face harsh criticism and be decried as “non-evidence-based”. What the airline industry appears to be doing ranks among the lowest possible sources of evidence in common evidence hierarchies (http://en.wikipedia.org/wiki/Hierarchy_of_evidence), meaning that in evidence-based medicine terms, this constitutes really “bad” evidence to act upon.

    Whether this difference in gathering and analyzing evidence between the two industries has any causal relationship to the differences in safety records is something that I do not know and cannot comment upon, but it is noticeable that an industry that acts upon such “bad” evidence generally achieves the much better outcomes.

  16. Bill Palmer July 21, 2014 at 11:02 pm - Reply

    Very interesting discussion. The parallels of hospital medical practice and aviation are many.

    As an airline pilot and long time instructor, I recognized there were many lessons in the Air France 447 accident that aircraft operators had to learn also.

    In response to that, I wrote “Understanding Air France 447” to explain the causes of, and answers to, accidents such as this. Originally written for my fellow airline pilots, the narrative includes enough background information that an aviation background is not necessary to fully grasp the lessons.

    It’s available as an ebook and paperback (216 pages, color) on Amazon, iBooks, Google Play, and other online retailers, or visit http://uderstandingAF447.com

  17. Wen Gen July 22, 2014 at 8:21 am - Reply

    Indeed, Bill’s book is the authority on AF447.
    His narrative clarifies the BEA’s report and, more importantly, provides his real-life recommendations for preventing another disaster.


About the Author: Bob Wachter

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.
