Patient Safety in the US and UK, Part II: Top-Down vs. Bottom-Up

September 24, 2011

In my last post, I discussed the role of physicians in patient safety in the US and UK. Today, I’m going to widen the lens to consider how the culture and structure of the two healthcare systems have influenced their safety efforts. What I’ve discovered since arriving in London in June has surprised me, and helped me understand what has and hasn’t worked in America.

Before I arrived here, I assumed that the UK had a major advantage when it came to improving patient safety and quality. After all, a single-payer system means less chaos and fragmentation—one payer, one regulator; no muss, no fuss. But this can be more curse than blessing, because it creates a tendency to favor top-down solutions that—as we keep learning in patient safety—simply don’t work very well.

To understand why, let’s start with a short riff on complexity, one of the hottest topics in healthcare policy.

Complexity R Us
Complexity theory is the branch of management thinking that holds that large organizations don’t operate like predictable and static machines, in which Inputs A and B predictably lead to Result C. Rather, organizations operate as “complex adaptive systems,” with unpredictability and non-linearity the rule, not the exception. It’s more Italy (without the wild parties) than Switzerland.

Complexity theory divides decisions and problems into three general categories: simple, complicated, and complex. Simple problems are ones in which the inputs and outputs are known; they can be managed by following a recipe or a set of rules. Baking a cake is a simple problem; so is choosing the right antibiotics to treat pneumonia. Complicated problems involve substantial uncertainties: the solutions may not be known, but they are potentially knowable. An example is designing a rocket ship to fly to the moon—if you were working for NASA in 1962 and heard President Kennedy declare a moon landing as a national goal, you probably believed it was not going to be easy but, with enough brainpower and resources, it could be achieved. Finally, complex problems are often likened to raising a child. While we may have a general sense of what works, the actual formula for success is, alas, unknowable (if you’re not a parent, trust me on this).

Understanding these differences is crucial because our approaches must match the types of problems at hand, and improving patient safety often involves dealing with complicated and complex problems and settings. A checklist may be a fabulous fix for a simple problem, but a distraction for a complex one. Enacting a series of rules and policies may seem like progress (it almost certainly does to the issuer) but may actually set us back if it stifles innovation and collegial exchange. Sometimes the best approach to a complex problem is to try an approach that seems sensible, measure the results (making sure workers feel able to speak truthfully and keeping ears to the train tracks for unanticipated consequences), and repeat this cycle over and over.

Appreciating the complexity of healthcare systems should not lead one to embrace anarchy or decide that rules are for wimps. “A somewhat surprising finding from research on complex adaptive systems,” observes organizational expert Paul Plsek, “is that relatively simple rules can lead to complex, emergent, innovative system behavior.” Atul Gawande expands on this point in The Checklist Manifesto, describing how the best checklists lead to improvements that go well beyond adherence to a few tasks—mostly by creating a limited number of high-level constraints and encouraging cross talk among frontline staff.

The bottom line from analyses of complex systems is that over-managing workers through boatloads of top-down, prescriptive rules and directives may be more unsafe than tolerating some degree of flexibility and experimentation on the front lines. It’s a message that can cause frustration, but those who don’t learn it seem to make the same managerial mistakes over and over again.

The Benefits and Risks of Centralized, Prescriptive Safety Standards
When the patient safety field launched, around the year 2000, both the US and UK needed to respond. Like typecast actors playing their parts for the umpteenth time, both countries followed their respective scripts: the UK favored central rules and the US favored, well, a mixture of this and that. These responses are deeply ingrained in our two countries’ cultural DNA.

In the US, the safety imperative that began with To Err is Human ran up against a leadership vacuum. No national organization was in a position to grab the safety ball and run with it. The Joint Commission filled this gap in part through its hospital accreditation work, as did the Agency for Healthcare Research and Quality (AHRQ) in research and education. But these organizations could not articulate a national strategy, nor did they have the power to enforce tough rules on their constituents (Joint Commission certification is voluntary, funded by the accredited hospitals, markedly limiting the accreditor’s degrees of freedom). Even as these organizations began to rise to the challenge, major gaps remained, and were filled by an alphabet soup of other stakeholders: physician certifying boards like ABIM, training program accreditors like ACGME, business coalitions like Leapfrog, non-profit organizations like the Institute for Healthcare Improvement, and state hospital associations. But there was no central authority to truly “own” patient safety.

Soon caregivers and hospital administrators were begging for “harmonization.” Translated: “We accept the fact that you, [Fill in the Blank], are going to boss us around on safety, but can’t you get your act together with the 10 other organizations doing the same thing?”

While I would have loved for a central authority to have made hand washing or prompt discharge summaries national standards, this unruliness had its virtues. Individual healthcare organizations—hospitals, specialty societies, multispecialty groups—had the space to develop their own safety programs without being overwhelmed by a huge compliance burden.

And good ones did just that. Over a few years, a stream of innovations—checklists, time-outs, debriefings, Executive Walk Rounds, trigger tools, new approaches to disclosure—bubbled up from front line clinicians, researchers, and managers, who had the freedom to try things out, see if they worked, and then disseminate them. This happy result only occurred because some clinicians gained skills in safety, were motivated to try new approaches, and were given some leash.

Contrast this with the UK, where the launch of the safety field occasioned lots of prescriptive rulings issued by the various tentacles of the National Health Service. Here, the instinct to embrace centralized solutions to important problems is facilitated by the country’s small size (I have to keep reminding myself that California is nearly 3 times larger than England in land mass and has a matching Gross “Domestic” Product—about $1.9 trillion), the centrally-controlled single-payer system, and a societal bias that often places the interests of the community over those of the individual.

Take the issue of emergency department door-to-floor time. In the US, we are under pressure to try to shorten this time, certainly a sensible goal. So the time is now being measured and reported internally, and may soon be publicly reported or even subject to incentives. In the UK, however, the NHS approached this issue by mandating a four-hour maximum ED door-to-discharge time (either home or hospital admission) in 2002. Hospitals that miss their four-hour target can be hit with major penalties. (I heard of one institution where a physician leader was fired for his inability to meet this benchmark.)

Is this good or bad? When safety standards are supported by strong evidence and we’ve sorted through the unexpected consequences, then centrally-decreed mandates are fine, propelling us toward safer care faster than a wishy-washy, pluralistic system. On the other hand, boatloads of top-down rules can, and I believe have, created a feeling among front-line staff here that safety is something the government tells us to do. It’s a guaranteed enthusiasm-sapper and innovation-stifler. As you can imagine, the four-hour rule has improved some things but has also generated tons of gaming and new problems.

Moreover, clinicians here view many of the NHS’s rules as overly politicized, and even a little silly. Virtually everyone I’ve met here has shared a favorite story of some safety rule whose genesis was a single bad case in a single hospital, where harm befell a friend or relative of a Member of Parliament. Poof: another national standard. These stories are told with bemused helplessness.

And let’s not forget about those pesky complex systems. I mentioned two posts ago that the program to computerize every English hospital has been a fiasco—it was completely bollixed from the top down, violating everything we know about change management in complex systems. (Yesterday, it was formally announced that the program will be taken off life support—after having burned through about $20 billion—but anyone following the story knew that it was DOA years ago.)

Less expensively, one hears that the initial phase of the WHO surgical checklist program—which UK hospitals are now required to adopt—has been a major struggle, largely because it arrived as a central mandate without much room for local adaptation or buy-in.

The problem isn’t limited to the relationship between the central NHS authorities and individual hospitals—the top-down instinct is marbled throughout the entire system. The Trust (hospital system) manager who spends her life receiving directives from the NHS is likely to use the same approach with her clinicians (and then lament that they don’t just follow the rules). And the government managers, of course, are those who have been promoted from senior leadership roles in healthcare systems, or vice versa. Once the tone is set this way, it is hard to change it: central authorities accustomed to wielding power have an awfully hard time parting with it willingly.

Top-Down or Bottom-Up: Finding the Sweet Spot
In the US, our individualism and mistrust of government causes us to resist central solutions, even to critical societal problems. When we’re lucky, this leaves space for grassroots engagement of and innovation by front line caregivers, and—perhaps—more robust solutions once they finally do emerge. All educators know the maxim, “If you tell your learners the answer, you may prevent them from learning it.” So it often is in patient safety.

On the other hand, America’s antipathy toward top-down directives permits wildly different rates of adoption of clearly effective practices, makes progress maddeningly slow (as every individual clinician and institution retains veto power over anything they don’t like) and contributes to massive disparities in quality—with far too many have-nots scattered among the haves.

Because of this, I see the US now moving in the UK’s direction, with a more prescriptive and top-down approach. You can see the early signs in Medicare’s increasingly aggressive use of transparency and value-based purchasing, and in the patient safety-related activities of various states (public reporting of “Never Events,” hospital inspections and fines, and some state laws in areas like MRSA screening and nurse-to-patient ratios). With several studies documenting our sluggish progress in patient safety, America’s patience with letting a thousand flowers bloom is ebbing. The gardener has arrived, and he’s carrying his pruning shears.

Interestingly, just as the US is sliding toward a more central and prescriptive line of attack, I see growing recognition in the UK of the limitations of the top-down approach, more appreciation of the importance of caregiver engagement, and stronger efforts to train physicians and other providers in leadership and safety skills. Just yesterday, a safety expert studying the UK’s surgical checklist program told me that some surgical teams have successfully adapted the checklist to their local environments, with promising results.

“The Americans can always be counted on to do the right thing… after they have exhausted all other possibilities,” famously observed Winston Churchill. In our world, it appears that both the Americans and the Brits are homing in on the right thing: creating systems that are prescriptive when they need to be, while allowing the wisdom and enthusiasm of front line workers to be nurtured and tapped in addressing the complex problems that dominate patient safety and healthcare quality.

And—for both countries—that’s progress.


  1. ffolliet September 24, 2011 at 9:39 am - Reply

    The issue of “wicked” problems, I think, is best addressed by understanding that there is NO perfect, universally accepted solution for the problem at hand, and that the idea of bricolage (locally constructed solutions) is a better way forward than assuming that there is one single, universally applicable solution.

  2. aadesmd September 24, 2011 at 5:04 pm - Reply

    Both of these “algorithms” require that all parties are heading toward the same goal. For example, I was part of a hospital medical staff where the CEO used these various mechanisms to instill fear and loathing among the medical and non-medical staff. One aspect was to apply the broader “disruptive physician” classification to those who pushed for adherence to quality mandates. I still like to think that both approaches should be led by physicians, and not CEOs whose desires may conflict with patient care and cost control. This type of sociopathy should be relegated to the trash heap.


  4. Menoalittle September 28, 2011 at 2:53 am - Reply


    I always appreciate your comments about Atul’s checklist. It is such a brilliant idea. Is it not provocative that a paper checklist has saved more lives than electronic medical records, both in the UK and the US? Do you think that health care professionals’ communication improves with complete reliance on CPOE?

    For those interested in a refresher on the UK HIT horlicks straight off the UK mainline IT blog, e-Health Insider:

    “The departing head of the NHS IT programme Richard Granger has said he is ashamed of the quality of some of the systems put into the NHS by Connecting for Health suppliers, singling (an American vendor name omitted) out for criticism.

    Going further than before in acknowledging the extent of failings of systems provided to some parts of the NHS – such as Milton Keynes – the Connecting for Health boss said: “Sometimes we put in stuff that I’m just ashamed of. Some of the stuff that (American vendor) has put in recently is appalling.”

    He said a key reason for the failings of systems provided was that (American vendor) and prime contractor (name omitted) had not listened to end users.”

    Bob, I am as appalled as Dick Granger that the same US vendors are pulling the same stunts in the US, and the top (White House level) is requiring doctors in the US to use the same systems that remain unfit for purpose.

    You are deceiving yourself to think that the US top has any interest in what the bottom thinks or needs.

    It behooves the US top, eg POTUS, to study the fiasco in the UK and deduce that the US is on a steeper more costly curve down.

    Best regards,


  5. Unsafe_AtAnyClick September 28, 2011 at 3:15 pm - Reply

    The impact of the NHS NPfIT on patient safety and outcomes has not been reported. Let the truth be known.

    NY Times coverage of the UK HIT fiasco is here:

  6. Sue Jaimes, RN October 5, 2011 at 9:41 pm - Reply

    There is an interesting safety commentary from UCSF that made it to the Wall Street Journal. It is about time that the unintended consequences of pay for quality receive some press:

  7. Rick October 20, 2011 at 12:51 am - Reply

    Top-down vs bottom-up is an interesting conundrum. Complexity theory talks about emergence, in which complex structures arise from simple pieces, fitting together in unpredictable ways. And problems would not be complex if there were predictable solutions. But what strikes me about the NHS vs private healthcare model is that there is sort of a trend for hospitals to give up independent status and join large for-profit systems. My experience, in a hospital that took that route, is that the private enterprise for-profit hospital corporation becomes more top-down, in some ways, than a single-payor system can be. And forcing the same protocols on a 20-bed hospital in Wyoming and a 200-bed hospital in Los Angeles can get fairly bizarre.


About the Author:

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.

