Role Models of Diagnostic Excellence: Goop Dhaliwal and the Car Talk Guys

November 4, 2011

In a “Clinical Problem Solving” session at my annual Hospital Medicine conference last week, I presented a fiendishly hard case to Gurpreet Dhaliwal, a UCSF associate professor of medicine based at our San Francisco V.A. You can imagine how hard this is for the discussant: he’s hearing a case for the first time, absorbing and processing scores of facts in real time, while simultaneously praying that he hasn’t missed something obvious (“Oh, I spaced on the fact that his hematocrit fell from 36 to 17. Sorry.”). Meanwhile, the 650 audience members are tracking every thought, hoping to learn something about the art of diagnostic reasoning while also being entertained, in a bullfighting-audience kind of way (and the speaker knows that some of them are undoubtedly rooting for the bull). At the end of the case, the discussant is forced to put a nickel down on a single diagnosis.

While I enjoy moderating these sessions, the thought of being on the receiving end of one is my own vision of hell.

The Wachter-Dhaliwal show has become an annual ritual at my conference. This year’s case concerned a middle-aged Indian man with six months of relapsing and remitting fever, an extensive travel history, and a positive PPD—in short, lots of clues pointing toward tuberculosis or another indolent infection. With each new piece of data, Dhaliwal—known by his friends and students simply as Goop—described his thinking, including how he was guarding against being ensnared in his own cognitive biases. “We hear that the patient is of Indian origin, but he was born and raised in the U.S., so I’m immediately on watch for the tendency to be overly swayed by his country of origin.”

An MRI scan showed diffuse marrow infiltration throughout the spine. Here Goop demonstrated the importance of using feedback to improve one’s diagnostic performance. “I was presented a case like this a few months ago and said it was probably a disseminated infection. But the ID consultant told me afterwards that this kind of diffuse infiltration on MRI is very unusual for infection, which tends to be patchier, and more likely to be tumor.”

The patient later developed pancytopenia, leading to a bone marrow biopsy—which was essentially normal. Rather than being overly swayed by this result, Goop’s immediate response was that such biopsies are often negative in hematologic malignancies. And so, when I pressed him for his “final answer,” Goop declared that he would repeat the bone marrow biopsy (“it’s often a good idea to repeat a high yield test that has a high rate of false negatives”) and, in light of the MRI appearance, guessed that the patient had a hematologic malignancy.

The repeat biopsy showed Hodgkin’s lymphoma. Bingo.

The audience was wowed, not just because Goop got it right, and not just by his vast fund of knowledge. (I jokingly accused him of making up the disease “Chikungunya,” which was on his differential diagnosis list. He promptly gave a textbook description of the rare mosquito-borne viral illness.) Rather, Goop impresses audiences with his poise, his humor, his ability to articulate his real-time thinking in an accessible way, and—most of all—his humility.

One of the reasons I enjoy these sessions so much is that they support my overall goal of ensuring that diagnostic decision-making has its proper place in the larger field of patient safety. About five years ago, I was concerned that a growing emphasis on existing quality and safety measures—door-to-balloon time, beta-blocker use, readmission and pneumovax rates—would lead to a dumbing down of medical training. This, it seemed to me, might be the natural result of schools and healthcare systems deemphasizing diagnostic accuracy as they struggled to make room for process-oriented quality activities that, frankly, can be mastered by anybody armed with a good system.

Mind you, I have also been pushing the importance of systems for years and believe passionately in their role in improving safety. But when it comes to great systems vs. smart doctors, this is an “and,” not an “or.” We need both.

I’ve been gratified by the shift since several safety leaders have joined me in pounding this particular drum. A number of books on diagnostic reasoning have been published, most prominently by Groopman and Sanders, along with bestsellers on the more general issue of cognitive biases (such as Daniel Kahneman’s new book, Thinking, Fast and Slow). The fourth annual Diagnostic Errors in Medicine conference was held last month in Chicago, and was reportedly a smashing success. A growing number of researchers have published widely on topics that include new ways of measuring misdiagnoses, the value of diagnostic checklists, and the hazards of heuristics.

All of this activity is having an effect. At our M&M conference last week back at UCSF Medical Center, I presented a really tough case to the residents. One of them, Gene Quinn, offered a potential diagnosis and then said, “but I’d be careful about anchoring too heavily on this diagnosis”—illustrating how today’s top residents naturally engage in meta-cognition (“thinking about one’s thinking”) and are on the lookout for their own cognitive biases (his concern was for “anchoring”—getting stuck on one answer). These would have been foreign concepts to trainees as recently as 2005.

Since diagnostic reasoning isn’t likely to be fixed by systems (though good IT-based decision support will help), it is the most personal area in the safety field. This makes it particularly important that there be role models for aspiring diagnosticians to emulate. Gregory House, MD is impressive, but he’s a) a pompous ass, b) a drug addict, and c) not real.

After watching him do his thing year after year, my vote for role model goes to Goop Dhaliwal.

Goop approaches diagnostic reasoning the way an Olympic athlete approaches his or her event. Sure, he reads journals and keeps up with clinical medicine. But it is his intentional approach to improvement that is particularly awesome.

Goop recently contributed a segment on diagnostic reasoning to a series on diagnostic errors I developed for the physician portal QuantiaMD. In it, he distinguished between becoming “experienced” and “expert.” The same principles that make one an expert at golf, writing, or playing the cello apply to diagnostic reasoning. While one can become experienced by simply seeing more cases, one becomes expert only by engaging in activities specifically designed to improve performance. They are:

Progressive problem solving: The habit of reformulating routine work to offer your mind a challenge even when you don’t have to. For example, when seeing a patient with cellulitis—a diagnosis with which you’re very comfortable—the progressive problem solving approach might involve asking yourself, “Could I explain to a colleague why I chose not to cover MRSA?” or “Can I list two mimics of cellulitis?”

Seeking feedback: Physicians interested in improving diagnostic reasoning need a systematic way of gathering feedback on their performance. By avoiding what Goop calls the “no news is good news” phenomenon—learning from both correct and incorrect diagnoses—the expert diagnostician is able to recalibrate over time.

Simulation training: The key here is to make reading interesting, by reviewing tough clinical cases and treating them as simulation exercises. For example, one might review a clinical problem-solving case in the New England Journal and resist the temptation to look ahead for clues or the final answer.

Deliberate practice: Top performers don’t improve their skills by relying on experience alone. Rather, they deliberately identify skills they care about and develop short-term plans to improve in those areas. For example, one might focus on improving dermatology skills by taking photos of every unusual rash for a month and running them by a friendly consulting dermatologist.

Although Goop is my hero when it comes to diagnostic reasoning, in the September 7th issue of JAMA he described his own diagnostic heroes—two men in Boston with decades of experience in their field.

“They are presented every conceivable problem related to their specialty,” Goop wrote. “They solve cases by history alone. They laugh a lot and clearly enjoy what they are doing. And not only do they demonstrate superb diagnostic acumen, but they also model many of the ACGME core competencies.”

Who are these brilliant diagnosticians? Not physicians, it turns out, but rather hosts of a popular radio show, National Public Radio’s Car Talk.

The article, cleverly titled “The Mechanics of Reasoning,” is a wonderful analysis of how Tom and Ray Magliozzi handle their weekly calls from listeners, who…

ask about everything from car-related problems that a mechanic could reasonably be expected to solve (“The sound when I go downhill is kind of like two rocks being ground inside a Cuisinart.”) to issues that are more tangential (“Can I drive in the car pool lane if I’m alone in my car but pregnant?”). Occasionally, the hosts have to act as mediators in car-related disputes, eg, “My husband is teaching the kids to hot-wire the family minivan; I think it’s a bad idea. Who’s right?”

Goop played on this analogy to highlight how the Car Talk guys expertly engage in the diagnostic process, virtually all their steps mirroring those of expert physician-diagnosticians. He made the crucial point that the fact that Tom and Ray are talking about cars might even make it easier for aspiring MD diagnosticians to learn about their own craft: “The disentanglement from medical facts allows the student of reasoning to observe the process rather than obsess over the content (consider if this were a medical call-in show, Body Talk: ‘My husband makes this terrible noise . . .’).”

(Interestingly, I have the same feeling when I introduce healthcare audiences to safety concepts by using aviation analogies, or to the notion of high reliability by describing FedEx’s near-perfect performance. Using non-medical examples helps because it thwarts a clinician’s natural tendency to focus on the medical issues rather than on systems, processes, and conceptual models.)

Checklists, computerized prescribing, and time outs are crucial ways to prevent medical mistakes. But if the entire focus of the safety field is on these rote processes and system changes, we will be missing a major source of errors and harm: diagnostic errors. To elevate diagnostic reasoning to its rightful place, while we surely need more research and analysis, I believe that nothing is as important as having role models: physicians who prize diagnostic accuracy, train themselves to be better tomorrow than today, and can humbly teach this skill—and the joy of diagnostic reasoning—to future generations.

Physicians like Goop Dhaliwal.


  1. Gene Spiritus November 4, 2011 at 8:51 pm - Reply

    For those of us who trained before MRIs, CTs, ultrasound, and echocardiograms, making rounds with physicians like Dr. Dhaliwal was perhaps the high point of our education. I still remember Dr. Nadas, a pediatric cardiologist, describing a murmur at the bedside with a stethoscope, or Dr. Derek Denny-Brown, the neurologist, making a complex neurologic diagnosis with his hammer and pins. Now when you see a cardiologist they might not even have a stethoscope, and neurology consultants ask what the MRI shows before they come to see the patient. Unfortunately, that is what a lot of our residents are growing up with.

  2. Menoalittle November 5, 2011 at 3:23 am - Reply


    Goop is a thoughtful clinician. Does he do the clicking…or do the residents and paraprofessionals do the clicking?

    You state: “Checklists, computerized prescribing, and time outs are crucial ways to prevent medical mistakes”.

    Computerized prescribing?? Where is the data, Bob, that computerized prescribing prevents medical mistakes? Recent data show they are no better than paper. Have these devices been approved by the FDA?

    (G)oops! Here are a few thousand of the reasons why these devices cannot be trusted in their present form. Vendors other than Siemens have sold similar defective devices. What would Goop say?

    “Some 2,000 patients of the Lifespan hospital group were discharged with incorrect prescriptions over the past 9 to 15 months because of a software glitch… and the software vendor, Siemens, is notifying hospitals elsewhere in the country, according to Cooper.”

    Best regards,


  3. Mark Neuenschwander November 5, 2011 at 6:52 pm - Reply

    Thanks, Bob, for a great article. As you know, I’m not a physician. But a specific line jumped off the page and resonated in my mind and heart:

    “meta-cognition (“thinking about one’s thinking”) and are on the lookout for their own cognitive biases (his concern was for “anchoring”—getting stuck on one answer)”

    I realize your intent is to promote such thinking in the diagnostic process but I believe it is an approach that should be applied to all of our thinking, when we are “diagnosing” other issues of import in life.

    Like I said, thanks.

  4. Apurv Gupta November 6, 2011 at 12:31 am - Reply

    Love the article, Bob, and the way in which you and Goop demonstrate the value of improving one’s diagnostic acumen over time in order to improve quality and patient safety.

    At the risk of sounding too much like a “systems” person: is there a way to measure the degree of diagnostic accuracy? Perhaps this would give not only the clinicians a mechanism by which to objectively gauge their acumen, but also give the policymakers, regulators, insurers, and public confidence in the clinical skill rather than just the process of care.

  5. Bob Wachter November 6, 2011 at 12:42 am - Reply

    Thanks, Apurv — the difficulty in measuring diagnostic accuracy (short of performing an autopsy) is one of the main reasons that diagnostic errors and diagnostic accuracy don’t get the attention they deserve. An interesting recent study by Singh and colleagues found that electronic trigger tools were reasonably accurate in identifying outpatient diagnostic errors.

    This is a start — but the science of measuring diagnostic accuracy (at least while patients are alive) remains immature. As long as it stays that way, it will be difficult for diagnostic errors to compete with more measurable safety targets such as healthcare-associated infections for attention and resources.

  6. Drone November 6, 2011 at 1:34 am - Reply

    Thanks for the great discussion. A very serious and important topic.
    Unfortunately, I am reminded of a silly incident from a few years back.
    I was asked to discuss “being a doctor” with a class of third graders. I gave them a few complaints (like ‘tummy ache’) and asked them to help me figure out what might be going on (aided by some old Netter drawings). I was completely surprised by their ability to generate a differential (at least to the system level of ‘something wrong with stomach’ or ‘something wrong with kidney’ or ‘something wrong with ____’).

    Cutest of all, one of the Netter pictures showed a gall bladder, and when I presented it, one student exclaimed, “Oh, now I know what’s going on; there’s something wrong with his gizzard!”

  7. Chris W. November 6, 2011 at 1:56 am - Reply

    Readers of this post should read Daniel Kahneman’s recent article in the New York Times, adapted from his book (mentioned in the post):



    “The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?”

    Some background:

    The robust beauty of improper linear models in decision making.
    Dawes, Robyn M.
    American Psychologist, Vol 34(7), Jul 1979, 571-582. doi: 10.1037/0003-066X.34.7.571

  8. Lincoln Weed November 7, 2011 at 3:37 am - Reply

    “Role Models of Diagnostic Excellence” is concerned with improving clinical judgment. But everyone’s judgment depends on the information taken into account. The uninformed judgment of a genius may be inferior to the informed judgment of a lesser mind. Informing judgment is the first step to improving it. The question thus arises — what is the optimal approach to informing judgment?

    The human mind, if left to its own devices, inevitably uses judgment in becoming informed. For example, the doctor faced with an unexplained problem to diagnose uses personal knowledge and judgment for initial information gathering (deciding what possible diagnoses are worth considering for the patient and what data should be gathered to assess the diagnostic possibilities). But this exercise of judgment compromises the diagnostic process at its foundation. Our cognitive weaknesses and vulnerabilities mean that judgment cannot be trusted to inform itself.

    This reality suggests that the diagnostician should not rely on personal judgment when initially assembling relevant information for decision making. Instead, the diagnostician should rely on external information tools and standards of care. It is entirely feasible to bypass personal judgment in this way. Initial information gathering for diagnostic purposes can be performed as a simple process of association, that is, matching patient data with associated medical knowledge. Computer tools handle this initial process far more reliably and efficiently than human judgment. Once this initial process takes place, human judgment becomes more trustworthy and powerful.

    These simple points and their implications are addressed in a new book by Dr. Larry Weed (my father). Entitled Medicine in Denial, the book grew out of more than 50 years of development and clinical experience in finding new approaches to medical practice. Interested readers who are not familiar with this background may wish to look at a December 2005 article in The Economist. Readers interested in searchable electronic access to the book should contact [email protected].

  9. Bob Wachter November 8, 2011 at 12:35 am - Reply

    In addition to the published comments on the site, I often receive lots of off-line comments, and this post generated more than most, all positive (thanks!). One of the most interesting was a note from Jason Maude, the founder of Isabel. Isabel is a computerized diagnostic tool — one enters signs, symptoms, lab tests, and demographics and the program spits out a prioritized differential diagnosis list. I’ve played around with Isabel and have been quite impressed with it (I have no financial relationships with the company, FYI).

    When Jason entered “relapsing fevers,” “MRI diffuse marrow infiltration,” and “pancytopenia,” the correct answer (Hodgkin’s disease) popped up high on the list (second, after CMV infection). I would have liked to see what happened if “positive PPD” was also added to the list — I’m guessing TB would have been very high, illustrating that the output of any diagnostic engine (computer or human) depends on what you put into it.
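    For readers curious about the mechanics, here is a toy sketch of that last point — not Isabel’s actual algorithm, and with disease–finding weights that are entirely invented for illustration — showing how a simple ranked-differential engine reorders its output when one more finding is entered:

    ```python
    # Toy ranked-differential sketch. The weights below are invented for
    # illustration only; a real engine like Isabel is far more sophisticated.
    PROFILES = {
        "Hodgkin's lymphoma": {"relapsing fevers": 2,
                               "diffuse marrow infiltration": 3,
                               "pancytopenia": 2},
        "CMV infection":      {"relapsing fevers": 2,
                               "pancytopenia": 2},
        "Tuberculosis":       {"relapsing fevers": 2,
                               "positive PPD": 5,
                               "pancytopenia": 1},
    }

    def differential(findings):
        """Score each disease by summed weights of matched findings; rank descending."""
        scores = {disease: sum(w for f, w in profile.items() if f in findings)
                  for disease, profile in PROFILES.items()}
        return sorted(scores, key=scores.get, reverse=True)

    without_ppd = differential({"relapsing fevers",
                                "diffuse marrow infiltration",
                                "pancytopenia"})
    with_ppd = differential({"relapsing fevers",
                             "diffuse marrow infiltration",
                             "pancytopenia",
                             "positive PPD"})
    ```

    With the first three findings, Hodgkin’s tops this toy list; add “positive PPD” and TB jumps to first — the same garbage-in/garbage-out point made above, in miniature.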

    Not everyone will be lucky enough to have access to Goop Dhaliwals, and so computerized diagnostic tools, like Isabel, will be important resources for the rest of us.

  10. Louis Verardo, MD November 8, 2011 at 2:54 pm - Reply

    Very much enjoyed reading this posting. I was reminded of an incident from 25 years ago, when I was a young faculty person in our Family Medicine residency program responsible for setting up monthly Grand Rounds in Clinical Medicine. I had invited a seasoned internist from Long Island Jewish Medical Center to be our guest clinician at the case presentation, and I asked how I might get the case summary over to him before the date in question. To which he quietly (and without arrogance) suggested that it would be more “fun” to get a look at it in “real time” along with the rest of the audience and then “think out loud”.

    I thought that was about the classiest and most collegial attitude I had ever witnessed, and his performance was very well received by my colleagues at our hospital. I am an older faculty member now, still teaching and seeing patients, and still striving to be like this extraordinary physician.

  11. Bob Wachter November 14, 2011 at 10:05 am - Reply

    My dear friend Abraham Verghese, professor at Stanford and acclaimed writer (most recently of Cutting for Stone), has been writing and speaking on the demise of the physical exam and what that means for patients and physicians. Abraham’s wonderful lecture on this topic at the 2011 TED conference in Scotland is here.

    Abraham and colleagues are surveying physicians seeking stories regarding patient safety problems that arose because of errors/oversights in the physical exam. If you’re interested in helping, please take the short (11 question) survey here:

    Thanks very much for considering this.

    — Bob


About the Author:

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.

