In a “Clinical Problem Solving” session at my annual Hospital Medicine conference last week, I presented a fiendishly hard case to Gurpreet Dhaliwal, a UCSF associate professor of medicine based at our San Francisco V.A. You can imagine how hard this is for the discussant: he’s hearing a case for the first time, absorbing and processing scores of facts in real time, while simultaneously praying that he hasn’t missed something obvious (“Oh, I spaced on the fact that his hematocrit fell from 36 to 17. Sorry.”). Meanwhile, the 650 audience members are tracking every thought, hoping to learn something about the art of diagnostic reasoning while also being entertained, in a bullfighting audience kind of way (and the speaker knows that some of them are undoubtedly rooting for the bull). At the end of the case, the discussant is forced to put a nickel down on a single diagnosis.
While I enjoy moderating these sessions, the thought of being on the receiving end of one is my own vision of hell.
The Wachter-Dhaliwal show has become an annual ritual at my conference. This year’s case concerned a middle-aged Indian man with six months of relapsing and remitting fever, an extensive travel history, and a positive PPD—in short, lots of clues pointing toward tuberculosis or another indolent infection. With each new piece of data, Dhaliwal—known by his friends and students simply as Goop—described his thinking, including how he was guarding against being ensnared in his own cognitive biases. “We hear that the patient is of Indian origin, but he was born and raised in the U.S., so I’m immediately on watch for the tendency to be overly swayed by his country of origin.”
An MRI scan showed diffuse marrow infiltration throughout the spine. Here Goop demonstrated the importance of using feedback to improve one’s diagnostic performance. “I was presented a case like this a few months ago and said it was probably a disseminated infection. But the ID consultant told me afterwards that this kind of diffuse infiltration on MRI is very unusual for infection, which tends to be patchier, and more likely to be tumor.”
The patient later developed pancytopenia, leading to a bone marrow biopsy—which was essentially normal. Rather than being overly swayed by this result, Goop’s immediate response was that such biopsies are often negative in hematologic malignancies. And so, when I pressed him for his “final answer,” Goop declared that he would repeat the bone marrow biopsy (“it’s often a good idea to repeat a high-yield test that has a high rate of false negatives”) and, in light of the MRI appearance, guessed that the patient had a hematologic malignancy.
The repeat biopsy showed Hodgkin’s lymphoma. Bingo.
The audience was wowed, not just because Goop got it right, and not just by his vast fund of knowledge. (I jokingly accused him of making up the disease “Chikungunya,” which was on his differential diagnosis list. He promptly gave a textbook description of the rare mosquito-borne viral illness.) Rather, Goop impresses audiences with his poise, his humor, his ability to articulate his real-time thinking in an accessible way, and—most of all—his humility.
One of the reasons I enjoy these sessions so much is that they support my overall goal of ensuring that diagnostic decision-making has its proper place in the larger field of patient safety. About five years ago, I was concerned that a growing emphasis on existing quality and safety measures—door-to-balloon time, beta-blocker use, readmission and pneumovax rates—would lead to a dumbing down of medical training. This, it seemed to me, might be the natural result of schools and healthcare systems deemphasizing diagnostic accuracy as they struggled to make room for process-oriented quality activities that, frankly, can be mastered by anybody armed with a good system.
Mind you, I have also been pushing the importance of systems for years and believe passionately in their role in improving safety. But when it comes to great systems vs. smart doctors, this is an “and,” not an “or.” We need both.
I’ve been gratified by the shift since several safety leaders have joined me in pounding this particular drum. A number of books on diagnostic reasoning have been published, most prominently by Groopman and Sanders, along with bestsellers on the more general issue of cognitive biases (such as Daniel Kahneman’s new book, Thinking, Fast and Slow). The fourth annual Diagnostic Errors in Medicine conference was held last month in Chicago, and was reportedly a smashing success. A growing number of researchers have published widely on topics that include new ways of measuring misdiagnoses, the value of diagnostic checklists, and the hazards of heuristics.
All of this activity is having an effect. At our M&M conference last week back at UCSF Medical Center, I presented a really tough case to the residents. One of them, Gene Quinn, offered a potential diagnosis and then said, “but I’d be careful about anchoring too heavily on this diagnosis”—illustrating how today’s top residents naturally engage in meta-cognition (“thinking about one’s thinking”) and are on the lookout for their own cognitive biases (his concern was for “anchoring”—getting stuck on one answer). These would have been foreign concepts to trainees as recently as 2005.
Since diagnostic reasoning isn’t likely to be fixed by systems (though good IT-based decision support will help), it is the most personal area in the safety field. This makes it particularly important that there be role models for aspiring diagnosticians to emulate. Gregory House, MD is impressive, but he’s a) a pompous ass, b) a drug addict, and c) not real.
After watching him do his thing year after year, my vote for role model goes to Goop Dhaliwal.
Goop approaches diagnostic reasoning the way an Olympic athlete approaches his or her event. Sure, he reads journals and keeps up with clinical medicine. But it is his intentional approach to improvement that is particularly awesome.
Goop recently contributed a segment on diagnostic reasoning to a series on diagnostic errors I developed for the physician portal QuantiaMD. In it, he distinguished between becoming “experienced” and “expert.” The same principles that make one an expert at golf, writing, or playing the cello apply to diagnostic reasoning. While one can become experienced by simply seeing more cases, one becomes expert only by engaging in activities specifically designed to improve performance. They are:
Progressive problem solving: The habit of reformulating work to offer your mind a quick challenge when you don’t have to. For example, when seeing a patient with cellulitis—a diagnosis with which you’re very comfortable—the progressive problem solving approach might involve asking yourself, “Could I explain to a colleague why I chose not to cover MRSA?” or “Can I list two mimics of cellulitis?”
Seeking feedback: Physicians interested in improving diagnostic reasoning need a systematic way of gathering feedback on their performance. By avoiding what Goop calls the “no news is good news” phenomenon—learning both from correct and incorrect diagnoses—the expert diagnostician is able to recalibrate over time.
Simulation training: The key here is to make reading interesting, by reviewing tough clinical cases and treating them as simulation exercises. For example, one might review a clinical problem-solving case in the New England Journal and resist the temptation to look ahead for clues or the final answer.
Deliberate practice: Top performers don’t improve their skills by relying on experience alone. Rather, they deliberately identify skills they care about and develop short-term plans to improve in those areas. For example, one might focus on improving dermatology skills by taking photos of every unusual rash for a month and running them by a friendly consulting dermatologist.
Although Goop is my hero when it comes to diagnostic reasoning, in the September 7th issue of JAMA he described his own diagnostic heroes—two men in Boston with decades of experience in their field.
“They are presented every conceivable problem related to their specialty,” Goop wrote. “They solve cases by history alone. They laugh a lot and clearly enjoy what they are doing. And not only do they demonstrate superb diagnostic acumen, but they also model many of the ACGME core competencies.”
Who are these brilliant diagnosticians? Not physicians, it turns out, but rather hosts of a popular radio show, National Public Radio’s Car Talk.
The article, cleverly titled “The Mechanics of Reasoning,” is a wonderful analysis of how Tom and Ray Magliozzi handle their weekly calls from listeners, who…
ask about car-related problems ranging from those a mechanic could reasonably be expected to solve (“The sound when I go downhill is kind of like two rocks being ground inside a Cuisinart.”) to issues that are more tangential (“Can I drive in the car pool lane if I’m alone in my car but pregnant?”). Occasionally, the hosts have to act as mediators in car-related disputes, eg, “My husband is teaching the kids to hot-wire the family minivan; I think it’s a bad idea. Who’s right?”
Goop played on this analogy to highlight how the Car Talk guys expertly engage in the diagnostic process, virtually all their steps mirroring those of expert physician-diagnosticians. He made the crucial point that the fact that Tom and Ray are talking about cars might even make it easier for aspiring MD diagnosticians to learn about their own craft: “The disentanglement from medical facts allows the student of reasoning to observe the process rather than obsess over the content (consider if this were a medical call-in show, Body Talk: ‘My husband makes this terrible noise . . . ’).”
(Interestingly, I have the same feeling when I introduce healthcare audiences to safety concepts by using aviation analogies, or to the notion of high reliability by describing FedEx’s near-perfect performance. Using non-medical examples helps because it thwarts a clinician’s natural tendency to focus on the medical issues rather than on systems, processes, and conceptual models.)
Checklists, computerized prescribing, and time outs are crucial ways to prevent medical mistakes. But if the entire focus of the safety field is on these rote processes and system changes, we will be missing a major source of errors and harm: diagnostic errors. To elevate diagnostic reasoning to its rightful place, while we surely need more research and analysis, I believe that nothing is as important as having role models: physicians who prize diagnostic accuracy, train themselves to be better tomorrow than today, and can humbly teach this skill—and the joy of diagnostic reasoning—to future generations.
Physicians like Goop Dhaliwal.