Last week, Medicare added patient satisfaction data to its hospital reporting website. This is progress, but it raises an interesting question: should patient satisfaction scores be case-mix adjusted?
The motivation to include patient satisfaction data comes from the Institute of Medicine’s inclusion of “patient-centeredness” as one key component of quality. And what could be simpler than asking patients a few questions, as the Centers for Medicare & Medicaid Services (CMS) survey does? (A pdf of the survey, formally known as HCAHPS, or “H-CAPS,” for Hospital Consumer Assessment of Healthcare Providers and Systems, is here). I like the addition of the patient experience data and found the presentation on the CMS site to be fairly reader-friendly (as did US News & World Report’s Avery Comarow). For example, it only took a few seconds to find my hospital’s performance on the summary question, “Would you definitely recommend this hospital?”:
UCSF Medical Center: 80% yes
Average for Northern and Central California: 65% yes
Average for all U.S. Hospitals: 67% yes
[You’ll note that we didn’t do too badly. But it would be legitimate to wonder whether I, being relatively fond of my job and unenthusiastic about being shunned by my colleagues, would have shown you something that made us look crummy. You should have the same skepticism when you look at every hospital’s web site, a point Peter Pronovost, Marlene Miller, and I made in this JAMA article.]
One can debate the relative value of considering patient experience vs. harder measures of quality and safety forever. Personally, I want both: great technical quality (which few patients will be able to judge) as well as a clean room with nice people who listen and communicate well. There’s no reason that the dexterous surgeon needs to be a jerk, nor that the empathic internist needs to be a diagnostic imbecile.
But, like most things in healthcare, patient satisfaction measurement and reporting is trickier than it looks. Think about it in terms of the Donabedian triad of quality measurement: structure, process, and outcomes. One of the advantages of using processes (did the patient get a beta blocker?) and structure (are there intensivists available?) is that, unlike outcome measurement, they don’t require case-mix adjustment to avoid apples-to-oranges comparisons. Just comparing raw 30-day post-CABG mortality rates, for example, would clearly be misleading, since the superb surgeon who operates on older diabetic vasculopaths might well have a higher-than-average mortality rate, notwithstanding his excellence. Just as importantly, if you don’t employ scientifically bullet-proof case-mix adjustment, the Pavlovian response of every provider and hospital whose outcomes are worse than average is… “But–but–but… You don’t understand… My patients are sicker and older!”
Not everybody’s patients can be sicker and older. Except in some Bizarro-World version of Lake Wobegon. But believe me, that is what everybody will claim.
Anyway, it is largely for this reason that most publicly reported quality measures to date have not been of outcomes – the science of case-mix adjustment has not been ready for prime time. But this science is getting better, and the world is clearly moving toward outcome measurement and reporting: CMS and several states now report case-mix-adjusted CABG mortality, and California is now reporting case-mix-adjusted ICU outcomes via its CalHospitalCompare project.
If you think about it, patient satisfaction is simply another outcome measure.
But do satisfaction survey responses need to be adjusted? Well, yes. For example, maternity patients tend to rate their experience more highly than do medical and surgical patients (no surprise there). Well-educated patients tend to be more critical, and older patients more forgiving.
Impressively, the H-CAPS folks thought of some of this, and the data you see on hospitalcompare have been adjusted for the following variables: service line (medical, surgical, or maternity care), age, education, self-reported health status, language other than English spoken at home, emergency room (ER) admission, and the time between discharge and survey completion.
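To make the adjustment idea concrete, here is a rough sketch of one common approach, indirect standardization, in which a hospital’s observed score is compared to the score you would expect given its patient mix. All of the numbers and the service-line baselines below are made up for illustration; this is not the actual HCAHPS patient-mix methodology.

```python
# Indirect standardization sketch (hypothetical numbers, not HCAHPS data).
# National baseline "would definitely recommend" rates by service line:
BASELINE = {"maternity": 0.80, "surgical": 0.68, "medical": 0.62}
NATIONAL_AVG = 0.67  # hypothetical overall national rate

def adjusted_rate(patients):
    """patients: list of (service_line, recommended_bool) tuples.
    Returns a case-mix-adjusted 'would recommend' rate: the national
    average scaled by the hospital's observed/expected ratio."""
    observed = sum(rec for _, rec in patients) / len(patients)
    expected = sum(BASELINE[line] for line, _ in patients) / len(patients)
    return NATIONAL_AVG * observed / expected

# A hospital that sees mostly maternity patients looks great on raw scores:
hospital_a = ([("maternity", True)] * 70 + [("maternity", False)] * 10 +
              [("medical", True)] * 10 + [("medical", False)] * 10)
raw_a = sum(rec for _, rec in hospital_a) / len(hospital_a)   # 0.80 raw
adj_a = adjusted_rate(hospital_a)                             # ~0.70 adjusted
```

In this toy example, the favorable case mix (mostly maternity patients) accounts for much of the hospital’s above-average raw score, so the adjusted figure lands closer to the national average. The same logic, with many more variables, underlies the adjustments CMS describes.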
But is this enough? Probably not. As reported in the New York Times, states showed substantial variation in their average satisfaction scores. For example, 79% of patients in Alabama hospitals “would definitely recommend” their hospital to friends and family, while only 64% of folks in New Jersey, 61% in Florida, and 56% in Hawaii would do the same. I’m guessing that these differences are more likely to be due to the characteristics of Floridians (spend a day visiting my family in Boca if you doubt this) or Hawaiians (“hey, dude, I’m missing some gnarly grinders”) than to differences in the niceness of nurses and doctors.
Would it be possible to capture the personal characteristics that would fuel a robust satisfaction “case-mix adjustment” engine? I’d guess that insurance status would be a predictor of satisfaction; income might be as well. I’d also wager that Nordstrom shoppers are more demanding than Target shoppers, and that people with young kids or busy jobs are less tolerant of long waits than retirees.
The point is that we don’t understand the interactions between these subtle sociocultural and economic variables and the likelihood of rating your hospital or doctor more highly. For now, this isn’t a big deal – the data are just being put out there and folks can draw their own conclusions. (Medicare penalizes hospitals – about $100 per hospital admission – for not reporting, but there is no change in payment based on performance on these measures. Yet.) But if satisfaction is ultimately tied to reimbursement, or if patients or insurers begin making decisions based on satisfaction data, it will be important to either adjust for these variables or at least understand and describe them.
Some day, the presentation of patient satisfaction scores may be similar to that of presidential polling results: “Independents, single mothers, and Asian men over 55 really adored Hospital X.” Or perhaps it will be more like Amazon.com: “Customers like you prefer Hospital Y.”
CMS is throwing the public a bone by reporting patient satisfaction data. And I think CMS is seriously underestimating the public’s ability to ascertain quality… if the public is given accurate and meaningful data. Maybe some people would be influenced by “gee, I had a great appendectomy over there at UCSF.” But I think they would be much more interested in knowing the risk-adjusted mortality rate for acute myocardial infarction (AMI) care at specific facilities at 30-day or 1-year intervals. Werner & Bradlow (2006, JAMA, 296(22)) studied the current process measures used by CMS (in their Hospital Quality Alliance project) and their correlation with predicted risk-adjusted mortality rates in hospitals. They found that the difference in risk-adjusted AMI mortality rates between high-performing and lower-performing hospitals was minimal.
Hospital Compare is the website where CMS posts its performance measure results. But the kind of findings that Werner & Bradlow reported are not found on that website.
The pay-for-performance scheme that reimburses facilities based on these performance measures is faulty because it is not identifying quality measures that truly reflect care at the bedside. The public deserves to know that its money is being well spent. Evaluation or accreditation processes that serve merely to legitimize their own existence pay lip service to the drive for quality in health care.