Well, not all of us, but some of us, especially in Manhattan, Los Angeles, and Chicago.
With value-based payments coming down the pike (this calendar year applies toward the 2012 ranking), patient satisfaction and HCAHPS are a priority. This element carries a 30% weight in the composite score.
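To make the stakes concrete, here is a minimal sketch of how a 30% patient-experience weight shapes a value-based composite. The hospital scores, the helper function, and the assumption that the remaining 70% goes to clinical measures are all hypothetical for illustration, not figures from the articles.

```python
def composite_score(clinical: float, patient_experience: float,
                    w_patient: float = 0.30) -> float:
    """Weighted composite: patient experience carries w_patient of the total.

    Assumes (hypothetically) that the remaining weight falls to clinical
    measures; the real program splits domains in its own published way.
    """
    return (1 - w_patient) * clinical + w_patient * patient_experience

# Two hypothetical hospitals with identical clinical performance but
# different HCAHPS results diverge by 30% of the satisfaction gap.
a = composite_score(clinical=80.0, patient_experience=90.0)  # ≈ 83.0
b = composite_score(clinical=80.0, patient_experience=60.0)  # ≈ 74.0
print(a, b)
```

The point of the sketch: even with clinical care held constant, a 30-point swing in satisfaction moves the composite by 9 points, which is why hospitals are suddenly tucking patients in.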
Yesterday, in both the WSJ (gated) and NYT, long feature articles discussed how hospitals are adapting to improve the patient experience. Of note, fellow SHM’er (and friend) Kathy Hochman is the star:
The article reviews familiar ground, but what struck me was the lengths to which facilities are upgrading service to obtain scores of 9 or 10. It felt like Groundhog Day redux, but instead of antibiotic overkill in the case of CAP, we are now tucking patients in and leaving mints on the pillow. Will this lead us down the same path of squandering resources to play to the measures? Probably.
Service is great, and we need more of it, but there is a fine line. When I buy a suit, the sales clerk can say hello, ask if I need assistance, and stand 30′ back if I defer. Conversely, he can do the same and hover. A waiter can check in at my table every 5 minutes, or every 10, but either way, the tip is 20%.
Unlike fastidious attention to hand washing and infection control (we cannot spend enough dollars there) or other clinical measures, the service angle can get impractical, and be forewarned, it will. Will it have positive externalities for the organization and generate goodwill amongst staff? No doubt, and Nordstrom or the Ritz-Carlton would not object. However, this comes at the expense of our other efforts to, say, prevent unnecessary readmissions. Again, it is the five- vs. ten-minute thing.
Let us just hope the pieces were featuring extreme instances.
Now, on the “we stink” front, I highlight the Hospital Compare website, and the inserted satisfaction scores excerpted below:
Of the 295 Hospital Referral Regions, Manhattan is dead last, Chicago is #292, Long Island is #288, and Los Angeles is #282. Why?
It is not one hospital, or one physician group, but an entire locale. Do we impugn a population, its normative expectations, the homegrown physician training, or something else? What is going on here, and why are these three enormous cities such outliers (Houston, incidentally, is #133)? I believe it is more arcane than staff and patient interaction, and I will revisit this subject in the future with a more in-depth discussion, if a publication does not beat me to it. It is not so simple.
It is tempting, if you live in these cities, to attribute the scores to a faulty instrument. HCAHPS bashing is a recreational sport nowadays, and we all question its validity, particularly in its application to hospitalists.
This interesting JHM paper, released last week, contrasted HCAHPS scores between hospitalists and PCPs in three Massachusetts facilities:
There is plenty to garner from the study (with its inherent limitations, of course), but my interest rested with uncovering misleading differences between specialties, especially as they relate to HCAHPS. To my surprise, the groups were equivalent, and at least amongst this sample of 8,000, the survey played no favorites. If validity were an issue, and the survey were “rigged” from the outset, I would have expected superior PCP performance. This was not the case. We must await more data.
I crave thoughts on HCAHPS: upsides, downsides, and whines are all desired, so please sound off.
Mason City, I am talking to you!
UPDATE: Here are additional views on the subject. Even if care standards are applied identically and equivalent outcomes are achieved, should hospitals (and cities) be penalized if patients of lesser means rate them differently, or if regional expectations are dissimilar? Adjusting for these factors, then, seems rational and is not a free pass for the affected institutions.