Much has been written about publishing hospital-level or doctor-level outcomes data as a means of stimulating better quality of care. The basic premise is that, by 'naming and shaming', patients will gain visibility into hospital and physician performance, including, amongst other things, how many patients die in their care each year.
Similarly, hospitals that are 'shamed' by the publication of 'underperformance' relative to their peers will feel motivated to improve their delivery of care in order to avoid such future embarrassment and, possibly, a reduction in funding where health systems attach a budgetary envelope contingent upon outcomes.
In the UK, the NHS has ramped up its efforts to bring this performance data into the public domain this very year. But much of the empirical research on the subject is, at best, unconvincing as to whether publishing hospital and physician performance actually drives quality improvement. Peter Smith has written a wonderful paper on the unintended consequences of publishing performance data in the public sector, highlighting some of the flaws in this approach to driving quality improvements.
At the core are a variety of factors that make this a compelling discussion. Firstly, the coding of outcomes or performance data must be consistent across all hospitals. If one hospital is, for example, coding acute coronary syndrome differently from another, the comparative data will not hold up well. One might argue that the general public would never be able to tell whether one hospital coded its data differently from another, since they only see the results.
While this is true, the hospitals themselves and their clinicians will know, and that will undermine the process and participation. Secondly, we must ask whether the results have been risk-adjusted and what method has been used to calculate the risk adjustment. If one hospital or physician is treating, on average, an older and unhealthier cohort of patients, its raw outcomes can be expected to be worse than those of a peer treating healthier and younger patients, even if the quality of care is identical; a sketch of how such an adjustment might work follows below.
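To make that point concrete, one common family of methods is indirect standardisation: fit a patient-level risk model across all hospitals, sum the predicted probabilities of death for each hospital's own case mix to get its 'expected' deaths, and compare observed deaths against that expectation. The sketch below is purely illustrative, not any health system's actual method; the field names (age, comorbidity count), the logistic risk model and the simulated data are all assumptions for the sake of the example.

```python
# Illustrative sketch of indirect standardisation (risk adjustment).
# All field names and numbers are hypothetical toy data, not real hospital records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated patient-level data: age, comorbidity count, hospital, death (0/1).
n = 2000
age = rng.integers(40, 95, size=n)
comorbidities = rng.integers(0, 6, size=n)
hospital = rng.choice(["A", "B"], size=n)
# Hospital B happens to treat an older, sicker cohort in this toy example.
age = np.where(hospital == "B", age + 5, age)
true_risk = 1 / (1 + np.exp(-(-7 + 0.06 * age + 0.4 * comorbidities)))
died = rng.binomial(1, true_risk)

# Step 1: fit one patient-level risk model on the pooled data.
X = np.column_stack([age, comorbidities])
risk_model = LogisticRegression(max_iter=1000).fit(X, died)
expected_prob = risk_model.predict_proba(X)[:, 1]

# Step 2: for each hospital, compare observed deaths with the deaths expected
# given its own case mix. An SMR above 1 suggests worse-than-expected outcomes.
for h in ["A", "B"]:
    mask = hospital == h
    observed = died[mask].sum()
    expected = expected_prob[mask].sum()
    print(f"Hospital {h}: crude mortality {died[mask].mean():.1%}, "
          f"SMR = {observed / expected:.2f}")
```

In this simulated example the crude mortality rates differ mainly because of case mix, while both hospitals' standardised mortality ratios sit near 1 – precisely the distinction a league table built on unadjusted rates would miss.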
The risks of 'cream-skimming' are immense here. Hospitals and clinicians may choose to treat only lower-risk patients, knowing that younger and healthier patients are unlikely to carry the baggage of older patients (comorbidities, post-surgical complications, etc.). Additionally, we must ask whether the quality improvements prompted by underperformance are robbing patients in other service areas of needed treatment and care.
It is not uncommon to see hospitals pour incremental resources into, say, an underperforming obstetrics programme in order to 'get to the average', only to have a previously well-performing service area suffer as funds are redirected to the underperforming programme. Unless health systems and funders are prepared to provide additional funding for underperforming services to improve quality of care, hospitals within the health system are forced to fund quality improvements from their existing budgets. This 'robbing Peter to pay Paul' approach to driving quality improvement has evident challenges.
But let's put all of these issues aside for a moment and assume they don't exist or that we've found solutions for them. The fundamental problem that remains, and the reason 'naming and shaming' is so thorny, is that patients, by and large, are incapable of distinguishing between high-value and low-value health services. As health policy makers debate the merits of publishing performance data, it is incumbent upon them to pay close attention to the 'comprehensibility' of the data. Can the average patient make sense of it? Is it unbiased and easily understood? Does it drive the right behaviour? Can the health system respond to changes in consumer (aka patient) behaviour?
Further compounding the problem is the fact that 'naming and shaming' and the publication of performance data at either the hospital or individual physician level really only work for non-traumatic, non-emergency, elective and relatively innocuous conditions. Motor vehicle accident victims and emergency department patients don't have the time or presence of mind to consult rankings and reports before deciding where to seek treatment. If you need a plastic surgeon or are considering cataract surgery, however, performance data may help inform your decision and help ensure that underperformance is corrected, so that you receive at least average treatment relative to the peer hospitals or physicians you might otherwise have chosen.
Perhaps the closing thought on this subject is best borrowed from Shakespeare's Othello: “... but he that filches from me my good name, robs me of that which not enriches him and makes me poor indeed.”