...asked Ross Wilson at the Quality Forum in Berlin two weeks ago (you can watch the video of his talk, and others, at http://internationalforum.bmj.com). His point was that we just don’t know. The global airline industry is able to say how many passengers die each year (500 in 2007) and why. By comparison, health care is flying blind.
Part of the problem is that we can’t agree on which data to collect or how to interpret them. This week Mohammed Mohammed and colleagues (doi:10.1136/bmj.b780) report their analysis of hospital standardised mortality ratios (HSMRs) in the West Midlands. They looked at two variables—co-morbidity and emergency admissions—used to adjust the ratios for differences in case mix at different hospitals. Because these variables can be affected by systematic differences in how hospitals code patients or decide which emergencies to admit, the authors question claims that HSMRs reflect differences in quality of care.
The HSMR was developed at the Dr Foster Unit at Imperial College. From there Paul Aylin and colleagues challenge the authors’ conclusions in two rapid responses (http://www.bmj.com/cgi/eletters/338/mar18_2/b780). In a third response, Chris Sherlaw-Johnson and colleagues from the Healthcare Commission, which last week severely criticised the care at a hospital in the West Midlands (BMJ 2009;338:b1207, doi:10.1136/bmj.b1207), say they don’t use HSMRs to trigger their investigations. Instead they use a range of mortality data.
Confused readers may find help in John Wright’s editorial (doi:10.1136/bmj.b569). Rather than championing one metric over another or reverting to “measurement nihilism,” he thinks we should explore a range of indicators. These should be used not for comparing one hospital with another but for measuring progress in individual hospitals over time. I hope others will now join this debate on bmj.com.
Cite this as: BMJ 2009;338:b1356