T Delamothe, deputy editor, BMJ
Putting the finishing touches to an editorial some years ago, I decided at the last minute that "Wanted: guidelines that doctors will follow" was a better title than "Wanted: doctors who will follow guidelines." My thinking was that doctors aren’t automata. You can’t just write a few lines of code to achieve the desired outcome; human beings are more complicated than that.
I wish I’d gained further insight into behavioural change since that midnight revelation, but I haven’t. Ruminating on the difficulties of getting people to do the right thing, I console myself with Immanuel Kant’s claim that "Out of the crooked timber of humanity no straight thing was ever made."
The crooked timber of humanity was much in evidence at last week’s congress on peer review and biomedical publication. There seems no limit to what some researchers will do to come up with a publishable paper. They will include "guest" authors on the paper when they don’t deserve to be there and omit others who should be there ("ghosts") (doi:10.1136/bmj.b3783).
They’ll fudge, or deny, their competing financial interests. They’ll exaggerate the importance of secondary, statistically significant outcomes when the primary outcome isn’t altered by an intervention, and they’ll soft-pedal any limitations.
They are masters of "spin," with claims made in article discussions and conclusions bearing little relation to the actual findings (doi:10.1136/bmj.b3779). They will blithely embark on underpowered studies without first ascertaining whether the question has already been convincingly answered—thereby wasting money and putting patients at risk.
The Committee on Publication Ethics has been exploring the outer reaches of such research misconduct since 1997. Its report card up to 2008 lists 115 cases of unethical research, 34 of plagiarism, and 23 of data fabrication or falsification.
Editorial offices contain their fair share of crooked timber and don’t always enforce the requirements they so fervently endorse. And even when they do, they can be confounded by authors who fill in the forms according to what they think the editors want rather than the truth.
The mandatory registration of trials at their inception might have been expected to curtail some of the finagling that went on, but not so. Presenters at the congress have been finding that variations between the details of a particular trial listed on a register and the published report are a rich, if alarming, seam to mine. Primary outcomes morph into secondary ones (and vice versa) and eligibility criteria shift, usually to increase the size of the recruitment pool.
Back on the wards and in general practices, doctors aren’t doing the right thing either. England’s Department of Health estimates that avoidable adverse clinical events are costing NHS hospitals £2 billion a year (doi:10.1136/bmj.b3678). In the United States, campaigns to promote better outpatient use of antibiotics seem to be going nowhere (doi:10.1136/bmj.b3785). Guidelines there are aplenty, no doubt; so why don’t people follow "best practice" when it’s spelt out for them?
It may be human nature, but do we have to leave it at that?
Cite this as: BMJ 2009;339:b3813