Can you trust what you read in medical journals? Investigations reveal that the research that guides health care is often tainted by unreliable conclusions.
Research on trial
Biased reporting, hand-picked results, unreliable conclusions: no, you really can't trust what you read in leading medical journals. That at least seems to be the upshot of a flurry of investigations into the reliability of research used to guide health care worldwide. No less worrying is the fact that attempts to tackle the problem seem to be making little headway. And the findings have implications far beyond medicine. Increasingly, disciplines ranging from economics to educational psychology are attempting to become more "evidence-based", borrowing methods routinely used in medicine. While laudable in theory, it's clear the reliability of such methods is all too easily undermined.
Medical researchers have long recognised the need for objectivity when trying to find out "what works". As long ago as 1747, the Scottish naval physician James Lind set up a pioneering study to test his theory that scurvy is caused by a dietary deficiency. He selected six pairs of sailors with closely matched symptoms, and gave them identical diets - along with treatments he thought might alleviate their symptoms, ranging from quarts of cider to strong mouthwash. Within a week one treatment had proved astonishingly effective: a daily ration of oranges and lemons. We now know that the patients had benefited from the vitamin C in citrus fruits.
Today, Lind's methods have evolved into the so-called randomised controlled trial, in which patients are randomly divided into two groups: those receiving the new treatment, and those being given the standard therapy or a placebo as a "control". The random allocation cleverly obviates the need to match the two sets of patients, as Lind was forced to do. If the groups are big enough, they will both contain a representative selection of the population, the only difference between them being whether they get the new therapy or not.
At the end of the trial, the proportion of patients benefiting in each group can be compared. But it's not enough for the new therapy to benefit a higher proportion of patients: the difference between the two groups must be "statistically significant" - that is, large enough that there is only a small probability (conventionally under 5 per cent) of so great a difference arising by fluke alone. All this sounds like a highly reliable means of finding out what works. In reality, however, there's a big problem: studies that fail to produce positive results tend to languish in filing cabinets, unpublished and unknown. As a result, the research that does get published in journals can give a dangerously rosy view of the effectiveness of new therapies.
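The logic of such a trial can be sketched in a few lines of code. The figures below are purely illustrative - a hypothetical trial of 200 patients with made-up response rates, not data from any study mentioned here - and the significance test shown is a standard two-proportion z-test:

```python
import math
import random

def two_proportion_p_value(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability of the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trial: 200 patients, randomly allocated to two groups.
random.seed(1)
patients = list(range(200))
random.shuffle(patients)  # random allocation replaces Lind-style matching
treatment, control = patients[:100], patients[100:]

# Suppose 60/100 improve on the new therapy versus 45/100 on the control.
p = two_proportion_p_value(60, 100, 45, 100)
print(f"p-value: {p:.3f}")  # below 0.05 -> "statistically significant"
```

With identical response rates in both groups the test returns a p-value of 1.0, reflecting no evidence of a difference; the smaller the p-value, the less plausible it is that the gap arose by fluke alone.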
This so-called publication bias has several causes. Researchers working for pharmaceutical companies aren't exactly keen to trumpet their failures, while academics under pressure to publish or perish may decide it's just easier to move on to something new. But it's long been suspected that the principal culprits are the medical journals themselves, which prefer headline-grabbing "breakthroughs" to dull but important failures.
Such suspicions have now won backing from investigators at the University of Washington Medical Center. Dr Seth Leopold and his colleagues sent fake research on the effectiveness of antibiotics in preventing infection after surgery to reviewers for two leading journals. Two versions were sent out to over 200 reviewers: one reporting a positive outcome, the other failing to find anything significant.
In theory, the reviewers should have welcomed either finding, as both have major implications for medical practice. Yet the team found that 98 per cent of the reviewers at one of the leading journals recommended publication of the positive research, compared to just 71 per cent for the equivocal report. When the team sent the papers to another journal, they got a similar result, with the positive research garnering most recommendations for publication.
Reporting the results at a recent conference in Vancouver, Canada, Dr Leopold and his colleagues said that reviewers often claimed the positive paper was better constructed and contained fewer errors than the equivocal paper. In reality, both were identical in everything apart from the final conclusion. Few medical researchers will be surprised by this - not least because many of them will know the frustration of trying to get negative findings published in leading journals. More worrying is the fact that attempts to combat publication bias aren't working.
Since 2005, leading medical journals have demanded that researchers planning to publish with them register their trials before they begin. The idea is to ensure that all trials are reported, regardless of their outcome. Last month, the Journal of the American Medical Association carried the results of a major study of the registration scheme by an international team led by Dr Sylvain Mathieu of Inserm, Paris. The team found that of a sample of over 300 trials published since the scheme was introduced, fewer than half had been properly registered. Even among those that had, many showed signs of selectively reporting positive outcomes to boost the chances of publication.
It's the same story with another study published last week, this time in the online journal PLoS Medicine. A team led by Dr Joseph Ross of Mount Sinai School of Medicine, New York, examined the records of hundreds of trials registered with the US National Library of Medicine. Again, the scheme aims to reduce publication bias, but it, too, does not appear to be working: of a sample of over 700 registered clinical trials examined in detail, fewer than half had ever been published - so no one outside those trials knows what they found.
Taken together, these new studies point to a major scandal which threatens health care worldwide. Unless the findings of all trials are reported - both positive and negative - it is impossible to know what really works. How much time and money has already been wasted testing useless therapies, simply because previous negative findings remain buried in filing cabinets? More serious still, how many patients in clinical trials have faced unnecessary suffering and even death after receiving treatments proved useless years earlier by researchers who couldn't or wouldn't publish their negative findings?
So far the medical community has dismally failed to put its house in order. It can only be a matter of time before it is forced to face the wrath of aggrieved patients - and their lawyers.

Robert Matthews is Visiting Reader in Science at Aston University, Birmingham, England.