Experts warn study papers can be dangerously misinterpreted
Insomnia: questions over research flaws must not be ignored
With around one in three of us having bouts of insomnia, news about its health effects always makes headlines.
So when researchers reported finding no link between insomnia and premature death earlier this month, the story was picked up worldwide.
But what looks like good news for insomniacs is actually the latest manifestation of a scandal that should keep us all awake at night.
That’s because the researchers did find a link – but ignored the evidence.
To be clear, they weren’t trying to hide anything. It’s actually worse than that. They fell into a trap scientists have been warned about for decades, but have simply ignored.
As such, this latest study joins countless others in fields ranging from medicine and psychology to economics that have reached conclusions which are just flat wrong.
The idea that scientific studies can be misleading will not surprise anyone tired of reading that, say, coffee is beneficial to our health one week, only to read the opposite a few weeks later.
But the scientific community has always been able to blame such flip-flopping on poor study design.
This has masked a far more fundamental problem: that the textbook methods used by scientists to turn data into insight are fundamentally flawed.
These methods were laid down in the 1920s – and almost immediately attacked by statisticians as dangerously misleading.
Yet, incredibly, the scientific community chose to ignore the warnings, and continues to use the methods to this day.
Concern about the impact on the reliability of research led the American Statistical Association in 2016 to issue an unprecedented public statement urging the scientific community to heed the warnings – so far to little effect.
Now that could be about to change. Early in the new year, the ASA will be publishing proposals for more reliable ways of extracting insight from data.
The hope is that researchers will adopt these methods, making their findings more trustworthy.
But to do that, researchers must be weaned off their obsession with so-called “statistical significance”, widely used as the acid test of whether a finding is genuine.
Scientists know it’s all too easy to be fooled by fluke results, and try to minimise the risk by calculating the statistical significance of their findings.
They believe this measures the chances their result is just a fluke. But it doesn’t – as statisticians have been warning for years.
As a result, countless studies have found “statistically significant” evidence for everything from the benefits of new drugs to the cancer risks from chemicals that in reality do not exist.
At the same time, many scientists routinely take non-significance to mean an effect doesn’t exist.
This can lead to the bizarre situation where they find evidence for a genuine effect – only to dismiss it because it is “statistically non-significant”.
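That trap can be seen with a simple worked example. The numbers below are invented purely for illustration, not taken from any real study: a genuine nine per cent increase in risk, tested in groups of a thousand people each, comes out “statistically non-significant” simply because the sample is too small to detect it.

```python
import math

# Hypothetical illustration: 60 deaths among 1,000 exposed people
# versus 55 among 1,000 unexposed -- a genuine ~9% relative risk increase.
a, n1 = 60, 1000   # exposed group: deaths, total
b, n2 = 55, 1000   # control group: deaths, total

rr = (a / n1) / (b / n2)          # relative risk
log_rr = math.log(rr)

# Standard error of log(RR) via the usual delta-method formula
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)

# Two-sided p-value from the normal approximation
z = log_rr / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 95% confidence interval for the relative risk
lo = math.exp(log_rr - 1.96 * se)
hi = math.exp(log_rr + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.2f}")
```

Run it and the relative risk is about 1.09 with a p-value far above 0.05: a real effect, declared “non-significant” and therefore, by the flawed convention, non-existent. The wide confidence interval, which spans 1, is the honest summary: the data are compatible both with no effect and with a substantial one.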
The insomnia study is a classic example. The researchers, from Flinders University in Australia, reviewed 17 studies for evidence of a link between insomnia and premature death. The findings covered almost 37 million people, and compared the relative risk of death of the insomniacs with those without the disorder.
Each of the 17 studies gave different outcomes, but by pooling all the evidence the researchers showed there is a six to seven per cent increased risk of death among those with frequent, ongoing insomnia.
While relatively small, the sheer number of people with insomnia means that this poses a significant public health issue.
Yet in reporting their findings in the current issue of the journal Sleep Medicine Reviews, the researchers concluded “there was no difference” in the risks of death, pointing to the “lack of an association between mortality and insomnia”.
So how did the six to seven per cent risk difference suddenly become no difference at all? Because the difference was statistically non-significant – which the researchers took to mean non-existent.
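The pooling step the researchers used is standard meta-analysis: weight each study’s estimate by its precision and combine. A minimal sketch of fixed-effect inverse-variance pooling, using made-up study figures rather than the actual numbers from the 17 studies, shows how a pooled risk increase of a few per cent can still straddle the “no difference” line:

```python
import math

# Hypothetical per-study results: (relative risk, standard error of log RR).
# Purely illustrative -- not the real figures from the reviewed studies.
studies = [(1.10, 0.08), (0.98, 0.12), (1.05, 0.06), (1.12, 0.15), (1.04, 0.05)]

# Fixed-effect inverse-variance pooling, done on the log scale
weights = [1 / se**2 for _, se in studies]
pooled_log = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log)
lo = math.exp(pooled_log - 1.96 * pooled_se)
hi = math.exp(pooled_log + 1.96 * pooled_se)

print(f"Pooled RR = {pooled_rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With these illustrative inputs the pooled relative risk comes out around 1.05, yet the confidence interval just crosses 1. Report only “significant or not” and that five per cent increase vanishes into “no difference”; report the interval and the evidence for a modest increased risk stays visible.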
A panel of experts convened by the ASA in 2016 highlighted this as a dangerous misconception that can lead to fundamentally mistaken conclusions.
In the case of the insomnia study, it turned an increased risk into none at all – which flies in the face of research suggesting insomnia adversely affects blood pressure and heart-rate.
Yet the authors of the study are far from unique. A forthcoming survey of studies published in top medical journals shows that even after the ASA statement, “non-significant” results are still being misinterpreted.
Of over 100 such results reported in four leading journals during 2016-17, more than half were interpreted as showing there was “no treatment benefit”.
The implications are alarming. Potentially life-saving treatments are being rejected because scientists are making a basic error in analysing their data.
Worse, the standard procedures for preventing faulty conclusions gaining traction are not working. Neither the specialist editors working for the top journals nor the expert referees who review research papers are spotting these basic mistakes.
The most likely reason is that they too don’t understand there’s a problem. Another possibility is that they just can’t accept that so widespread a practice can be wrong.
What is clear is that scientists will carry on using the flawed methods until there’s a positive incentive to do otherwise.
The publication by the ASA of alternative methods is a step in the right direction. But the top research journals have a critical role to play in compelling scientists to do better.
Until they act, it’s hard to escape the conclusion they’re just like tabloid newspapers, and more concerned with eye-catching headlines than the truth.
Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK