An MIT researcher has been convicted of faking his results.
Dr Luk Van Parijs seemed destined to do great things in science. A former student at Cambridge, Harvard and Caltech, Dr Van Parijs was an associate professor at the Massachusetts Institute of Technology (MIT) in his early 30s.
There he won huge government grants for his research into genetic disease and his results appeared in top-flight journals, with Nobel Prize winners as co-authors.
But in 2004, it all began to unravel. Some of Dr Van Parijs's colleagues at MIT began to suspect him of faking experimental results. MIT launched an investigation. Dr Van Parijs confessed and in 2005 he was fired.
He was pursued through the courts by the US government and last month finally received his punishment: six months' home detention, plus community service and an order to pay back MIT's grants.
Some think Dr Van Parijs got off lightly. Many researchers around the world wasted a lot of time and taxpayers' money trying to confirm his bogus results.
But he appears to have benefited from his prompt confession and expression of remorse, and pleas for clemency from several leading scientists. Among them was the Nobel Prize-winning biologist David Baltimore, who told a court he believed Dr Van Parijs was aware his acts were "antithetical to the principles of science".
You do not have to be a top academic to suspect that publishing results from experiments that were never done is beyond the pale. But what about not publishing results from experiments that were done because they don't fit your theory? Or fiddling the results of calculations to get the "right" answer?
Welcome to the murky world of real science, where such tactics have been used by researchers since at least the time of Galileo.
Anyone who views science as a disinterested quest for truth may be appalled by such practices. But according to the science writer Michael Brooks, it is time to get real.
In his new book Free Radicals: The Secret Anarchy of Science (Profile Books), Brooks argues science is being held back by its squeaky-clean image, not least because it deters the radical types most likely to make major discoveries.
Brooks points to the example of the most famous scientist of the 20th century, Albert Einstein.
Behind the public persona of a brilliant but unworldly genius was a lifelong troublemaker not averse to ignoring evidence that did not fit his theories.
The signs of his anarchic approach to science were clear early on. Leaving school out of boredom at the age of 16, he eventually got into university, where his professors regarded him as insolent and idle.
Unable to obtain decent references he struggled to find a job, ending up as a lowly patent clerk in Berne, Switzerland.
Undaunted, Einstein knocked off the work quickly, then spent his spare time developing deep ideas on the nature of space, time and matter - most famously his theory of relativity.
Like any scientific insight worthy of the name, relativity made predictions that could be put to the test. And within a few months of its publication in 1905, it had been. It failed dismally.
Einstein did not miss a beat. After congratulating the experimenter, Walter Kaufmann, on a nice piece of work, he simply dismissed the results as implausible.
Einstein's confidence in his theory was finally vindicated a decade later. It is tempting to think all of this merely confirms his genius, but as Brooks points out, that confidence was sometimes hopelessly misplaced.
He cites the little-known case of Einstein's theory of magnetism, which he believed was the result of electrons spinning inside atoms.
He devised a test of his theory and performed the experiment, finding a result within 2 per cent of what his theory predicted. But when others tried to confirm the result, they could not get anything remotely close to it.
That, we now know, was because Einstein's theory was wrong.
That in itself is nothing to be ashamed of; everyone makes mistakes. Yet many years later, his collaborator on the experiment confessed that they had in fact found two values - one of them much closer to what we now know to be correct - but that Einstein ignored it because he was sure his theory was right.
According to Brooks, other renowned seekers of truth have shown similar tendencies. Galileo developed a theory of the tides predicting that just one ocean tide would occur each day, at exactly the same time.
He stuck by his explanation, even though any mariner could have told him it was wrong on both counts.
The British astronomer Sir Arthur Eddington, who in 1919 led efforts to verify the predictions of Einstein's theory of gravity, was so convinced of Einstein's genius he chose to ignore observations suggesting the theory was wrong.
The US physicist Robert Millikan won the 1923 physics Nobel Prize for his studies of the electron after hand-picking data to make his findings more compelling.
Such behaviour cannot be excused on the grounds that these brilliant minds had an instinct for the truth that was confirmed in the end. Millikan's values for the properties of the electron were later shown to be wide of the mark.
Despite this, Brooks argues that such practices are not only far more widespread than scientists would have us believe, but also nothing to be ashamed of. On the contrary, they are a sign that science still attracts the swashbuckling types needed to push back the frontiers of knowledge.
It is unlikely many academics will welcome Brooks's call for greater honesty about the realities of the scientific process. They have invested too much in the image of science as the Golden Road to knowledge.
Yet there can be little doubt that too many scientists simply plod along with little sense of direction. While inventing false trails is not the answer, the history of science suggests that sometimes it pays to take short cuts.
Robert Matthews is visiting reader in science at Aston University, Birmingham, England