Right or wrong, it's science
Toronto Star
11 September 2005
By Kurt Kleiner
TO THE CASUAL newspaper reader, science can seem both wonderful and
baffling. Wonderful because of the power it has to explain the world we
live in. And baffling because of the steady stream of contradictory
results it seems to produce, from the shifting age of the universe to
whether saturated fat is really so bad for you.
Now it turns out
there's a simple reason. Most published scientific research is wrong.
According to a paper in the journal PLOS Medicine, when you look at all
of the things that can go wrong in the course of scientific research,
it becomes statistically improbable that even half of the published
research will stand up to further scrutiny.
Surprisingly, the paper wasn't written by a science historian, philosopher, journalist,
or some other outside critic. It comes from John Ioannidis, an
epidemiologist at the University of Ioannina School of Medicine in
Greece. Perhaps even more surprisingly, other scientists, science
journal editors, and statisticians agree with him. One called the
quality of most published research a "trade secret."
How can it
be that billions of dollars worth of scientific research, conducted by
our best and brightest minds, and published in peer-reviewed journals,
still has a heads-or-tails chance of being right? The answer gives an
insight into how the scientific process works.
The problem, according to Ioannidis, is that most research -- at least most medical
research -- isn't designed well enough to produce dependable results.
Many studies are so small that there's a good chance the results they
report are just coincidence. Others report effects so meagre -- for
instance a drug that helps only 10 percent of patients -- that they are
likely to be statistical noise.
Other studies don't make it
clear from the outset what they're looking for, which allows
researchers to "fish" for results after the fact. Researchers who are
financially interested in the results, or simply trying too hard to
prove a pet theory, are more likely to produce bad results. And
finally, research into areas that haven't been well-studied is also
more likely to be wrong.
In his paper, Ioannidis assigns all of
these potential sources of error a number value, and runs them through
a mathematical model. His conclusion is that it's very hard to do
research that has a more than even chance of being right. Even a large,
well-designed, randomized controlled trial with little researcher bias
has only an 85 percent chance of being correct. A trial where
everything is done well but the sample size is too small has only a 23
percent chance of being right. A too-small epidemiological study has
only a 12 percent chance of being right.
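At its core, Ioannidis's model estimates the chance that a claimed positive finding is actually true, given a field's false-positive rate, the study's statistical power, the pre-study odds that the probed effect is real, and a "bias" term covering fishing, design flaws and conflicts of interest. The sketch below, in Python, is an illustration based on the formula in the published paper rather than anything spelled out in this article; the scenario parameters (pre-study odds `R`, bias `u`) are assumptions chosen to match the figures quoted above.

```python
def ppv(alpha, power, R, u):
    """Chance that a claimed positive finding is true (positive
    predictive value), per the model in Ioannidis's paper.

    alpha: false-positive rate of the statistical test, e.g. 0.05
    power: 1 - beta, the chance the study detects a real effect
    R:     pre-study odds that the probed relationship is true
    u:     bias -- the fraction of would-be negative results that
           get reported as positive anyway
    """
    beta = 1.0 - power
    # Claimed positives that are genuinely true effects:
    true_positives = power * R + u * beta * R
    # All claimed positives, true and false alike:
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Large, well-run randomized trial, little bias, even pre-study odds:
print(round(ppv(alpha=0.05, power=0.80, R=1.0, u=0.10), 2))  # 0.85
# Same rigour, but underpowered and longer odds:
print(round(ppv(alpha=0.05, power=0.20, R=0.2, u=0.20), 2))  # 0.23
# Underpowered exploratory epidemiological study:
print(round(ppv(alpha=0.05, power=0.20, R=0.1, u=0.30), 2))  # 0.12
```

The pattern the article describes falls out directly: shrink the power (a too-small sample) or the pre-study odds (a poorly studied area), or raise the bias, and the chance of being right drops well below a coin flip.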
Ioannidis has real-world examples to back up his conclusion. In another study, this
one published in July in the Journal of the American Medical
Association, he looked at 49 papers that had been published in
reputable medical journals and cited more than 1,000 times by other
scientists in their own work. Of these, 14, or almost a third, were
later partly or fully refuted by larger, better-conducted studies.
If the conclusions startle most of us, for scientists they're old news.
"If you're a lay person you say, 'I can't believe it.' But any scientist
will say, 'Of course, that's how science works.' You do your damn best,
but it's messy, no matter how skilled you are. Science is dreadfully,
dreadfully difficult," says Drummond Rennie, deputy editor of JAMA.
Rennie thinks Ioannidis' paper is important, because it reminds scientists
just how often papers are wrong, and can teach the public to think
critically about reported results.
"It isn't as astounding as
they make it out to be. But it's good to publish things like that
because they raise a lot of interesting questions," says Solomon
Snyder, senior editor at the Proceedings of the National Academy of
Sciences. He says there's no way to eliminate papers that turn out to
have false conclusions -- in fact, science would be hurt if you tried.
If journals only published the papers that were most likely to be right --
large studies looking into well-researched areas -- some of the most
important, groundbreaking research would never get published. Even if a
study turns out to be wrong, if it raises questions and sparks ideas it
can open up new avenues of research, he says.
"What John is
trying to do is to be provocative. I think that is not a bad thing,"
says Barbara Cohen, an editor at PLOS Medicine, where the paper was
published. "Scientific truth is a moving target. I don't think it would
be useful to not allow people to draw conclusions that will change, to
speculate and propose hypotheses," she says.
Scientists take it
for granted that a lot of research will get it wrong. They are
skeptical of small initial studies; they read the paper's methodology
looking for potential problems; they know that it's best to weigh all
of the research before reaching a conclusion. Laymen shouldn't throw up
their hands in despair, but instead should learn to think just as
critically about science.
"We should accept that most research
findings will be refuted," Ioannidis says. "Some will be replicated and
validated. The replication process is more important than the 'first'
discovery. People should be critical. Critical thinking is not cynical
thinking," he says.