Abstract of "Negative results are disappearing from most disciplines and countries" (PDF) by Daniele Fanelli, Scientometrics, 2012 (thanks to Brent Roberts for forwarding it):
Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4,600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to have ‘‘tested’’ a hypothesis, reported a positive support for it. The overall frequency of positive supports has grown by over 22% between 1990 and 2007, with significant differences between disciplines and countries. The increase was stronger in the social and some biomedical disciplines. The United States had published, over the years, significantly fewer positive results than Asian countries (and particularly Japan) but more than European countries (and in particular the United Kingdom). Methodological artefacts cannot explain away these patterns, which support the hypotheses that research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.
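A quick note on the headline number: "grown by over 22%" is most naturally read as relative growth in the share of positive results, not a 22-percentage-point jump. A minimal sketch of the arithmetic, using made-up shares rather than the paper's actual figures:

```python
# Illustrative arithmetic only: these shares are hypothetical, not Fanelli's data.
share_1990 = 0.70  # hypothetical fraction of papers reporting positive support in 1990
share_2007 = 0.86  # hypothetical fraction in 2007

relative_growth = (share_2007 - share_1990) / share_1990  # growth relative to the 1990 share
point_change = (share_2007 - share_1990) * 100            # change in percentage points

print(f"Relative growth: {relative_growth:.1%}")        # -> Relative growth: 22.9%
print(f"Change: {point_change:.1f} percentage points")  # -> Change: 16.0 percentage points
```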
My reactions…
Sarcastic: together with the Flynn Effect, this is clearly a sign that we’re getting smarter.
Not: There is no single solution to this problem, but my proposal is something you could call the Pottery Barn Rule for journals. Once a journal publishes a study, it should be obliged to publish any and all exact or near-exact replication attempts in an online supplement, and to link to those attempts from the original article. That would provide a guaranteed outlet for exact replication attempts, which we do not run nearly often enough. And it would give authors, editors, and publishers an incentive to be rigorous, since non-replications would be hung around the original article’s neck. (And if nobody bothers to try to replicate a study, that would probably be a sign of something too.)
Fully support the Pottery Barn Rule! We will cite the paper you mention in a review article we’re currently finishing up on the empirical literature suggesting that the pressure to publish in high-ranking journals is partly to blame for the increase in publication bias, and hence for the decline effect. Watch for it sometime later this year, hopefully.