How should journals handle replication studies?

Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper.

JPSP is the flagship journal in my field, and I’ve published in it and reviewed for it, so I’m reasonably familiar with how it ordinarily works. It strives to publish work that is theory-advancing. I haven’t seen the manuscript, but my understanding is that the Ritchie et al. experiments were exact replications (not “replicate and extend” studies). In the usual course of things, I wouldn’t expect JPSP to accept a paper that only reported exact replication studies, even if their results conflicted with the original study.

However, the Bem paper was extraordinary in several ways. I had two slightly different lines of thinking about JPSP’s rejection.

My first thought was that given the extraordinary nature of the Bem paper, maybe JPSP has a special obligation to go outside of its usual policy. Many scientists think that Bem’s effects are impossible, which created the big controversy around the paper. So in this instance, a null replication has a special significance that it usually would not. That would be especially true if the results reported by Ritchie et al. fell outside of the Bem studies’ replication interval (i.e., if they statistically conflicted; I don’t know whether or not that is the case).
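To make that parenthetical concrete: one common way to formalize a “replication interval” is as a prediction interval around the original estimate, wide enough to account for sampling error in both the original study and the replication. The sketch below is my own illustration, not anything from the Bem or Ritchie et al. papers; the effect sizes and standard errors are made-up placeholders.

```python
# A minimal sketch (my own illustration, not from the papers discussed) of one
# way to formalize a "replication interval": a 95% prediction interval for a
# replication's effect size, given the original estimate and both standard
# errors. All numbers are hypothetical placeholders.
import math
from scipy import stats

def replication_interval(orig_effect, orig_se, rep_se, level=0.95):
    """Range of replication estimates statistically consistent with the original."""
    z = stats.norm.ppf(1 - (1 - level) / 2)          # ~1.96 for a 95% interval
    margin = z * math.sqrt(orig_se**2 + rep_se**2)   # uncertainty from both studies
    return orig_effect - margin, orig_effect + margin

# Hypothetical values: original d = 0.25 (SE = 0.10), replication SE = 0.08.
lo, hi = replication_interval(0.25, 0.10, 0.08)
print(f"95% replication interval: [{lo:.2f}, {hi:.2f}]")
# A replication estimate falling outside this range "statistically conflicts"
# with the original in the sense used above.
```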

My second line of thinking was slightly different. Some people have suggested that the Bem paper shines a light on shortcomings of our usual criteria for what constitutes good methodology. Tal Yarkoni made this argument very well. In short: the Bem paper was judged by the same standard that other papers are judged by. So the fact that an effect most of us consider impossible was able to pass that standard should cause us to question the standard, rather than just attack the paper.

So by that same line of thinking, maybe the rejection of the Ritchie et al. null replication should make us rethink the usual standards for how journals treat replications. Prior to electronic publication — in an age when journal pages were scarce and expensive — the JPSP policy made sense for a flagship journal that strove to be “theory advancing.” But a consequence of that kind of policy is that exact replication studies are undervalued. Since researchers know from the outset that the more prestigious journals won’t publish exact replications, we have little incentive to invest the time and energy to run them. Replications still get run, but often only if a researcher can think of some novel extension, like a moderator variable or a new condition to compare the old ones to. And then the results might only get published if the extension yields a novel and statistically significant result.

But nowadays, in the era of electronic publication, why couldn’t a journal also publish an online supplement of replication studies? Call it “JPSP: Replication Reports.” It would be a home for all replication attempts of studies originally published in the journal. This would have benefits for individual investigators, for journals, and for the science as a whole.

For individual investigators, it would be an incentive to run and report exact replication studies simply to see if a published effect can be reproduced. The market – that is, hiring and tenure committees – would sort out how much credit to give people for publishing such papers, in relation to the more usual kind. Hopefully it would be greater than zero.

For journals, it would be additional content and added value to users of their online services. Imagine if every time you viewed the full text of a paper, there was a link to a catalog of all replication attempts. In addition to publishing and hosting replication reports, journals could link to replicate-and-extend studies published elsewhere (e.g., as a subset of a “cited by” index). That would be a terrific service to their customers.

For the science, it would be valuable to encourage and document replications better than we currently do. When researchers look up an article, they could immediately and easily see how well the effect has survived replication attempts. It would also help us organize information better for meta-analyses and the like. It would help us keep labs and journals honest by tracking phenomena like the notorious decline effect and publication bias. In the short term that might be bad for some journals (I’d guess that journals that focus on novel and groundbreaking research would show stronger decline curves). But in the long run, it would be another index (alongside impact factors and the like) of the quality of a journal — which the better journals should welcome if they really think they’re doing things right. It might even lead to improvements in some of the problems that Tal discussed. If researchers, editors, and publishers knew that failed replications would be tied around the necks of published papers, there would be an incentive to improve quality and close some methodological holes.

Are there downsides that I’m not thinking of? Probably. Would there be barriers to adopting this? Almost certainly. (At a minimum, nobody likes change.) Is this a good idea? A terrible idea? Tell me in the comments.

Postscript: After I drafted this entry and was getting ready to post it, I came across this article in New Scientist about the rejection. It looks like Richard Wiseman already had a similar idea:

“My feeling is that the whole system is out of date and comes from a time when journal space was limited.” He argues that journals could publish only abstracts of replication studies in print, and provide the full manuscript online.