The following is a guest post by David Funder. David shares some of his thoughts about the best way forward through social psychology’s recent controversies over fraud and corner-cutting. David is a highly accomplished researcher with a lot of experience in the trenches of psychological science. He is also President-Elect of the Society for Personality and Social Psychology (SPSP), the main organization representing academic social psychologists — but he emphasizes that he is not writing on behalf of SPSP or its officers, and the views expressed in this essay are his own.
*****
Can we believe everything (or anything) that social psychological research tells us? Suddenly, the answer to this question seems to be in doubt. The past few months have seen a shocking series of cases of fraud — researchers literally making their data up — by prominent psychologists at prestigious universities. These revelations have catalyzed concern about a much broader issue: the replicability of results reported by social psychologists. Numerous writers are questioning common research practices, such as selectively reporting only studies that “work” and ignoring relevant negative findings that arise over the course of what is euphemistically called “pre-testing”; increasing N’s or deleting subjects from data sets until the desired findings are obtained; and, perhaps worst of all, being inhospitable or even hostile to the replication research that could, in principle, cure all these ills.
Reaction is visible. The European Association of Personality Psychology recently held a special three-day meeting on the topic, which will result in a set of published recommendations for improved research practice; a well-financed conference in Santa Barbara in October will address the “decline effect” (the mysterious tendency of research findings to fade away over time); and the President of the Society for Personality and Social Psychology was recently moved to post a message to the membership expressing official concern. These are just three reactions that I personally happen to be familiar with; I’ve also heard that other scientific organizations and even agencies of the federal government are looking into this issue, one way or another.
This burst of concern and activity might seem unjustified. After all, literally making your data up is a far cry from practices such as pre-testing, selective reporting, or running multiple statistical tests. In many cases, these practices are even useful and legitimate. So why did they suddenly come under the microscope as a result of cases of data fraud? The common thread seems to be the issue of replication. As I already mentioned, the idealistic model of healthy scientific practice is that replication is a cure for all ills. Conclusions based on fraudulent data will fail to be replicated by independent investigators, and so eventually the truth will out. And, less dramatically, conclusions based on selectively reported data, or derived from other forms of quasi-cheating such as “p-hacking,” will also fade away over time.
The problem is that, in the cases of data fraud, this model visibly and spectacularly failed. The examples that were exposed so dramatically — and led tenured professors to resign from otherwise secure and comfortable positions (note: this NEVER happens except under the most extreme circumstances) — did not come to light because of replication studies. Indeed, anecdotally — which, sadly, seems to be the only way anybody ever hears of replication studies — various researchers had noticed that they weren’t able to repeat the findings that later turned out to be fraudulent, and one of the fakers even had a reputation for generating data that were “too good to be true.” But that’s not what brought them down. The faking of data was revealed only when research collaborators with first-hand knowledge — sometimes students — reported what was going on.
This fact has to make anyone wonder: what other cases are out there? If literal faking of data is only detected when someone you work with gets upset enough to report you, then most faking will never be detected. Just about everybody I know — including the most pessimistic critics of social psychology — believes, or perhaps hopes, that such outright fraud is very rare. But grant that point, and the deeper moral of the story still remains: False findings can remain unchallenged in the literature indefinitely.
Here is the bridge to the wider issue: data practices that are not outright fraudulent, but that increase the risk of misleading findings making their way into the literature. I will repeat: so-called “questionable” data practices are not always wrong (they just need to be questioned). For example, explorations of large, complex (and expensive) data sets deserve and even require multiple analyses to address many different questions, and interesting findings that emerge should be reported. Internal safeguards are possible, such as split-half replications or randomization analyses to assess the probability of capitalizing on chance. But the ultimate safeguard to prevent misleading findings from taking up permanent residence in (what we think is) our corpus of psychological knowledge is independent replication. Until a finding is independently replicated, you never really know.
Many remedies are being proposed to cure the ills, or alleged ills, of modern social psychology. These include new standards for research practice (e.g., registering hypotheses in advance of data gathering), new ethical safeguards (e.g., requiring collaborators on a study to attest that they have actually seen the data), new rules for making data publicly available, and so forth. All of these proposals are well-intentioned, but the specifics of their implementation are debatable, and ultimately they raise the specter of over-regulation. Anybody with a grant knows about the reams of paperwork one must now mindlessly sign, attesting to everything from the exact percentage of time each graduate student has worked on the project to the status of one’s lab as a drug-free workplace. And that’s not even to mention the number of rules — real and imagined — enforced by the typical campus IRB to “protect” subjects from the possible harm they might suffer from filling out a few questionnaires. Are we really going to pile yet another layer of rules and regulations onto the average over-worked, under-funded, and (pre-tenure) insecure researcher? Over-regulation always starts out well-intentioned, but it can ultimately do more harm than good.
The real cure-all is replication. The best thing about replication is that it does not rely on researchers doing less (e.g., running fewer statistical tests or examining only pre-registered hypotheses); it depends on their doing more. It is sometimes said that the best remedy for false speech is more speech. In the same spirit, the best remedy for misleading research is more research.
But this research needs to be able to see the light of day. Current journal practices, especially at our most prestigious journals, discourage and sometimes even prohibit the publication of replication studies. Tenure committees value novel research over solid research. Funding agencies are always looking for the next new thing — they are bored with the “same old same old” and give low priority to research that seeks to build on existing findings, much less to replicate them. Even researchers who find failures to replicate often undervalue them. I must have done something wrong, most conclude, stashing the study in the proverbial “file drawer” as an unpublishable, expensive, and sad waste of time. Those researchers who do become convinced that an accepted finding is, in fact, wrong are unlikely to attempt to publish that conclusion. Instead, the failure becomes fodder for late-night conversations, fueled by beverages at hotel bars during scientific conferences. There, and pretty much only there, can you find out which famous findings are the ones that “everybody knows” can’t be replicated.
I am not arguing that every replication study must be published. Editors have to use their judgment. Pages really are limited (though less so in the arriving age of electronic publishing) and, more importantly, editors have a responsibility to direct the limited attentional resources of the research community to articles that matter. So any replication study should be carefully evaluated for the skill with which it was conducted, the appropriateness of its statistical power, and the overall importance of its conclusion. For example, a solid set of high-powered studies showing that a widely accepted and consequential conclusion was dead wrong would be important in my book. (So would a series of studies confirming that an important, surprising, and counter-intuitive finding was actually true. But most aren’t, I suspect.) And such a series of studies should, ideally, be published in the same journal that promulgated the original, misleading conclusion. As your mother always said: clean up your own mess.
Other writers have recently laid out interesting, ambitious, and complex plans for reforming psychological research, and some have even offered visions of a “research utopia.” I am not doing that here. I seek to convince you of only one point: psychology (and probably all of science) needs more replications. Simply not ruling replication studies inadmissible out of hand would be an encouraging start. Do I ask too much?