What is the Dutch word for “irony”?

Breathless, headline-grabbing press releases based on modest findings. Investigations driven by confirmation bias. Broad generalizations drawn from tiny samples.

I am talking, of course, about the final report of the Diederik Stapel investigation.

Regular readers of my blog will know that I have been beating the drum for reform for quite a while. I absolutely think psychology in general, and perhaps social psychology especially, can and must work to improve its methods and practices.

But in reading the commission’s press release, which describes “a general culture of careless, selective and uncritical handling of research and data” in social psychology, I am struck that those conclusions rest on a retrospective review of a known fraud case, the very case the commission was specifically charged with explaining. So when it wags a finger at a field supposedly rife with elementary statistical errors and confirmation bias, that is a bit much for me.

I am writing this as a first reaction, based on what I’ve seen in the press. At some point, when I have the time and the stomach, I plan to dig into the full 100-page commission report. I hope that, as is often the case when you go from a press release to the actual report, it takes a more sober and cautious tone. Because I do think we stand to learn some important things by studying how Diederik Stapel did what he did. Most likely we will learn what kinds of hard questions we need to ask of ourselves, not necessarily what the answers to those questions will be. And remember: the more the report shocks us, the more atypical the case it describes must be, and the less willing we should be to draw sweeping generalizations from it.

So let’s all take a deep breath, face up to the Stapel case for what it is — neither exaggerating nor minimizing it — and then try to have a productive conversation about where we need to go next.

From Walter Stewart to Uri Simonsohn

Over on G+, Ole Rogeberg asks whatever happened to Walter Stewart. Stewart was a biologist at NIH in the ’80s and ’90s who became involved in rooting out questionable research practices.

Rogeberg posts an old Omni Magazine interview with Stewart (from 1989) in which Stewart describes how he got involved in investigating fraud and misconduct and what led him to think that it was more widespread than many scientists were willing to acknowledge. If you have been following the fraud scandals in psychology and the work of Uri Simonsohn, you should read it. It is completely riveting. And I found some of the parallels to be uncanny.

For example, in Stewart’s first investigation of questionable research, one of the clues that raised his suspicions was a pattern of too-similar means in a researcher’s observations. A similar pattern (estimates closer together than chance would allow) led Simonsohn to finger two researchers for misconduct.
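To make the idea concrete, here is a minimal simulation of that kind of check. To be clear, this is my own rough sketch of the logic, not Stewart’s or Simonsohn’s actual procedure, and the function name and numbers are invented for illustration. The idea: if every condition truly shared one population mean, sample means based on n observations should scatter with a standard deviation of roughly sd/√n, so we can estimate how often chance alone would produce means as tightly clustered as the ones reported.

```python
import numpy as np

def too_similar_p(means, sd, n, reps=100_000, seed=1):
    """Estimate how often chance would produce condition means at least as
    tightly clustered as those observed, assuming all conditions share one
    population mean and a common within-cell SD. A tiny result means the
    reported means are more similar than sampling error should allow."""
    rng = np.random.default_rng(seed)
    observed_spread = np.std(means, ddof=1)
    # Simulate `reps` sets of len(means) sample means under that null.
    sims = rng.normal(np.mean(means), sd / np.sqrt(n), size=(reps, len(means)))
    return np.mean(sims.std(axis=1, ddof=1) <= observed_spread)

# Hypothetical example: three condition means that are suspiciously close
# given a within-cell SD of 1.5 and 30 participants per cell.
print(too_similar_p([5.01, 5.03, 5.05], sd=1.5, n=30))  # ~0.005
```

Nothing fancy, but it captures why a run of near-identical means is a red flag: sampling error guarantees a certain amount of scatter, and honest data rarely come out cleaner than chance allows.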

And anticipating contemporary calls for more data openness (right down to the title of Simonsohn’s working paper, “Just Post It”), Stewart said:

“With present attitudes it’s difficult for an outsider to ask for a scientist’s raw data without appearing to question that person’s integrity. But that attitude absolutely has to change… Once you publish a paper, you’re in essence giving its ideas away. In return for benefits you gain from that – fame, recognition, or whatever – you should be willing to make your lab records and data available.”

Some of the details of how Stewart’s colleagues responded are also alarming. His boss at NIH mused publicly on why he was wasting his talents chasing fraud. Others were even less kind, calling him “the terrorist of the lab.” And when he got into a dispute with his suburban neighbors about not mowing his lawn, Science — yes, that Science — ran a gossip piece on the spat. (Some of the discussions of Simonsohn’s earlier data-detecting efforts have gotten a bit heated, but I haven’t seen anything get that far yet. Let’s hope there aren’t any other social psychologists on the board of his HOA.)

The Stewart interview brought home for me just how perennial, and perhaps structural, these issues are. But one difference from 23 years ago is that we have better tools for change: journal editors’ gatekeeping powers are weakening in the face of open-access journals and post-publication review.

Will things change for the better? I don’t know. I feel like psychology has an opportunity right now. Maybe we’ll actually step back, have a difficult conversation about what really needs to be done, and make some changes. If not, I bet it won’t be 20 years before the next Stewart/Simonsohn comes along.

The ongoing legacy of a case of scientific misconduct

Almost a decade ago, a scientific misconduct scandal shocked social psychologists. A prominent researcher on a career fast track was discovered to have fabricated data and committed other forms of misconduct in four articles in prominent journals (JPSP, PSPB, and Psychological Science). The studies were funded by NIH and published while she was at Harvard. A joint investigation by NIH and Harvard ended with the researcher, Karen Ruggiero, admitting to misconduct, retracting the articles, and leaving academia.

The other day I read a post by Ben Goldacre about a new blog called Retraction Watch that follows scientific retractions. Goldacre mentions a study that followed up on citations of a retracted article from the late ’80s. That immediately reminded me of the Ruggiero incident, which was a big deal when I was a grad student. And it made me wonder: are people still citing Karen Ruggiero’s retracted papers?

Fortunately, it’s relatively easy to run a quick check via Google Scholar: look up each (now-retracted) article, click the “Cited by” link, and count the hits. The investigation report came out in December 2001, and the last of the retractions was published in March 2002. We should probably allow for some publication lag, so let’s forgive anything with a publication year of 2002 or earlier. How many citations are there from 2003 onward?
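(The tallying logic is simple enough to sketch in a few lines. The years below are invented for illustration; the actual tallies came from reading Google Scholar’s “Cited by” lists by hand.)

```python
GRACE_CUTOFF = 2002  # forgive anything published in 2002 or earlier

def post_retraction_citations(citing_years):
    """Return the publication years that postdate the grace cutoff."""
    return [year for year in citing_years if year > GRACE_CUTOFF]

# Hypothetical years for the papers citing one retracted article:
example = [1999, 2001, 2002, 2003, 2004, 2006, 2008, 2010]
print(len(post_retraction_citations(example)))  # -> 5
```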

Here’s what I found:

  • Ruggiero, K.M., & Marx, D.M. (1999). Less pain and more to gain: Why high-status group members blame their failure on discrimination. Journal of Personality and Social Psychology, 77, 774-784. Cited 7 times since 2003. (Google Scholar gives 9 hits, but 2 appear to be duplicates.)
  • Ruggiero, K.M., Steele, J., Hwang, A., & Marx, D.M. (2000). Why did I get a ‘D’? The effects of social comparisons on women’s attributions to discrimination. Personality and Social Psychology Bulletin, 26, 1271-1283. Cited 2 times since 2003.
  • Ruggiero, K.M., & Major, B.N. (1998). Group status and attributions to discrimination: Are low- or high-status group members more likely to blame their failure on discrimination? Personality and Social Psychology Bulletin, 24, 821-838. Cited 9 times since 2003.
  • Ruggiero, K.M., Mitchell, J.P., Krieger, N., Marx, D.M., & Lorenzo, M.L. (2000). Now you see it, now you don’t: Explicit versus implicit measures of the personal/group discrimination discrepancy. Psychological Science, 11, 57-67. Cited 3 times since 2003.

[Let me pause here to note that the investigation concluded Ruggiero acted alone. I have listed complete citations, but please keep in mind that her co-authors were not responsible for the misconduct.]

Are these numbers a lot or a little? You can judge for yourself, but it’s at least worth noting that all of them are greater than zero, and some of the citations are as recent as 2010. I did not read the citing articles, but judging from their titles, none appeared to be a discussion of scientific misconduct; all seemed to cite the retracted papers for their substance.

How could that happen? Since most people find articles to cite through electronic databases, I thought I’d take a look at how these articles are listed. When I looked them up in PsycINFO, all four listings clearly state that the articles were retracted. But the same isn’t true of other databases.

When I looked up the JPSP article in Google Scholar, clicking on the title took me to a ScienceDirect page (ScienceDirect is a product of Elsevier, although Elsevier does not publish JPSP). The listing contained the title, abstract, and so on, but said nothing about the article having been retracted.

Clicking on the two PSPB articles and the Psychological Science article in Google Scholar led me to entries in the Sage Journals Online database (Sage publishes both journals). None of those entries mentioned the retractions. In fact, in addition to the usual information on each article (title, abstract, etc.), Sage goes a step further and lists other articles that cite them!

Google Scholar seems to give different links depending on whether you are on an institutional network with access to certain databases, so your results may vary depending on where you are (and perhaps what search terms you use). It’s also worth noting, though, that Google Scholar itself did not flag the articles as retracted. Sometimes my searches also turned up the retraction notices on the same results page (though always lower down), and sometimes they didn’t.

Perhaps worst of all, when I retrieved the electronic full text of all four articles, I got them in their original form. None was marked to indicate that it had subsequently been retracted.

Is this a problem? I think it is. Papers do get corrected or retracted from time to time (not always for reasons as sinister as scientific misconduct), and it is important that researchers not keep citing them as if nothing had happened. I don’t know whether this case is an anomaly, but it does make you wonder how many databases are failing to record retractions properly, and what effect that is having on science.