Yesterday, after I emailed the CITI program about their continuing misrepresentation of Milgram, I got an email back from them apologizing and promising a revision (déjà vu?). This time, I have asked them to keep me updated on any changes.
I think this was an error of omission, not commission. But that doesn’t absolve them. CITI’s courses are required of all research personnel at my institution (and lots of others), and I assume they are being paid for their services. They need to get it right.
In my correspondence with CITI so far I have focused narrowly on Milgram. But there were other problems with the training program, some of which may be symptomatic of how the larger IRB-regulatory system views social and behavioral research.
Milgram, in more detail
The most obvious problem with CITI was that Milgram’s obedience studies were described as unethical. That’s just not defensible. In fact, Milgram’s research was replicated with IRB approval just a few years ago. Milgram may be relevant for a training course on ethics, but not as an exemplar of bad research.
One way that Milgram is relevant is because of what his research tells us about how subjects may respond to an experimenter’s authority. It would be reasonable to conclude, for example, that if a subject says they are considering quitting an experiment, researchers must be careful not to pressure the subject to stay enrolled. (Saying “the experiment requires that you continue” is pretty much out.)
Milgram also found that obedience varied as a function of a variety of factors, including the physical and psychological distance between the subject and the “experimenter.” For example, when instructions were delivered by telephone rather than in person, obedience rates were much lower. In a modern context, we might reasonably infer that coercion is less of a risk in Internet studies than in face-to-face lab experiments. Quitting an experiment is a lot easier if it’s just a matter of closing a browser window, rather than telling a stern experimenter in a white lab coat that you want to go home.
I’d also say that Milgram’s research is relevant to interactions among research personnel. It’s a good reminder of the responsibility that PIs bear for the actions of their research assistants and other staff. This doesn’t mean that front-line personnel cannot or should not be held responsible for their actions. But it is wise to recognize that a PI is in a position of authority, and to consider whether that authority is being used wisely.
Milgram’s obedience research is also specifically relevant for IRBs and other ethics regulators, who are in a position to prevent research from being carried out. Milgram has profoundly affected the way we understand the behavior of ordinary people in Nazi Germany (and yes, for that reason CITI’s misrepresentation is especially perverse). It has been enormously influential across a wide range of basic and applied research in social psychology and other behavioral sciences: Google Scholar reports over 4000 citations of Milgram’s book Obedience to Authority and over 2000 citations of the original academic paper. How much would have been lost if a skittish IRB had read over the protocol and insisted on watering it down or rejecting it outright?
Beyond just Milgram though, there are other problems with the course — problems that may be emblematic of larger flaws in how the regulatory community thinks and talks about behavioral research.
Even if you accept that some of the other studies cited by CITI were unethical, a major problem is the coarsening comparison to Nazi medical experiments and the Tuskegee syphilis study. Gratuitous Nazi comparisons are so common as to be a running joke on the Internet. But in more serious terms, organizations like the ADL rightly object when public figures make inappropriate Nazi comparisons. Such comparisons do double damage: they diminish the real suffering of the Holocaust, and they overinflate whatever is being compared to it.
Let’s take what one commenter suggested is the (relatively) worst of the behavioral studies CITI cited: the Tearoom Trade study. A researcher observed men having sex with other men in a public bathroom, and in some cases he followed them home and posed as a health service interviewer to gather demographic data. Does that belong in the same category as performing transplantations without anesthesia, burning people with mustard gas, and a long list of other atrocities? Or even with knowingly denying medical treatment to men with syphilis?
Such comparisons cheapen the Holocaust and other atrocities. They also suggest grossly overblown stakes for regulating behavioral research. This mindset is by no means limited to CITI – references to the Nuremberg trials are de rigueur in almost every discussion of research ethics. In relation to what goes on in most social and behavioral studies, that’s absurd. That’s not to say that behavioral research cannot do real and deep harm (especially if we are talking about vulnerable populations). But ethics training ought to help researchers and regulators alike see the big picture and sharpen their perspective, not flatten it.
In favor of balance
Ideally, I’d like to see ethics training take on a broader scope. I’d love to see some discussion of reasonable limits on IRB reach. The CITI History and Ethics module states: “Highly motivated people tend to focus on their goals and may unintentionally overlook other implications or aspects of their work. No one can be totally objective about their own work.” In context, they are talking about why research needs to be independently reviewed. But the same sentences could apply to regulators, who may be biased in favor of regulatory review, and who may underestimate the slowing and chilling effects of regulation on researchers.
A great deal of behavioral science research consists of speech: a researcher wants to ask consenting adults some questions, or show them some pictures or videos, and then record what they say or do. Legal scholar Dale Carpenter has suggested that all IRBs should include a First Amendment expert. That’s not likely to happen. But academic freedom is central to the mission of universities. The AAUP has raised concerns about the effects of IRB regulation on academic freedom. Wouldn’t it be a good idea to make sure that ethics training for university researchers and regulators includes some basic coverage of academic freedom?
Then again, training is itself a burden. Ethics training is necessary, but if you overstuff it, you’ll just lose people’s attention. (The same can be said of consent forms, by the way.) I’d just settle for some basics with perspective. Principles of informed consent; working with vulnerable populations; evaluating risk; protecting privacy. All of these are important matters that researchers should know about – not to keep us from morphing into Nazis, but out of everyday decency and respect.