More ANPRM coverage in the blogosphere

A quick post, as I’m on vacation (sort of)… Institutional Review Blog has some absolutely terrific coverage of the proposed IRB rule changes, aka the ANPRM. The blogger, Zachary Schrag, is a historian who has made IRBs a focus of his research. In particular, his Quick Guide to the ANPRM is a must-read for any social scientist considering writing public comments. All the coverage and commentary at his blog is conveniently tagged ANPRM so you can find it easily.

Also, thanks to Tal Yarkoni for a shout-out over at [citation needed].

The importance of trust and accountability in effective human subjects protection

I had a good discussion with a friend about the “excused from prior review” category that would replace “exempt” in the proposed human subjects rule changes.

Under the current system, a limited number of activities qualify as exempt from review, but researchers are not supposed to make that determination themselves. The argument is that incentives and biases might distort the researcher’s own judgment. Instead, an administrator is supposed to determine whether a research plan qualifies as exempt. This leads to a Catch-22 where protocols must be reviewed to determine that they are exempt from review. (Administrators have some leeway in creating the exemption application, but my human subjects office requires almost as much paperwork for an exempt protocol as for a fully reviewed protocol.)

The new system would allow investigators to self-declare as “excused” (the new name for exempt), subject to random auditing. The details need to be worked out, but the hope is that this would greatly facilitate the work of people doing straightforward behavioral research. My friend raised a very legitimate concern about whether investigators can make that decision impartially. We’re both psychologists who are well aware of the motivated cognition literature. Additionally, she cited her experience on an IRB where people have tried to slip clearly non-exempt protocols into the exempt category.

I don’t doubt that people submit crazy stuff for exempt review, but I think you have to look at the context. A lot of it may be strategic navigation of bureaucracy. Or, if you will, a rational response to distorted incentives. Right now, investigators are not held responsible for making a consequential decision at the submission and review stage. Instead, all of the incentives push investigators to lobby for the lowest possible level of review. A lower level of review means that your study can get started a lot faster, and depending on your institution it may mean less paperwork. (Not for me, though.) If an application gets bumped up to expedited or full review, there is often no downside for the investigator — it just gets passed on to the IRB, often on the same timeline as if it had been initially submitted for expedited or full review anyway.

In short, at the submission stage the current system asks investigators to describe their protocol honestly — and I would infer that they must be disclosing enough relevant information if their non-exempt submissions are getting bumped up to expedited or full review. But the system neither trusts them nor holds them accountable for making ethics-based decisions about the information that they have honestly disclosed.

Under the proposed new system, if an investigator says that a study is excused, they will just file a brief form describing the study’s procedures and then go. Nobody is looking over an investigator’s shoulder before they start running subjects. Yes, that does open up room for rationalizations. (“Vaginal photoplethysmography is kind of like educational testing, right?”) But it also tells investigators something they have not been told before: “We expect you to make a real decision, and the buck stops with you.” Random retrospective auditing would add accountability, especially if repeated or egregious violations come with serious sanctions. (“You listed this as educational testing and you’ve been doing what? Um, please step outside your lab while we change the locks.”)

So if you believe that investigators are subject to the effects of incentives and motivated cognition — and I do — your choice is to either change the incentive structure, or take control out of their hands and put it in a regulatory system that has its own distorted incentives and biases. I do see both sides, and my friend might still disagree with me — but my money is on changing the motivational landscape for investigators. Trust but verify.

Finally, the new system would generate something we don’t have right now: data. Currently, there is a paucity of data showing which aspects of the IRB system, if any, actually achieve its goals of protecting human subjects. It’s merely an article of faith — backed by a couple of rare and misrepresented examples — that the current system needs to work the way it does. How many person-hours have been spent by investigators writing up exempt proposals (and like I said, at my institution it’s practically a full protocol)? How many hours have been spent by administrators reading “I’d like to administer the BFI” for the zillionth time, instead of monitoring safety on higher-risk studies? And all with no data showing that any of it is necessary? The ANPRM explicitly states that institutions can and should use the data generated by random audits to evaluate the effectiveness of the new policy and make adjustments accordingly. So if the policy gets instituted, we’ll actually know how it’s working and be able to make corrections if necessary.

What the proposed human subjects rules would mean for social and behavioral researchers

The federal government is proposing a massive overhaul of the rules governing human subjects research and IRBs (previously mentioned here). The proposed rule changes were just announced by the Department of Health and Human Services. They are outlined in a document called an “advance notice of proposed rulemaking,” or ANPRM. (See also this overview in the NEJM by Ezekiel Emanuel and Jerry Menikoff.)

Reading the full ANPRM is a slog, in part because the document keeps cross-referencing stuff it hasn’t talked about yet. But if you do human subjects research in the United States, you owe it to yourself to read it over carefully. And if you are so moved, you can go comment on it at Regulations.gov until the public comment period ends on September 26. (You can comment on any aspect of it. The document contains 74 questions on which they are soliciting input, giving the impression that they will be particularly responsive to comments on those points.)

Proposed changes

Based on a first read-through, here is my understanding of proposed changes that will be most consequential for social and behavioral researchers. (Caveats: This isn’t everything, I’ve simplified a lot, and it’s quite possible that I’ve misunderstood some stuff. But hopefully not too much.)

“Informational risks” would no longer be reviewed by IRBs. IRBs would no longer evaluate or regulate so-called “informational risks” (risks associated with confidentiality and the like). The arguments are that IRBs rarely have the expertise to do this right, informational risks have changed with developments like network technology and genetic testing, and IRBs’ time is better spent focusing on physical and psychological risks. Instead of putting informational risks under IRB oversight, all researchers would be governed by a uniform set of data security regulations modeled on HIPAA (see below).

“Exempt” would become “excused” — as in, excused from review. This is a big one. Among other things, all educational tests, interviews, surveys, and similar procedures with competent adults would now be called “Excused.” And because informational risk is being separated out, the new rules would drop the qualifications related to identifiability — meaning that even surveys/interviews where you collect identifiable information would be excused. The excused category would also be enlarged to include other minimal-risk activities (such as watching videos, solving puzzles, etc.). For studies in the new excused category there would be no prior review by an administrator or an IRB member. Instead, the researcher would file a very brief form with the human subjects office saying what they are going to do. Then, as soon as the paperwork is filed, they could go ahead and start collecting data. No waiting for anybody’s approval. A random sampling of these forms would occasionally be audited to make sure the excused categories are being applied correctly.

Paperwork for expedited studies would be streamlined. Currently, you have to fill out a full protocol for an expedited study. That would change: expedited review would involve shorter forms than full review.

Continuing review would be eliminated for almost all minimal-risk studies (and for certain activities in more-than-minimal-risk studies, like data analysis). No more annual forms saying “can I please run some more t-tests?”

Updated consent procedures. Consent comes up in a few different places in the ANPRM. For Excused studies, “Oral consent without written documentation would continue to be acceptable for many research studies involving educational tests, surveys, focus groups, interviews, and similar procedures.” (“Continue to be acceptable?” I’ve routinely been asked to get written consent for self-report studies.) The ANPRM also proposes a variety of ways to standardize and improve consent forms, for example by restricting how long the forms could be and what could be in them.

Simplification of multi-site studies. Domestic multi-site studies would have one and only one IRB of record. Review by each institution’s IRB would no longer be necessary (or even permitted).

Existing data could be used for new research only with prior consent. I’m not entirely clear on where they are drawing the line on calling something new research on existing data. (Is it a new investigator? A new research question? Does it depend on how the research was described in the original consent form?) And this intersects with the HIPAA stuff (see below). But the general idea, as I understand it, is that at the time data is collected, researchers would have to ask subjects whether their data could be used for other studies in the future (beyond the present study). Subjects would have to say “yes” for the data to be re-used in future studies. Data that was not originally collected for research purposes would not have this requirement, but only if it is fully de-identified. (But all existing datasets collected prior to the new rules would be grandfathered in.)

Data security rules would be based on the HIPAA Privacy Rule. This is one that I’m still trying to sort through. I don’t know much about HIPAA except that people in biomedicine seem to roll their eyes and sigh when it comes up. It also vaguely stinks of over-extension of biomedical standards into social and behavioral research — the same rules would apply regardless of the content of the data. As I understand it, datasets would fall into three categories of identifiability. Identifiable datasets are those that contain direct identifiers like names or images of faces. Limited datasets are those from which direct identifiers have been removed but which still contain data that might make it possible, alone or in combination, to re-identify people (e.g., a ZIP code might be used with other information to figure out who somebody is). De-identified datasets have neither names nor any of a list of 18 pieces of information that are semi-identifiable. Regulations governing how the data must be protected, who may have access to it, audit trails, etc. would be similar to HIPAA. All of this would be outside of IRB control — it would be required of all investigators regardless of level of review. I know that sounds vague; like I said, I’m still figuring this one out (and frankly, the ANPRM isn’t very specific).

My first reactions

Overall, I think this sounds like mostly good news for social and behavioral researchers, if it actually happens. It’s possible that after the public comment period they’ll drop some of these changes or do something completely different.

I’d ideally like to see them recognize that certain research activities are protected speech and therefore should be outside of all federally mandated regulation. At the very least, universities have had to figure out whether to apply the Common Rule to activities like journalism, folklore, and oral history research, and it would be nice to clear that up. (I’d advocate for a broader interpretation where interviews and surveys are considered protected speech regardless of who’s doing them. “Do you approve of how the President is doing his job?” is the same question whether it’s being asked by a journalist or a political scientist. But I’m not holding my breath for that.)

The HIPAA stuff makes me a little nervous. It appears that they are going to require the same level of security for a subject’s response to “Are you an outgoing person?” as for the results of an STD test. There also does not seem to be any provision for research where you tell subjects up front that you are not offering or guaranteeing confidentiality. For example, it’s pretty common in social/personality psych to videotape people in order to code and analyze their behavior, and later in another study use the videotapes as stimuli to measure other people’s impressions of their behavior. This is done with advance permission (I use a special consent form that asks if we can use videotapes as stimuli in future studies). Under the new rules, a videotape where you can see somebody’s face would be considered fully identifiable and subject to the most stringent level of control. Even just giving your own undergraduate RAs access to code the videotapes might require a mountain of security. Showing it to new subjects in a new study might be impossible.

So I do have some concerns, especially about applying a medical model of data security to research that has low or minimal informational risks. But overall, my first reading of the proposed changes sounds like a lot of steps in the right direction.

Proposed federal IRB rule changes open for public comment

Yesterday when I posted about IRB regulation and free speech, I had no idea that the NY Times was running a story about new IRB rule changes possibly in the works. (I’m too cheap to pony up for the Times, and too guilt-prone to circumvent the paywall.)

It sounds like someone’s been listening to behavioral scientists. From the proposed changes in the Federal Register (found via a commenter on scatterplot):

Questions have been raised about the appropriateness of the review process for social and behavioral research.\15\ \16\ \17\ \18\ The nature of the possible risks to subjects is often significantly different in many social and behavioral research studies as compared to biomedical research, and critics contend that the difference is not adequately reflected in the current rules. While physical risks generally are the greatest concern in biomedical research, social and behavioral studies rarely pose physical risk but may pose psychological or informational risks. Some have argued that, particularly given the paucity of information suggesting significant risks to subjects in certain types of survey and interview-based research, the current system over-regulates such research.\19\ \20\ \21\ Further, many critics see little evidence that most IRB review of social and behavioral research effectively does much to protect research subjects from psychological or informational risks.\22\ Over-regulating social and behavioral research in general may serve to distract attention from attempts to identify those social and behavioral research studies that do pose threats to the welfare of subjects and thus do merit significant oversight.

There are lots of other proposed changes, including streamlining review of multi-site studies and updates to data security regulations. It’s quite a long document — I’m just starting to wade through the proposed rule changes myself. The proposed changes are open for public comment through September 26. Read them over and then submit your comments.

When research is speech, should it be regulated?

Consider a study that has the following characteristics:

1. The procedure will consist of the researcher telling people things and/or asking them questions, and recording their responses in some fashion.

2. The participants are all legal adults, and are not drawn from any population or setting that compromises their ability to give consent (e.g., prisoners or the severely mentally ill).

3. The participants are all free to decline to participate or to discontinue participating after they start. “Free” means that if they decline or discontinue, they will face no negative consequences (relative to if they had never been invited to participate in the first place).

4. Everything the researcher says to the participant must be true. The researcher cannot deceive people, and the researcher cannot make promises or commitments that will not be kept in good faith.

5. At the start of any interaction with participants, the researcher will identify him- or herself as such.

6. The researcher will not break any applicable laws.

If an investigator certifies that a study meets these criteria, should the government or a university scrutinize and regulate it any further?

Under the current IRB regulatory system, if a researcher wants to ask people some questions and find out their answers, the researcher has to jump through a whole lot of hoops and wait for various administrative delays. You have to complete pages and pages of paperwork, which includes submitting all of the questions you want to ask; wait a period of time (often weeks or longer) for administrators and IRB members to review the application, which will include somebody reading over the questions and deciding if you are permitted to ask them; obtain consent in writing (unless it’s waived, which only happens under narrow and unusual circumstances); and if your inquiry takes longer than a year, go back to your IRB annually to get your permission renewed.

Yet if the researcher’s interactions with participants only involve talking with them, isn’t it just speech (you know, the free kind)? There has been a movement in recent years to clarify that fields like journalism, oral history, and folklore are exempt from IRB oversight. Unfortunately, the debate is being waged on the wrong terms: whether these fields produce “generalizable knowledge,” rather than whether their activities are protected by the First Amendment.

I think it’s fair to ask whether anybody doing #1 above is engaging in protected speech, regardless of whether they are a journalist or a political scientist or anything else; and if it’s speech, whether the government and universities ought to treat it accordingly. (I added nos. 2-6 to address some common and reasonable boundaries on free speech; e.g., we regulate speech to children more carefully; we don’t protect fraud; etc.) I’ll admit there may be something I’m missing out on — some way that a researcher could harm people with questions — besides boring them to death, of course. Sadly, the standard argument for regulation rests upon comparisons to Nazi war crimes. It would be nice to hear the advocates of regulation give a serious response to the free speech and academic freedom issues.

CITI update, and broader thoughts on ethics training and behavioral research

Yesterday after I emailed the CITI program about their continuing misrepresentation of Milgram, I got an email back from them apologizing and promising a revision (deja vu?). This time, I have asked them to keep me updated on any changes.

I think this was an error of omission, not commission. But that doesn’t absolve them. CITI’s courses are required of all research personnel at my institution (and lots of others), and I assume they are being paid for their services. They need to get it right.

In my correspondence with CITI so far I have focused narrowly on Milgram. But there were other problems with the training program, some of which may be symptomatic of how the larger IRB-regulatory system views social and behavioral research.

Milgram, in more detail

The most obvious problem with CITI was that Milgram’s obedience studies were described as unethical. That’s just not defensible. In fact, Milgram’s research was replicated with IRB approval just a few years ago. Milgram may be relevant for a training course on ethics, but not as an exemplar of bad research.

One way that Milgram is relevant is because of what his research tells us about how subjects may respond to an experimenter’s authority. It would be reasonable to conclude, for example, that if a subject says they are considering quitting an experiment, researchers must be careful not to pressure the subject to stay enrolled. (Saying “the experiment requires that you continue” is pretty much out.)

Milgram also found that obedience varied as a function of a variety of factors, including the physical and psychological distance between the subject and the “experimenter.” For example, when instructions were delivered by telephone rather than in person, obedience rates were much lower. In a modern context, we might make some reasonable inferences that coercion may be less of a possibility in Internet studies than in face-to-face lab experiments. Quitting an experiment is a lot easier if it’s just a matter of closing a browser window, rather than telling a stern experimenter in a white lab coat that you want to go home.

I’d also say that Milgram’s research is relevant to interactions among research personnel. It’s a good reminder of the responsibility that PIs bear for the actions of their research assistants and other staff. This doesn’t mean that front-line personnel cannot or should not be held responsible for their actions. But it is wise to recognize that a PI is in a position of authority, and to consider whether that authority is being used wisely.

Milgram’s obedience research is also specifically relevant for IRBs and other ethics regulators, who are in a position to prevent research from being carried out. Milgram has profoundly affected the way we understand the behavior of ordinary people in Nazi Germany (and yes, for that reason CITI’s misrepresentation is especially perverse). It has been enormously influential across a wide range of basic and applied research in social psychology and other behavioral sciences: Google Scholar reports over 4000 citations of Milgram’s book Obedience to Authority and over 2000 citations of the original academic paper. How much would have been lost if a skittish IRB had read over the protocol and insisted on watering it down or rejecting it outright?

Coarsening comparisons

Beyond just Milgram though, there are other problems with the course — problems that may be emblematic of larger flaws in how the regulatory community thinks and talks about behavioral research.

Even if you accept that some of the other studies cited by CITI were unethical, a major problem is the coarsening comparison to Nazi medical experiments and the Tuskegee syphilis study. Gratuitous Nazi comparisons are so common as to be a running joke on the Internet. But in more serious terms, organizations like the ADL rightly object when public figures make inappropriate Nazi comparisons. Such comparisons do double damage: they diminish the real suffering of the Holocaust, and they overinflate whatever is being compared to it.

Let’s take what one commenter suggested is the (relatively) worst of the behavioral studies CITI cited: the Tearoom Trade study. A researcher observed men having sex with other men in a public bathroom, and in some cases he followed them home and posed as a health service interviewer to gather demographic data. Does that belong in the same category as performing transplantations without anesthesia, burning people with mustard gas, and a long list of other atrocities? Or even with knowingly denying medical treatment to men with syphilis?

Such comparisons cheapen the Holocaust and other atrocities. They also suggest grossly overblown stakes for regulating behavioral research. This mindset is by no means limited to CITI: references to the Nuremberg trials are de rigueur in almost every discussion of research ethics. In relation to what goes on in most social and behavioral studies, that’s absurd. That’s not to say that behavioral research cannot do real and deep harm (especially if we are talking about vulnerable populations). But ethics training ought to help researchers and regulators alike see the big picture and sharpen our perspective, not flatten it.

In favor of balance

Ideally, I’d like to see ethics training take on a broader scope. I’d love to see some discussion of reasonable limits on IRB reach. The CITI History and Ethics module states: “Highly motivated people tend to focus on their goals and may unintentionally overlook other implications or aspects of their work. No one can be totally objective about their own work.” In context, they are talking about why research needs to be independently reviewed. But the same sentences could apply to regulators, who may be biased in favor of regulatory review, and who may underestimate the slowing and chilling effects of regulation on researchers.

A great deal of behavioral science research consists of speech: a researcher wants to ask consenting adults some questions, or show them some pictures or videos, and then record what they say or do. Legal scholar Dale Carpenter has suggested that all IRBs should include a First Amendment expert. That’s not likely to happen. But academic freedom is central to the mission of universities. The AAUP has raised concerns about the effects of IRB regulation on academic freedom. Wouldn’t it be a good idea to make sure that ethics training for university researchers and regulators includes some basic coverage of academic freedom?

Then again, training is itself a burden. Ethics training is necessary, but if you overstuff it, you’ll just lose people’s attention. (The same can be said of consent forms, by the way.) I’d just settle for some basics with perspective. Principles of informed consent; working with vulnerable populations; evaluating risk; protecting privacy. All of these are important matters that researchers should know about – not to keep us from morphing into Nazis, but out of everyday decency and respect.

Why does an IRB need an analysis plan?

My IRB has updated its forms since the last time I submitted an application, and I just saw this section, which I think is new (emphasis added by me):

Analysis: Explain how the data will be analyzed or studied (i.e. quantitatively or qualitatively and what statistical tests you plan on using). Explain how the interpretation will address the research questions. (Attach a copy of the data collection instruments).

What statistical tests I plan on using?

My first thought was “mission creep,” but I want to keep an open mind. Are there some statistical tests that are more likely to do harm to the human subjects who provided the data? Has anybody ever been given syphilis by a chi-square test? If I do a median split, am I damaging anything more than my own credibility? (“What if there are an odd number of subjects? Are you going to have to saw a subject in half?”)

Seriously though, is there something I’m missing?

Self-selection into online or face-to-face studies

A new paper by Edward Witt, Brent Donnellan, and Matthew Orlando looks at self-selection biases in subject pools:

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version…

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d=-.26) but statistically significant. Regards more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r=-.20). What’s more, men were far more likely to volunteer later in the semester, even after controlling for average personality difference between the sexes. For example, 18 per cent of week one participants were male compared with 52 per cent in the final, 13th week.

Self-selection in subject pools is not a new topic — I’ve heard plenty of people talk about an early-participant conscientiousness effect (though I don’t know if that’s been documented or if it’s just lab-lore). But the analyses of personality differences in who takes online versus in-person studies are new, as far as I know — and they definitely add a new wrinkle.
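To get a feel for what an effect that size means in practice, here is a minimal sketch with simulated data (not the authors’ actual numbers), assuming a roughly 70/30 split between online and face-to-face volunteers and a standardized extraversion difference of d = .26:

```python
import numpy as np

rng = np.random.default_rng(42)
n_online, n_f2f = 350, 150      # hypothetical 70/30 split, roughly as described above
d = 0.26                        # reported effect size for extraversion

online = rng.normal(0.0, 1.0, n_online)   # standardized extraversion scores, online choosers
f2f = rng.normal(d, 1.0, n_f2f)           # face-to-face choosers shifted up by d SDs

# With an effect this size, a face-to-face volunteer scores above the
# median online volunteer only a little more than half the time.
prop = (f2f > np.median(online)).mean()
print(f"P(face-to-face volunteer > online median): {prop:.2f}")
```

Small, in other words, but not nothing when it accumulates across a whole semester of data collection.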

My lab’s experience has been that we get a lot more students responding to postings for online studies than face-to-face, but it seems like we sometimes get better data from the face-to-face studies. Personality measures don’t seem to be much different in quality (in terms of reliabilities, factor structures, etc.), but with experiments where we need subjects’ focused attention for some task, the data are a lot less noisy when they come from the lab. That could be part of the selection effect (altruistic students might be “better” subjects to help the researchers), though I bet a lot of it has to do with old-fashioned experimental control of the testing environment. The sketch below shows the kind of check I have in mind.
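If you wanted to check the reliability comparison in your own data, here is a minimal sketch. It assumes you have item-level responses for the same scale from an online sample and a lab sample; the simulated DataFrames below are just stand-ins for illustration, not real data:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item scores (rows = subjects, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def simulate_scale(n_subjects: int, n_items: int, rng) -> pd.DataFrame:
    """Stand-in data: items that all load on one latent trait plus noise."""
    trait = rng.normal(0, 1, size=(n_subjects, 1))
    noise = rng.normal(0, 1, size=(n_subjects, n_items))
    return pd.DataFrame(trait + noise)

rng = np.random.default_rng(0)
online_items = simulate_scale(350, 10, rng)   # hypothetical online sample
lab_items = simulate_scale(150, 10, rng)      # hypothetical lab sample

print("online alpha:", round(cronbach_alpha(online_items), 2))
print("lab alpha:   ", round(cronbach_alpha(lab_items), 2))
```

The same comparison could be run on factor structures or on attention-dependent task measures, which is where my hunch is that the lab advantage would actually show up.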

What could be done? When I was an undergrad taking intro to psych, each student was given a list of studies to participate in. All you knew were the code names of the studies and some contact information, and it was your responsibility to arrange with the experimenter to participate. It was a pain on all sides, but it was a good way to avoid these kinds of self-selection biases.

Of course, some people would argue that the use of undergraduate subject pools itself is a bigger problem. But given that they aren’t going away, this is definitely something to pay attention to.

A very encouraging reply

Who knew letter-writing could actually make a difference?

In response to the letter I sent yesterday to the CITI program, I got a prompt and very responsive reply from someone involved in running the program. She explained that the module had originally been written just for biomedical researchers. When it was adapted for social/behavioral researchers, the writers simply inserted new cases without really thinking about them. Most importantly, she said that she agreed with me and will revise the module.

Cool!

UPDATE (7/6/2011): Not cool. Despite their promises, they didn’t change a thing.

Milgram is not Tuskegee

My IRB requires me to take a course on human subjects research every couple of years. The course, offered by the Collaborative Institutional Training Initiative (CITI), mostly deals with details of federal research regulations covering human subjects research.

However the first module is titled “History and Ethics” and purports to give an overview and background of why such regulations exist. It contains several historical inaccuracies and distortions, including attempts to equate the Milgram obedience studies with Nazi medical experiments and the Tuskegee syphilis study. I just sent the following letter to the CITI co-founders in the hopes that they will correct their presentation:

* * *

Dear Dr. Braunschweiger and Ms. Hansen:

I just completed the CITI course, which is mandated by my IRB. I am writing to strongly object to the way the research of Stanley Milgram and others was presented in the “History and Ethics” module.

The module begins by stating that modern regulations “were driven by scandals in both biomedical and social/behavioral research.” It goes on to list events whose “aftermath” led to the formation of the modern IRB system. The subsection for biomedical research lists Nazi medical experiments and the PHS Tuskegee Syphilis study. The subsection for social/behavioral research lists what it calls “similar events,” including the Milgram obedience experiments, the Zimbardo/Stanford prison experiment, and several others.

The course makes no attempt to distinguish among the reasons why the various studies are relevant. They are all called “scandals,” described as “similar,” and presented in parallel. This is severely misleading.

Clearly, the Nazi experiments are morally abhorrent on their face. The Tuskegee study was also deeply unethical by modern standards and, most would argue, even by the standards of its day: it involved no informed consent, and after the discovery that penicillin was an effective treatment for syphilis, continuation of the experiment meant withholding a life-saving medical treatment.

But Milgram’s studies of obedience to authority are a much different case. His research predated the establishment of modern IRBs, but even by modern standards it was an ethical experiment, as the societal benefits from knowledge gained are a strong justification for the use of deception. Indeed, just this year a replication of Milgram’s study was published in the American Psychologist, the flagship journal of the American Psychological Association. The researcher, Jerry M. Burger of Santa Clara University, received permission from his IRB to conduct the replication. He made some adjustments to add further safeguards beyond what Milgram did — but these adjustments were only possible by knowing, in hindsight, the outcome of Milgram’s original experiments. (See: http://www.apa.org/journals/releases/amp641-1.pdf)

Thus, Tuskegee and Milgram are both relevant to modern thinking about research ethics, but for completely different reasons. Tuskegee is an example of a deeply flawed study that violated numerous ethical principles. By contrast, Milgram was an ethically sound study whose relevance to modern researchers is in the substance of its findings — to wit, that research subjects are more vulnerable than we might think to the influence of scientific and institutional authority. Yet in spite of these clear differences, the CITI course calls them all “scandals” and presents them in parallel, and alongside other ethically questionable studies, implying that they are all relevant in the same way.

(The parallelism implied with other studies on the list is problematic as well. Take for example the Stanford prison experiment. It would arguably not be approved by a modern IRB. But an important part of its modern relevance is that the researchers discontinued the study when they realized it was harming subjects — anticipating a central tenet of modern research ethics. This is in stark contrast to Tuskegee, where even after an effective treatment for syphilis was discovered, the researchers continued the study and never intervened on behalf of the subjects.)

In conclusion, I strongly urge you to revise your course. It appears that the module is trying to get across the point that biomedical research and social/behavioral research both require ethical standards and regulation — which is certainly true. But the histories, relevant issues, and ramifications are not the same. The attempt to create some sort of parallelism in the presentation (Tuskegee = Milgram? Nazis = Zimbardo?) is inaccurate and misguided, and does a disservice to the legacy of important social/behavioral research.

Sincerely,
Sanjay Srivastava

UPDATE: I got a response a day after I sent the letter. See this post: A very encouraging reply.

UPDATE 7/6/2011: Scratch that. Two years later, they haven’t changed a thing.