More ANPRM coverage in the blogosphere

A quick post, as I’m on vacation (sort of)… Institutional Review Blog has some absolutely terrific coverage of the proposed IRB rule changes, aka the ANPRM. The blogger, Zachary Schrag, is a historian who has made IRBs a focus of his research. In particular his Quick Guide to the ANPRM is a must-read for any social scientist considering writing public comments. All the coverage and commentary at his blog are conveniently tagged ANPRM so you can find them easily.

Also, thanks to Tal Yarkoni for a shout-out over at [citation needed].

The importance of trust and accountability in effective human subjects protection

I had a good discussion with a friend about the “excused from prior review” category that would replace “exempt” in the proposed human subjects rule changes.

Under the current system, a limited number of activities qualify as exempt from review, but researchers are not supposed to make that determination themselves. The argument is that incentives and biases might distort the researcher’s own judgment. Instead, an administrator is supposed to determine whether a research plan qualifies as exempt. This leads to a Catch-22 where protocols must be reviewed to determine that they are exempt from review. (Administrators have some leeway in creating the exemption application, but my human subjects office requires almost as much paperwork for an exempt protocol as for a fully reviewed protocol.)

The new system would allow investigators to self-declare as “excused” (the new name for exempt), subject to random auditing. The details still need to be worked out, but the hope is that this would greatly facilitate the work of people doing straightforward behavioral research. My friend raised a very legitimate concern about whether investigators can make that decision impartially. We’re both psychologists who are well aware of the motivated cognition literature. She also cited her experience on an IRB where people have tried to slip clearly non-exempt protocols into the exempt category.

I don’t doubt that people submit crazy stuff for exempt review, but I think you have to look at the context. A lot of it may be strategic navigation of bureaucracy. Or, if you will, a rational response to distorted incentives. Right now, investigators are not held responsible for making a consequential decision at the submission and review stage. Instead, all of the incentives push investigators to lobby for the lowest possible level of review. A lower level of review means that your study can get started a lot faster, and depending on your institution it may mean less paperwork. (Not for me, though.) If an application gets bumped up to expedited or full review, there is often no downside for the investigator — it just gets passed on to the IRB, often on the same timeline as if it had been initially submitted for expedited or full review anyway.

In short, at the submission stage the current system asks investigators to describe their protocol honestly — and I would infer that they must be disclosing enough relevant information if their non-exempt submissions are getting bumped up to expedited or full review. But the system neither trusts them nor holds them accountable for making ethics-based decisions about the information that they have honestly disclosed.

Under the proposed new system, if an investigator says that a study is excused, they will just file a brief form describing the study’s procedures and then go. Nobody is looking over an investigator’s shoulder before they start running subjects. Yes, that does open up room for rationalizations. (“Vaginal photoplethysmography is kind of like educational testing, right?”) But it also tells investigators something they have not been told before: “We expect you to make a real decision, and the buck stops with you.” Random retrospective auditing would add accountability, especially if repeated or egregious violations come with serious sanctions. (“You listed this as educational testing and you’ve been doing what? Um, please step outside your lab while we change the locks.”)

So if you believe that investigators are subject to the effects of incentives and motivated cognition — and I do — your choice is either to change the incentive structure or to take control out of their hands and put it in a regulatory system that has its own distorted incentives and biases. I do see both sides, and my friend might still disagree with me — but my money is on changing the motivational landscape for investigators. Trust but verify.

Finally, the new system would generate something we don’t have right now: data. Currently, there is a paucity of data showing which aspects of the IRB system, if any, actually achieve its goal of protecting human subjects. It’s merely an article of faith — backed by a few rare and misrepresented examples — that the current system needs to work the way it does. How many person-hours have been spent by investigators writing up exempt proposals (and like I said, at my institution that’s practically a full protocol)? How many hours have been spent by administrators reading “I’d like to administer the BFI” for the zillionth time, instead of monitoring safety on higher-risk studies? And all with no data showing that any of it is necessary. The ANPRM explicitly states that institutions can and should use the data generated by random audits to evaluate the effectiveness of the new policy and make adjustments accordingly. So if the policy gets instituted, we’ll actually know how it’s working and be able to make corrections if necessary.

What the proposed human subjects rules would mean for social and behavioral researchers

The federal government is proposing a massive overhaul of the rules governing human subjects research and IRBs (previously mentioned here). The proposed rule changes were just announced by the Department of Health and Human Services. They are outlined in a document called an “advance notice of proposed rulemaking,” or ANPRM. (See also this overview in the NEJM by Ezekiel Emanuel and Jerry Menikoff.)

Reading the full ANPRM is a slog, in part because the document keeps cross-referencing stuff it hasn’t talked about yet. But if you do human subjects research in the United States, you owe it to yourself to read it over carefully. And if you are so moved, you can go comment on it at Regulations.gov until the public comment period ends on September 26. (You can comment on any aspect of it. The document contains 74 questions on which they are soliciting input, giving the impression that they will be particularly responsive to comments on those points.)

Proposed changes

Based on a first read-through, here is my understanding of proposed changes that will be most consequential for social and behavioral researchers. (Caveats: This isn’t everything, I’ve simplified a lot, and it’s quite possible that I’ve misunderstood some stuff. But hopefully not too much.)

“Informational risks” would no longer be reviewed by IRBs. IRBs would no longer evaluate or regulate so-called “informational risks” (risks associated with confidentiality and the like). The arguments are that IRBs rarely have the expertise to do this right, that informational risks have changed with developments like network technology and genetic testing, and that IRBs’ time is better spent focusing on physical and psychological risks. Instead of putting informational risks under IRB oversight, all researchers would be governed by a uniform set of data security regulations modeled on HIPAA (see below).

“Exempt” would be changed to “excused” — as in, excused from review. This is a big one. Among other things, all educational tests, interviews, surveys, and similar procedures with competent adults would now be called “excused.” And because informational risk is being separated out, the new rules would drop the qualifications related to identifiability — meaning that even surveys and interviews where you collect identifiable information would be excused. The excused category would also be enlarged to include other minimal-risk activities (such as watching videos, solving puzzles, etc.). For studies in the new excused category there would be no prior review by an administrator or an IRB member. Instead, you would file a very brief form with your human subjects office saying what you are going to do. And then, as soon as the paperwork is filed, you go ahead and start collecting data. No waiting for anybody’s approval. A random sample of these forms would occasionally be audited to make sure the excused categories are being applied correctly.

Paperwork for expedited studies would be streamlined. Currently, you have to fill out a full protocol for an expedited study. That would change: expedited review would involve shorter forms than full review.

Continuing review would be eliminated for almost all minimal-risk studies (and for certain activities in more-than-minimal-risk studies, like data analysis). No more annual forms saying “can I please run some more t-tests?”

Updated consent procedures. Consent comes up in a few different places in the ANPRM. For excused studies, “Oral consent without written documentation would continue to be acceptable for many research studies involving educational tests, surveys, focus groups, interviews, and similar procedures.” (“Continue to be acceptable?” I’ve routinely been asked to get written consent for self-report studies.) The ANPRM also proposes a variety of ways to standardize and improve consent forms, for example by restricting how long the forms could be and what could be in them.

Simplification of multi-site studies. Domestic multi-site studies would have one and only one IRB of record. Review by each institution’s IRB would no longer be necessary (or even permitted).

Existing data could be used for new research only with prior consent. I’m not entirely clear on where they are drawing the line on calling something new research on existing data. (Is it a new investigator? A new research question? Does it depend on how the research was described in the original consent form?) And this intersects with the HIPAA stuff (see below). But the general idea, as I understand it, is that at the time data is collected, researchers would have to ask subjects whether their data could be used for other studies in the future (beyond the present study). Subjects would have to say “yes” for the data to be re-used in future studies. Data that was not originally collected for research purposes would not have this requirement, but only if it is fully de-identified. (And all existing datasets collected prior to the new rules would be grandfathered in.)
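If I’m reading it right, the decision rule would look something like the sketch below. (This is entirely my own paraphrase, written out in Python; the function and parameter names are invented for illustration and appear nowhere in the ANPRM.)

```python
def reuse_allowed(collected_for_research: bool,
                  consented_to_future_use: bool,
                  fully_deidentified: bool,
                  collected_before_new_rules: bool) -> bool:
    """Could an existing dataset be used for a new study? (My reading only.)"""
    if collected_before_new_rules:
        # Datasets collected before the new rules would be grandfathered in.
        return True
    if collected_for_research:
        # Subjects must have said "yes" to future use at collection time.
        return consented_to_future_use
    # Data not originally collected for research: OK only if de-identified.
    return fully_deidentified
```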

Data security rules would be based on the HIPAA Privacy Rule. This is one that I’m still trying to sort through. I don’t know much about HIPAA except that people in biomedicine seem to roll their eyes and sigh when it comes up. It also vaguely stinks of over-extension of biomedical standards into social and behavioral research — the same rules would apply regardless of the content of the data. As I understand it, datasets would fall into three categories of identifiability. Identifiable datasets are those that contain direct identifiers like names or images of faces. Limited datasets are those from which direct identifiers have been removed but which still contain data that might make it possible, alone or in combination, to re-identify people (e.g., a ZIP code might be used with other information to figure out who somebody is). De-identified datasets have neither direct identifiers nor any of a list of 18 pieces of semi-identifiable information. Regulations governing how the data must be protected, who may have access to it, audit trails, etc. would be similar to HIPAA’s. All of this would be outside of IRB control — it would be required of all investigators regardless of level of review. I know that sounds vague; like I said, I’m still figuring this one out (and frankly, the ANPRM isn’t very specific).
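To make the three tiers concrete, here is a minimal sketch of the classification as I currently understand it. (Again, my own illustration; the identifier lists below are abbreviated stand-ins, not the actual regulatory list of 18 identifiers.)

```python
# Abbreviated stand-ins for illustration only -- not the regulatory lists.
DIRECT_IDENTIFIERS = {"name", "face_image", "ssn", "email"}
QUASI_IDENTIFIERS = {"zip_code", "birth_date", "admission_date"}

def classify_dataset(columns):
    """Return the identifiability tier for a dataset, given its column names."""
    cols = set(columns)
    if cols & DIRECT_IDENTIFIERS:
        return "identifiable"    # direct identifiers present: strictest rules
    if cols & QUASI_IDENTIFIERS:
        return "limited"         # re-identification possible in combination
    return "de-identified"       # neither direct nor quasi-identifiers

# A survey keyed only by ZIP code would land in the "limited" tier:
print(classify_dataset(["zip_code", "extraversion_score"]))  # -> limited
```

The point of the sketch is just that the tier, and hence the required level of security, would be determined entirely by what the data could reveal about who somebody is, not by what the study is about.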

My first reactions

Overall I think this sounds like mostly good news for social and behavioral researchers, if this actually happens. It’s possible that after the public comment period they’ll drop some of these changes or do something completely different.

I’d ideally like to see them recognize that certain research activities are protected speech and therefore should be outside of all federally mandated regulation. At the very least, universities have had to figure out whether to apply the Common Rule to activities like journalism, folklore and oral history research, etc. It would be nice to clear that up. (I’d advocate for a broader interpretation where interviews and surveys are considered protected speech regardless of who’s doing them. “Do you approve of how the President is doing his job?” is the same question whether it’s being asked by a journalist or a political scientist. But I’m not holding my breath for that.)

The HIPAA stuff makes me a little nervous. It appears that they are going to require the same level of security for a subject’s response to “Are you an outgoing person?” as for the results of an STD test. There also does not seem to be any provision for research where you tell subjects up front that you are not offering or guaranteeing confidentiality. For example, it’s pretty common in social/personality psych to videotape people in order to code and analyze their behavior, and then later, in another study, to use the videotapes as stimuli to measure other people’s impressions of their behavior. This is done with advance permission (I use a special consent form that asks if we can use videotapes as stimuli in future studies). Under the new rules, a videotape where you can see somebody’s face would be considered fully identifiable and subject to the most stringent level of control. Even just giving your own undergraduate RAs access to code the videotapes might require a mountain of security. Showing it to new subjects in a new study might be impossible.

So I do have some concerns, especially about applying a medical model of data security to research that carries low or minimal informational risk. But overall, on a first reading, the proposed changes look like a lot of steps in the right direction.