I had a good discussion with a friend about the “excused from prior review” category that would replace “exempt” in the proposed human subjects rule changes.
Under the current system, a limited number of activities qualify as exempt from review, but researchers are not supposed to make that determination themselves. The argument is that incentives and biases might distort the researcher’s own judgment. Instead, an administrator is supposed to determine whether a research plan qualifies as exempt. This leads to a Catch-22 where protocols must be reviewed to determine that they are exempt from review. (Administrators have some leeway in creating the exemption application, but my human subjects office requires almost as much paperwork for an exempt protocol as for a fully reviewed protocol.)
The new system would allow investigators to self-declare as “excused” (the new name for exempt), subject to random auditing. The details need to be worked out, but the hope is that this would greatly facilitate the work of people doing straightforward behavioral research. My friend raised a legitimate concern about whether investigators can make that decision impartially. We’re both psychologists who are well aware of the motivated cognition literature. She also cited her experience on an IRB where people have tried to slip clearly non-exempt protocols into the exempt category.
I don’t doubt that people submit crazy stuff for exempt review, but I think you have to look at the context. A lot of it may be strategic navigation of bureaucracy. Or, if you will, a rational response to distorted incentives. Right now, investigators are not held responsible for making a consequential decision at the submission and review stage. Instead, all of the incentives push investigators to lobby for the lowest possible level of review. It means that your study can get started a lot faster, and depending on your institution it may mean less paperwork. (Not for me, though.) If an application gets bumped up to expedited or full review, there is often no downside for the investigator: it just gets passed on to the IRB, often on the same timeline as if it had been initially submitted for expedited or full review anyway.
In short, at the submission stage the current system asks investigators to describe their protocol honestly — and I would infer that they must be disclosing enough relevant information if their non-exempt submissions are getting bumped up to expedited or full review. But the system neither trusts them nor holds them accountable for making ethics-based decisions about the information that they have honestly disclosed.
Under the proposed new system, if an investigator says that a study is excused, they will just file a brief form describing the study’s procedures and then go. Nobody is looking over an investigator’s shoulder before they start running subjects. Yes, that does open up room for rationalizations. (“Vaginal photoplethysmography is kind of like educational testing, right?”) But it also tells investigators something they have not been told before: “We expect you to make a real decision, and the buck stops with you.” Random retrospective auditing would add accountability, especially if repeated or egregious violations come with serious sanctions. (“You listed this as educational testing and you’ve been doing what? Um, please step outside your lab while we change the locks.”)
So if you believe that investigators are subject to the effects of incentives and motivated cognition — and I do — your choice is to either change the incentive structure, or take control out of their hands and put it in a regulatory system that has its own distorted incentives and biases. I do see both sides, and my friend might still disagree with me — but my money is on changing the motivational landscape for investigators. Trust but verify.
Finally, the new system would generate something we don’t have right now: data. Currently, there is a paucity of evidence showing which aspects of the IRB system, if any, actually achieve its goal of protecting human subjects. It’s merely an article of faith, backed by a handful of rare and frequently misrepresented examples, that the current system needs to work the way it does. How many person-hours have been spent by investigators writing up exempt proposals (and like I said, at my institution it’s practically a full protocol)? How many hours have been spent by administrators reading “I’d like to administer the BFI” for the zillionth time, instead of monitoring safety on higher-risk studies? And all with no data showing that any of this is necessary? The ANPRM explicitly states that institutions can and should use the data generated by random audits to evaluate the effectiveness of the new policy and make adjustments accordingly. So if the policy gets instituted, we’ll actually know how it’s working and be able to make corrections if necessary.