My IRB has updated its forms since the last time I submitted an application, and I just saw this section, which I think is new (emphasis added by me):
Analysis: Explain how the data will be analyzed or studied (i.e. quantitatively or qualitatively and what statistical tests you plan on using). Explain how the interpretation will address the research questions. (Attach a copy of the data collection instruments).
What statistical tests I plan on using?
My first thought was “mission creep,” but I want to keep an open mind. Are there some statistical tests that are more likely to do harm to the human subjects who provided the data? Has anybody ever been given syphilis by a chi-square test? If I do a median split, am I damaging anything more than my own credibility? (“What if there are an odd number of subjects? Are you going to have to saw a subject in half?”)
Seriously though, is there something I’m missing?
Maybe they’re thinking of post hoc harm? The harm done when you publish your “results” like what’s-his-name on Psychology Today?
If that’s the case, Kanazawa is actually a good argument against such a policy. He used AddHealth, a publicly available dataset. If IRBs are supposed to prevent researchers from using human subjects data to reach bad conclusions, that would pretty much kill the whole concept of publicly available data.
When I was a member of my institution’s IRB, we were tasked with considering a risk-benefit ratio, not just the risk to the participant. For a lot of research, especially out of the psychology department, there was certainly very little risk to participants, but essentially no direct benefit to the participants, either. As a consequence, PIs often invoked benefits to society or the advancement of the field as the primary benefits of the study, to improve their ratio. When I examined the experimental design and statistical plan, it was to see if such benefits were likely: answering the question, essentially, of whether the research had the potential to be something more than just a waste of the participants’ time. As a reviewer, I have, for example, requested that an author include a control group where none was originally proposed, to strengthen their ability to make causal inferences.
Probably for research that is atypical for psychologists. For example, merging datasets that might tie information to individuals in ways that weren’t originally intended for the researcher’s eyes.
Our IRB has asked a similar question for so long that I had not thought about this issue much until now. Our IRB also wants a justification of sample size (item: Provide the rationale for your sample size). The sample size question is intriguing because it leads to a potentially interesting cost-to-participant versus benefit-to-science/society calculation: is it worthwhile to plan research that is underpowered and therefore likely to yield a distorted estimate of the population effect size if it does turn up something significant with so few people? Someone might be able to make the case that research designed without a reasonable analytic plan (or based on a poor plan) is unlikely to provide much in the way of a benefit to science or society.
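A quick, hypothetical simulation (mine, not the commenter’s; the true effect size and per-group n are made-up numbers) of why an underpowered study tends to produce a distorted estimate: among the results that happen to clear p < .05, the estimated effects are inflated well beyond the truth.

```python
# Minimal sketch: with an assumed small true effect and an assumed small
# sample, the effect-size estimates that survive the significance filter
# are systematically too large.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.2          # assumed small true effect (Cohen's d)
n_per_group = 20      # assumed underpowered sample
n_sims = 10_000

significant_estimates = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        # observed standardized effect for this "publishable" result
        pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
        significant_estimates.append((treated.mean() - control.mean()) / pooled_sd)

print(f"true d = {true_d}")
print(f"power  ~ {len(significant_estimates) / n_sims:.2f}")
print(f"mean |d| among significant results ~ {np.mean(np.abs(significant_estimates)):.2f}")
```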
That said, I don’t think the local IRB is in a good position to judge these issues.
Perhaps this is to prevent the inhumane practice of warping subjects’ brains into ‘standard’ space, ironing out the wrinkles of individual difference, and further promoting a hegemony of homogeny. Or maybe they just want to live vicariously through researchers (because if you’ve never been intimate enough with math to catch a Statistically Transmitted Disease, surely there is something lacking in your life).
I agree with Jason and Brent that this relates to the cost-benefit evaluation for society. I would add that it also concerns the costs and benefits for the study’s participants. My understanding is that knowledge of the analysis plan will inform the IRB as to whether you have planned your sample size appropriately. Technically, they should then be able to determine whether you have too *few* participants to detect effects using the analysis you propose (e.g., you want to establish predictors of disease outcomes using structural equation modelling with a sample of 50), or indeed whether you have too *many* participants compared to what is really necessary (e.g., you want to compare gender differences in height using a t-test in a sample of 20,000). In the former case you will be unable to test your hypothesis adequately, meaning that you are wasting all your participants’ time. In the latter case, your gratuitous sample size means that you are collecting lots of unnecessary data, wasting the majority of your participants’ time, and depriving other researchers of these participants’ availability. On their own, the sample sizes cannot be evaluated in this way, but with knowledge of the proposed statistical analyses, the IRB should be able to draw a conclusion. This is why, strictly speaking, IRBs are supposed to have trained statisticians among their number.
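For what it’s worth, here is a rough sketch of the kind of power calculation a statistically trained IRB member might run for the two examples above. The effect sizes (Cohen’s d ≈ 1.7 for adult height by gender, d ≈ 0.2 for a small predictor effect) are my assumptions, and a plain two-sample t-test stands in for the SEM case, where power analysis is considerably more involved.

```python
# Rough sketch of an IRB-style sample-size check using statsmodels;
# the effect sizes below are illustrative assumptions, not numbers
# taken from the comment above.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Large, well-established effect (adult height by gender, assumed d ~ 1.7):
n_large = analysis.solve_power(effect_size=1.7, alpha=0.05, power=0.80)
print(f"n per group for d=1.7: {n_large:.0f}")   # single digits, so 20,000 is gratuitous

# Small effect of the kind a 50-person study might be chasing (assumed d ~ 0.2):
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(f"n per group for d=0.2: {n_small:.0f}")   # hundreds per group, so 50 total is hopeless
```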
All this is fine in theory — whether or not it really happens in practice, I wouldn’t like to speculate!
Under the U.S. law that creates IRBs, there is no requirement that an IRB include a trained statistician. In fact, only one IRB member even needs to be a scientist. The law gives a long list of areas of expertise that are supposed to be covered by its members, many of which are legal/regulatory rather than scientific. http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html#46.107