Do review sheets help?

A lot of what I do as a college instructor draws upon the accumulated wisdom and practice of my profession, plus my personal experience. I accumulate ideas and strategies from mentors and colleagues, I read about pedagogy, I try to get a feel for what works and what doesn’t in my classes, and I ask my students what is working for them. That’s what I suspect that most of us do, and probably it works pretty well.

But as stats guru and blogger Andrew Gelman pointed out not too long ago, we don’t often formally test which of our practices work. Hopefully the accumulated wisdom is valid — but if you’re a social scientist, your training might make you want something stronger than that. In that spirit, recently I ran a few numbers on a pedagogical practice that I’ve always wondered about. Do review sheets help students prepare for tests?


When I first started teaching undergrad courses, I did not make review sheets for my students. I didn’t think they were particularly useful. I decided that I would rather focus my time and energy on doing things for my students that I believed would actually help them learn.

Why didn’t I think a review sheet would be useful? There are 2 ways to make a review sheet for an exam. Method #1 involves listing the important topics, terms, concepts, etc. that students should study. The review sheet isn’t something you study on its own — it’s like a guide or checklist that tells you what to study. That seemed questionable to me. It’s essentially an outline of the lectures and textbook — pull out the headings, stick in the boldface terms, and voila! Review sheet. If anything, I thought, students are better off doing that themselves. (Many resources on study skills tell students to scan and outline before they start reading.) In fact, the first time I taught my big Intro course, I put the students into groups and had them make their own review sheets. Students were not enthusiastic about that.

Method #2 involves making a document that actually contains studyable information on its own. That makes sense in a course where there are a few critical nuggets of knowledge that everybody should know — like maybe some key formulas in a math class that students need to memorize. But that doesn’t really apply to most of the courses I teach, where students need to broadly understand the lectures and readings, make connections, apply concepts, etc. (As a result, this analysis doesn’t really apply to courses that use that kind of approach.)

So in my early days of teaching, I gave out no review sheets. But boy, did I get protests. My students really, really wanted a review sheet. So a couple years ago I finally started making list-of-topics review sheets and passing them out before exams. I got a lot of positive feedback — students told me that they really helped.

Generally speaking, I trust students to tell me what works for them. But in this case, I’ve held on to some nagging doubts. So recently I decided to collect a little data. It’s not a randomized experiment, but even some correlational data might be informative.


In Blackboard, the course website management system we use at my school, you can turn on tracking for items that you post. Students have to be logged in to the Blackboard system to access the course website, and if you turn on tracking, it’ll tell you when (if ever) each student clicked on a particular item. So for my latest midterm, the second one of the term, I decided to turn on tracking for the review sheet so that I could find out who downloaded it. Then I linked that data to the test scores.

I posted the review sheet on a Monday, 1 week before the exam. The major distinction I drew was between people who downloaded the sheet and those who never did. But I also tracked when students downloaded it. There were optional review sessions on Thursday and Friday. Students were told that if they came to the review session, they should come prepared. (It was a Jeopardy-style quiz.) So I divided students into several subgroups: those who first downloaded the sheet early in the week (before the review sessions), those who downloaded it on Thursday or Friday, and those who waited until the weekend before they downloaded it. I have no record of who actually attended the review sessions.

A quick caveat: It is possible that a few students could’ve gotten the review sheet some other way, like by having a friend in the class print it for them. But it’s probably reasonable to assume that wasn’t widespread. More plausible is that some people might have downloaded the review sheet but never really used it, which I have no way of knowing about.


Okay, so what did I find? First, out of N=327 students, 225 downloaded the review sheet at some point. Most of them (173) waited until the last minute and didn’t download it until the weekend before the exam. 17 downloaded it Thursday-Friday, and 35 downloaded it early in the week. So apparently most students thought the review sheet might help.

Did students who downloaded the review sheet do any better? Nope. Zip, zilch, nada. The correlation between getting the review sheet and exam scores was virtually nil, r = -.04, p = .42. Here’s a plot, further broken down into the subgroups:

[Plot: exam scores for students who did and didn’t download the review sheet, broken down by download timing]
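For anyone who wants to run the same check on their own gradebook: with download status coded 0/1, the correlation above is just Pearson’s r computed against a binary variable (a point-biserial correlation). Here’s a minimal sketch with made-up scores and download flags — invented numbers, not my class data:

```python
import numpy as np

# Hypothetical gradebook: 327 exam scores and 0/1 download flags.
# These numbers are invented for illustration, not the real class data.
rng = np.random.default_rng(2)
scores = rng.normal(75, 8, 327)                           # exam scores
downloaded = (rng.random(327) < 225 / 327).astype(float)  # ~69% downloaded

# With a 0/1 variable, Pearson's r is the point-biserial correlation
r = np.corrcoef(downloaded, scores)[0, 1]
print(f"r = {r:.2f}")  # near zero here, since these fake variables are unrelated
```

(If you also want the p-value, scipy.stats.pearsonr returns both in one call.)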

This correlational analysis has potential confounds. Students were not randomly assigned — they decided for themselves whether to download the review sheet. So those who downloaded it might have been systematically different from those who did not; and if they differed in some way that would affect their performance on the second midterm, that could’ve confounded the results. In particular, perhaps the students who were already doing well in the class didn’t bother to download the review sheet, but the students who were doing more poorly downloaded it, and the review sheet helped them close the gap. If that happened, you’d observe a zero correlation. (Psychometricians call this a suppressor effect.)
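To make that suppressor scenario concrete, here’s a toy simulation — all numbers invented and deliberately tuned so the two forces roughly cancel, not my class data. Weaker students are more likely to download the sheet, and the sheet genuinely boosts their scores, so the raw correlation comes out near zero; controlling for the first midterm recovers the real effect:

```python
import numpy as np

# Toy suppressor scenario (all numbers invented, not the real class data):
# weaker students download the sheet more often, AND the sheet helps.
rng = np.random.default_rng(0)
n = 5000

midterm1 = rng.normal(70, 10, n)
# Probability of downloading falls as the midterm-1 score rises
p_download = 1 / (1 + np.exp((midterm1 - 70) / 5))
downloaded = (rng.random(n) < p_download).astype(float)
# The sheet adds 7 points, but downloaders started out weaker
midterm2 = 0.6 * midterm1 + 7 * downloaded + rng.normal(0, 5, n)

# Raw correlation: the selection effect and the benefit cancel out
raw_r = np.corrcoef(downloaded, midterm2)[0, 1]
print(f"raw r = {raw_r:.3f}")  # near zero despite a real benefit

# Regression controlling for midterm 1 recovers the benefit
X = np.column_stack([np.ones(n), midterm1, downloaded])
beta = np.linalg.lstsq(X, midterm2, rcond=None)[0]
print(f"effect controlling for midterm 1: {beta[2]:.1f} points")  # ~7
```

In my actual data, of course, the coefficient stayed at zero even after the control — which is exactly what speaks against this scenario.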

So to address that possibility, I ran a regression in which I controlled for scores on the first midterm. The simple correlation asks: did students who downloaded the review sheet do better than students who didn’t? The regression asks: did students who downloaded the review sheet do better than students who performed just as well on the first midterm but didn’t download the sheet? If there was a suppressor effect, controlling for prior performance should reveal the effect of the review sheet.

But that isn’t what happened. The two midterms were pretty strongly correlated, r = .63. But controlling for prior performance made no difference — the review sheet still had no effect. The standardized beta was .00, p = .90. Here’s a plot to illustrate the regression: this time, the y-axis is the residual (the difference between somebody’s actual score and the score we would have expected them to get based on the first midterm):

[Plot: residual exam scores (actual minus predicted from midterm 1) by review-sheet download group]

Limitations

This was not a highly controlled study. As I mentioned earlier, I have no way of knowing whether students who downloaded the review sheet actually used it. I also don’t know who used a review sheet for the first midterm, the one that I controlled for. (I didn’t think to turn on tracking at the start of the term.) And there could be other factors I didn’t account for.

A better way to do this would be to run a true experiment. If I were going to do this right, I’d go into a class where the instructor isn’t planning to give out review sheets, tell students that if they enroll in the experiment they’ll be randomly assigned to get different materials to help them prepare for the test, and then give a random half of them a review sheet and tell them to use it. For both ethical and practical reasons, I’d probably want to tell everybody in advance that I’ll adjust scores so that if there is an effect, students who didn’t get the sheet (either because they were in the control group or because they chose not to participate) won’t be at a disadvantage. I’d also have to be careful about what I tell them regarding the experiment, balancing informed consent against creating demand characteristics. But it could probably be done.


In spite of these issues, I think this data is strongly suggestive. The most obvious confounding factor was prior performance, which I was able to control for. If some of the students who downloaded the review sheet didn’t use it, that would attenuate the difference, but it shouldn’t make it go away entirely. To me, the most plausible explanation left standing is that review sheets don’t make a difference.

If that’s true, why do students ask for review sheets and why do they think that they help? As a student, you only have a limited capacity to gauge what really makes a difference for you — because on any given test, you will never know how well you would have done if you had studied differently. (By “limited capacity,” I don’t mean that students are dumb — I mean that there’s a fundamental barrier.) So a lot of what students do is rely on feelings. Do I feel comfortable with the material? Do I feel like I know it? Do I feel ready for the exam? And I suspect that review sheets offer students an illusory feeling of control and mastery. “Okay, I’ve got this thing that’s gonna help me. I feel better already.” So students become convinced that they make a difference, and then they insist on them.

I also suspect, by the way, that lots of other things work that way. To date, I have steadfastly refused to give out my lecture slides before the lecture. Taking notes in your own words (not rote) requires you to be intellectually engaged with the material. Following along on a printout might feel more relaxed, but I doubt it’s better for learning. Maybe I’ll test that one next time…

Students, fellow teachers, and anybody else: I’d welcome your thoughts and feedback, both pro and con, in the comments section. Thanks!

5 thoughts on “Do review sheets help?”

  1. Hey Sanjay,
    Interesting study. I agree that students’ comfort is not always conducive to their learning. I tend to give “review” sheets with just a series of questions, if I give them at all. I answer one of them in class, and give them the kind of answer I expect.

    When I have taught cognitive, I have specifically told students (and taught it in the memory section) about the difference between feelings of familiarity and the ability to recall. Your study leads me to an interesting follow-up experiment. Maybe if you gave a review sheet that contained almost exactly the questions on the test, it would lead those who studied it to mistake familiarity for knowing even more, and actually do worse than those who spent more time studying without a review sheet.

    I also wonder if you would get an interaction between time spent studying and the review sheet (even without a main effect of the review sheet). Having a review sheet could help you, but only if you spend a lot of time studying. In the no-review-sheet group, you would get some who studied so much that they did well regardless, and some who were so out to lunch that they hardly studied at all.

    Jane pointed me towards your blog, and I am enjoying some of your past posts as well. Can I subscribe via RSS so I can read it in my Google Reader? You don’t have a button for that on the front page.

  2. Thanks Cedar. Really interesting points.

    The issue of the “feeling of knowing” is so important in teaching and learning, and I think especially so in psychology. I spend about 10 minutes on the first day of my Intro class warning students how that feeling can be especially misleading in a psych class. In many fields, the concepts are completely unfamiliar, so the feeling of (not) knowing is at least a moderately accurate signal of how much you actually know. In psychology, though, we deal with phenomena and concepts that people are familiar with from personal experience — emotions, personality, making decisions, etc. So everything is going to feel familiar on a superficial level, even if a student’s experience and intuitions turn out to be either incorrect or not deep/systematic enough.

    Your comment about how much students studied reminds me of the efficacy/effectiveness distinction in intervention research. Efficacy estimates a causal effect under optimal conditions (manualized treatment, monitoring compliance, etc.); effectiveness measures a practical impact in realistic circumstances. What I did is a lot more like an effectiveness trial. The analysis I did should estimate how much review sheets made a difference at the average level of any omitted interactions — in this case, for a student who studied an average amount. So you’d think I would’ve gotten at least a small effect. Unless the review sheet helped students who studied a lot, and harmed students who didn’t study at all (maybe it fooled them into thinking they were prepared).

    Glad you like the blog. I added an RSS link at the top of the righthand sidebar.

  3. Very interesting read! I teach mathematics at high school, and often give out review sheets of problems that are similar to what will be on the test. Given that difference, how do you think it would affect your study? In other words, specific to mathematics education, do you think review sheets would be helpful given that they contain problems/examples of the exercises that will be on the test?

    I often get frustrated with 16-18 year olds who don’t attempt or complete their review sheet. Maybe I shouldn’t? Maybe I shouldn’t do a review sheet (and boy, oh boy, would that ruffle some feathers).

    What say you?

  4. John, I’d say that what you and I are calling “review sheets” sound like different things. Mine are just a list of topics that will be covered on the exam. Yours sound more like practice tests, which I’m guessing might be more useful.

  5. Sanjay, I agree with you wholeheartedly. This semester, I’ve decided not to use review sheets, and I’ve used them off and on since I started teaching college in ’06. In my experience, they haven’t affected anything; both the exam means and their standard deviations remained consistent across course sections. At this point, I’ve stopped looking for ways to make my courses easier for students and started focusing on teaching them the skills needed to combat the inevitable difficulties. In the long run, that’ll probably help them more, anyway.
