A student’s perspective on PowerPoint lectures

A student blogger who goes by Carolyn Blogs has an interesting entry on PowerPoint lectures from the perspective of someone taking the class:

Recently I came to the conclusion that I do not learn well from classes in which the lectures are based on PowerPoint presentations… Professors who use PowerPoint tend to present topics very quickly when they don’t have to do anything but talk. If every example and every diagram is on the screen, there isn’t much time for me to take notes on the subject of each slide. Lectures aided by chalkboard visuals are easier to take notes from because I can write what the professor writes on the board at the same time. Also, because there is usually more chalkboard space than screen space, if I am behind on note-taking, the visual will probably still be on the board for me to copy a few minutes later. A lot of professors try to solve this problem by handing out the lecture slides before class, or by posting them online. While this is great for a lot of students, it doesn’t work for me because I learn best and am most engaged if I have to take notes as if my grade depended on having a great record of the class and I would never see the material again. In classes with handouts, I tend to zone out and have to work harder to pay attention. Studies have shown[pdf] that taking high-quality notes improves organic memory: I rarely use my notes after the lecture because the act of physically writing information down helps me remember more of what goes on in class.

A few years ago I started phasing out PowerPoint from my upper-division classes (I never used it for grad classes). Carolyn hits on pretty much all the major reasons.

Teaching with PowerPoint has a different pace and structure than teaching with chalk or markers. It’s not just about overall fast vs. slow (though that’s part of it), but about when you go fast and when you go slow. When I use the board, I write down the major points, terms, definitions, etc. That forces me to slow down at exactly the moment when I’m making a big point and students should be attending closely. Once the critical information is on the board, I can elaborate, discuss with the class, ask questions, etc. while it hangs up there behind me for students to refer to. And since writing slows me down, I don’t give as much emphasis to relatively minor points — giving students an additional cue as to what’s more and less important. (“Don’t ignore this completely, but it’s not as central as what I said earlier.”) You can reproduce this kind of pacing and structure with PowerPoint, but in practice it’s difficult to do during a live performance in front of a classroom. You have to write your presentation with delivery (not just content) in mind. Otherwise it’s just too easy to blow through major and minor points at a constant pace.

Another point that she makes… I still use PowerPoint in my big introductory classes (though I make my own slides from scratch, use animation to help regulate my delivery, and try to avoid the mind-numbing bullety templates). I always have a few students ask me to post the notes before class. I don’t — I post them after class, but honestly, I have sometimes wondered if I’d be better off not posting them at all. Carolyn modestly writes “while [posting notes] is great for a lot of students, it doesn’t work for me…” but I actually think this describes most students. A lot of students misread their internal cues — if it feels like they are expending a lot of effort then they think they must be struggling with the material. Actually, though, if the professor is presenting challenging material, then you shouldn’t feel relaxed — relaxation is a sign that you’re probably thinking superficially or zoning out, not that you’ve quickly mastered the material.

I also found it impressive that Carolyn reached this conclusion on her own. Because frankly, it’s fundamentally very difficult to introspect into your own learning processes. A few years back, when I started moving away from PowerPoint, I got feedback on my student evaluations from people who wanted more PowerPoint. When I talked with students who felt that way, they thought they’d be able to focus more on the material if they didn’t have to bother taking notes. I realized that reflects a fundamental misunderstanding of what note-taking does for you. I’ve been getting less of that feedback lately — maybe because I’ve gotten better at using the board, or maybe because recent students have been around PowerPoint longer and see its limitations more clearly.

Here’s eight grand to adopt our textbook

I got the following email this morning. Note the part I’ve underlined:


Dear Introductory Psychology Professor:

[Redacted] Press was created as a faculty venture six years ago focusing solely on interactive low cost digital text packages with free printed texts. This concept has been widely accepted by faculty and students alike. The rising price of textbooks is well known to college faculty, students, and even government agencies.  Our digital textbooks offer a low cost alternative to traditional expensive textbooks.
We would like to introduce you to our Introductory Psychology low cost interactive package including:

- A $40 digital interactive text with embedded videos and audio and words with internet links — a better way for today’s students
- A free printed text called a student text supplement
- Access to a password protected website with interactive updates and materials
- _A test marketing program with stipends up to $8,000 for individual professors and up to $15,000 or more for departments_
- An online test center for each chapter of the interactive text, plus instructor’s manual
- Test bank questions to upload to any online platform such as Blackboard
- Technical and consulting support — 24/7
We invite you to take a narrated tour of [Redacted] Press before you review the interactive Introductory Psychology text. It is a brief tour of [Redacted] Press and interactive texts and will enable you to better understand the benefits of our program within minutes. You start the tour by going to: [URL redacted] (you can cut and paste this URL directly into your browser). This tour will demonstrate the interactive elements of our texts and give you an opportunity to review the [Redacted] interactive Introductory Psychology text at your leisure.

After you have taken the tour, if you email me your mailing address and the number of students in your upcoming classes, we will send you the digital text and brochure on the Introductory Psychology package and tailor a test marketing stipend program for you and even for your department.

We are confident you will see the numerous advantages of moving towards digital, interactive texts and will help us faculty move students into the digital age of education.

Thank you in advance for your time and interest,


I went to the website and looked at the text briefly, and I wouldn’t ask a student to pay $40 for it. It’s just not that good, and for a few bucks more, a student can get an ebook edition of a name-brand textbook.

But more to the point, is it just me, or does that “test marketing program” sound like a pretext for a kickback? Awfully close to the consulting fees and conference junkets that doctors and pharmaceutical companies are always getting in trouble for.

(Of course, I’m also suspicious of the numbers. At $40 a pop, you’d need to sell 200 ebooks just to cover the $8,000 kickback stipend.)

Evidence-based policy

I’m all for basing social policy on good social science evidence. But as Dean Dad writes:

We have anecdotal evidence that suggests that students who actually take math for all four years of high school do better in math here than those who don’t. We also have anecdotal evidence that bears crap in the woods. Why the hell do the high schools only require two years of math?

I say we can bypass the regression analysis on this one.

Do review sheets help?

A lot of what I do as a college instructor draws upon the accumulated wisdom and practice of my profession, plus my personal experience. I accumulate ideas and strategies from mentors and colleagues, I read about pedagogy, I try to get a feel for what works and what doesn’t in my classes, and I ask my students what is working for them. That’s what I suspect most of us do, and it probably works pretty well.

But as stats guru and blogger Andrew Gelman pointed out not too long ago, we don’t often formally test which of our practices work. Hopefully the accumulated wisdom is valid — but if you’re a social scientist, your training might make you want something stronger than that. In that spirit, recently I ran a few numbers on a pedagogical practice that I’ve always wondered about. Do review sheets help students prepare for tests?


When I first started teaching undergrad courses, I did not make review sheets for my students. I didn’t think they were particularly useful. I decided that I would rather focus my time and energy on doing things for my students that I believed would actually help them learn.

Why didn’t I think a review sheet would be useful? There are 2 ways to make a review sheet for an exam. Method #1 involves listing the important topics, terms, concepts, etc. that students should study. The review sheet isn’t something you study on its own — it’s like a guide or checklist that tells you what to study. That seemed questionable to me. It’s essentially an outline of the lectures and textbook — pull out the headings, stick in the boldface terms, and voila! Review sheet. If anything, I thought, students are better off doing that themselves. (Many resources on study skills tell students to scan and outline before they start reading.) In fact, the first time I taught my big Intro course, I put the students into groups and had them make their own review sheets. Students were not enthusiastic about that.

Method #2 involves making a document that actually contains studyable information on its own. That makes sense in a course where there are a few critical nuggets of knowledge that everybody should know — like maybe some key formulas in a math class that students need to memorize. But that doesn’t really apply to most of the courses I teach, where students need to broadly understand the lectures and readings, make connections, apply concepts, etc. (As a result, this analysis doesn’t really apply to courses that use that kind of approach.)

So in my early days of teaching, I gave out no review sheets. But boy, did I get protests. My students really, really wanted a review sheet. So a couple years ago I finally started making list-of-topics review sheets and passing them out before exams. I got a lot of positive feedback — students told me that they really helped.

Generally speaking, I trust students to tell me what works for them. But in this case, I’ve held on to some nagging doubts. So recently I decided to collect a little data. It’s not a randomized experiment, but even some correlational data might be informative.


In Blackboard, the course website management system we use at my school, you can turn on tracking for items that you post. Students have to be logged in to the Blackboard system to access the course website, and if you turn on tracking, it’ll tell you when (if ever) each student clicked on a particular item. So for my latest midterm, the second one of the term, I decided to turn on tracking for the review sheet so that I could find out who downloaded it. Then I linked that data to the test scores.
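The linking step is just a join on student ID. Here’s a minimal sketch of the idea in Python, with hypothetical field names and made-up records, since Blackboard’s actual export format isn’t shown here:

```python
# Hypothetical data shapes -- Blackboard's real export format isn't shown here.
# downloads: student ID -> date of first click on the review sheet (if any)
downloads = {"s01": "2009-03-02", "s02": "2009-03-07"}  # s03 never clicked

# scores: student ID -> exam score
scores = {"s01": 78, "s02": 85, "s03": 91}

# Join on student ID: every student gets a score, a downloaded flag,
# and the date of their first click (None if they never clicked).
linked = {
    sid: {"score": sc, "downloaded": sid in downloads,
          "first_click": downloads.get(sid)}
    for sid, sc in scores.items()
}

print(linked["s03"])  # {'score': 91, 'downloaded': False, 'first_click': None}
```

Because the join is keyed on the roster (the `scores` dict), students who never clicked still appear, with `downloaded` set to `False` — exactly the group the analysis needs.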

I posted the review sheet on a Monday, 1 week before the exam. The major distinction I drew was between people who downloaded the sheet and those who never did. But I also tracked when students downloaded it. There were optional review sessions on Thursday and Friday. Students were told that if they came to the review session, they should come prepared. (It was a Jeopardy-style quiz.) So I divided students into several subgroups: those who first downloaded the sheet early in the week (before the review sessions), those who downloaded it on Thursday or Friday, and those who waited until the weekend before they downloaded it. I have no record of who actually attended the review sessions.

A quick caveat: It is possible that a few students could’ve gotten the review sheet some other way, like by having a friend in the class print it for them. But it’s probably reasonable to assume that wasn’t widespread. More plausible is that some people might have downloaded the review sheet but never really used it, which I have no way of knowing about.


Okay, so what did I find? First, out of N=327 students, 225 downloaded the review sheet at some point. Most of them (173) waited until the last minute and didn’t download it until the weekend before the exam. 17 downloaded it Thursday-Friday, and 35 downloaded it early in the week. So apparently most students thought the review sheet might help.

Did students who downloaded the review sheet do any better? Nope. Zip, zilch, nada. The correlation between getting the review sheet and exam scores was virtually nil, r = -.04, p = .42. Here’s a plot, further broken down into the subgroups:

[Figure: Review Sheet 1]
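For the curious: the correlation between a yes/no variable (downloaded or not) and a continuous one (exam score) is just the ordinary Pearson correlation with group membership coded 0/1, known as the point-biserial correlation. A sketch with made-up numbers, since the raw data aren’t published here:

```python
import statistics

def pearson_r(x, y):
    """Ordinary Pearson correlation; with x coded 0/1 this is
    the point-biserial correlation."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up illustration: 1 = downloaded the review sheet, 0 = didn't.
downloaded = [1, 1, 1, 0, 0, 1, 0, 1]
exam_score = [78, 85, 70, 91, 74, 82, 88, 76]
print(round(pearson_r(downloaded, exam_score), 3))  # -0.438 for these numbers
```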

This correlational analysis has potential confounds. Students were not randomly assigned — they decided for themselves whether to download the review sheet. So those who downloaded it might have been systematically different from those who did not; and if they differed in some way that would affect their performance on the second midterm, that could’ve confounded the results. In particular, perhaps the students who were already doing well in the class didn’t bother to download the review sheet, but the students who were doing more poorly downloaded it, and the review sheet helped them close the gap. If that happened, you’d observe a zero correlation. (Psychometricians call this a suppressor effect.)

So to address that possibility, I ran a regression in which I controlled for scores on the first midterm. The simple correlation asks: did students who downloaded the review sheet do better than students who didn’t? The regression asks: did students who downloaded the review sheet do better than students who performed just as well on the first midterm but didn’t download the sheet? If there was a suppressor effect, controlling for prior performance should reveal the effect of the review sheet.
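Concretely, “controlling for the first midterm” amounts to regressing midterm 2 on midterm 1, taking each student’s residual, and asking whether the residuals differ between downloaders and non-downloaders. A bare-bones sketch with made-up numbers (the original analysis was presumably run in a standard stats package):

```python
import statistics

def residuals(x, y):
    """Residuals from the simple least-squares regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return [b - (intercept + slope * a) for a, b in zip(x, y)]

# Made-up scores: midterm 1, midterm 2, and a download indicator.
mt1 = [70, 88, 75, 92, 66, 81, 79, 85]
mt2 = [74, 90, 72, 89, 70, 84, 77, 88]
got_sheet = [1, 0, 1, 1, 0, 1, 0, 0]

res = residuals(mt1, mt2)
with_sheet = statistics.fmean(r for r, g in zip(res, got_sheet) if g)
without = statistics.fmean(r for r, g in zip(res, got_sheet) if not g)
# A difference near zero means the sheet adds nothing beyond prior performance.
print(round(with_sheet - without, 2))
```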

But that isn’t what happened. The two midterms were pretty strongly correlated, r = .63. But controlling for prior performance made no difference — the review sheet still had no effect. The standardized beta was .00, p = .90. Here’s a plot to illustrate the regression: this time, the y-axis is the residual (the difference between somebody’s actual score and the score we would have expected them to get based on the first midterm):

[Figure: Review Sheet 2]

Limitations

This was not a highly controlled study. As I mentioned earlier, I have no way of knowing whether students who downloaded the review sheet actually used it. I also don’t know who used a review sheet for the first midterm, the one that I controlled for. (I didn’t think to turn on tracking at the start of the term.) And there could be other factors I didn’t account for.

A better way to do this would be to run a true experiment. If I were going to do this right, I’d go into a class where the instructor isn’t planning to give out review sheets, tell students that if they enroll in the experiment they’ll be randomly assigned different materials to help them prepare for the test, and then give a random half of them a review sheet and tell them to use it. For both ethical and practical reasons, you’d probably want to tell everybody in advance that you’ll adjust scores so that if there is an effect, students who didn’t get the sheet (either because they were in the control group or because they chose not to participate) won’t be at a disadvantage. You’d also have to be careful about what you tell them about the experiment, balancing informed consent against creating demand characteristics. But it could probably be done.


In spite of these issues, I think this data is strongly suggestive. The most obvious confounding factor was prior performance, which I was able to control for. If some of the students who downloaded the review sheet didn’t use it, that would attenuate the difference, but it shouldn’t make it go away entirely. To me, the most plausible explanation left standing is that review sheets don’t make a difference.

If that’s true, why do students ask for review sheets and why do they think that they help? As a student, you only have a limited capacity to gauge what really makes a difference for you — because on any given test, you will never know how well you would have done if you had studied differently. (By “limited capacity,” I don’t mean that students are dumb — I mean that there’s a fundamental barrier.) So a lot of what students do is rely on feelings. Do I feel comfortable with the material? Do I feel like I know it? Do I feel ready for the exam? And I suspect that review sheets offer students an illusory feeling of control and mastery. “Okay, I’ve got this thing that’s gonna help me. I feel better already.” So students become convinced that they make a difference, and then they insist on them.

I also suspect, by the way, that lots of other things work that way. To date, I have steadfastly refused to give out my lecture slides before the lecture. Taking notes in your own words (not rote) requires you to be intellectually engaged with the material. Following along on a printout might feel more relaxed, but I doubt it’s better for learning. Maybe I’ll test that one next time…

Students, fellow teachers, and anybody else: I’d welcome your thoughts and feedback, both pro and con, in the comments section. Thanks!

Thinking hard

I’ve been enjoying William Cleveland’s The Elements of Graphing Data, a book I wish I’d discovered years ago. The following sentence jumped out at me:

No complete prescription can be designed to allow us to proceed mechanically and to relieve us of thinking hard. (p. 59)

The context was — well, it doesn’t matter what the context was. It’s a great encapsulation of what statistical teaching, mentoring, and consulting should be (teaching how to think hard) and cannot be (mechanical prescriptions).

Teaching is a social interaction

Howard Gardner suggests that the next big leap for teaching will be “personalized education,” in which people will learn from computers that adapt to their individual learning style:

Well-programmed computers—whether in the form of personal computers or hand-held devices—are becoming the vehicles of choice. They will offer many ways to master materials. Students (or their teachers, parents, or coaches) will choose the optimal ways of presenting the materials. Appropriate tools for assessment will be implemented. And best of all, computers are infinitely patient and flexible. If a promising approach does not work the first time, it can be repeated, and if it continues to fail, other options will be readily available.

My response to this is a big fat humbug. Gardner has put forward some interesting ideas about multiple intelligences and different learning styles. But the notion that computers will supplant human teachers strikes me as overreaching.

Teaching is, at its core, a social interaction between teacher and student. That is why MIT isn’t putting itself out of business by putting gobs of course materials online. Teachers do not create new information. (Or at least — if they’re at a university and also do research — not in their role as teachers.) And frankly, they don’t often package it into some novel format (“here is a bodily-kinesthetic presentation of Bayes’ Theorem”). What teachers do is convey information through a social interaction with their students. Perhaps some day we’ll know enough about how to turn computers into compelling social agents that can reproduce that experience. But until then, I’m not worried about technology supplanting human teachers.