Is it still a bad idea for psychology majors to rent their intro textbook?

Inside Higher Ed reports that the number of students who rent textbooks is increasing. Interestingly, e-books have not caught on — most students are still using printed textbooks (though iPads might change that).

When I teach intro, I have always suggested to my students that if they are going to major in psychology, it is a good idea to purchase and keep their intro textbook. My argument has been that it will be a good reference for their upper-division classes, which might assume that they already know certain concepts. For example, when I teach an upper-division class in motivation and emotion, I assume that my students understand classical and operant conditioning (and I tell them in the syllabus that they should go back to their intro textbook and review the relevant sections).

A downside of this advice is that textbooks are very expensive. Renting a book, or selling one on the used market after the term ends, is a way for students to reduce costs.

Anyway, this got me wondering: is it still helpful or necessary for students to keep their intro textbooks? Is there enough good info on the internet now that they could just google whatever topics they need to review? A few years ago I looked around on the web for a well-written, introductory-level account of classical conditioning and wasn’t impressed with what I found. I still don’t think I’d assign the current Wikipedia entry on classical conditioning as a review. But with the APS Wikipedia project, for example, maybe things will get better soon.

I remember finding my intro textbook especially helpful when I studied for the psychology GRE, but not many undergrads will go on to do that. Next time I teach an upper-division class I’ll probably ask my students how much use they’ve gotten out of their intro text afterward.

Rethinking intro to psych

Inside Higher Ed has a really interesting article, Rethinking Science Education, about how some universities are trying to break the mold of the traditional intro-to-a-science course. From the article:

Too many college students are introduced to science through survey courses that consist of facts “often taught as a laundry list and from a historical perspective without much effort to explain their relevance to modern problems.” Only science students with “the persistence of Sisyphus and the patience of Job” will reach the point where they can engage in the kind of science that excited them in the first place, she said.

This is exactly how Intro to Psych is taught pretty much everywhere — as a laundry list of topics and findings, usually old ones. The scientific method is presented didactically as another topic in the list (usually the first one), rather than being woven into the daily experience of the class.

It’s a problem that’s easy to point out, but hard to solve. You almost couldn’t do it as a single instructor working within a traditional curriculum. Our majors take a 4-course sequence: 2 terms of intro, then statistics, then research methods. You’d essentially need to flip that around — start with a course called “The Process of Scientific Discovery in Psychology” and have students start collecting and analyzing data before they’ve even learned most of the traditional Intro topics. Such an approach is described in the article:

One approach to breaking out of this pattern, she said, is to create seminars in which first-year students dive right into science — without spending years memorizing facts. She described a seminar — “The Role of Asymmetry in Development” — that she led for Princeton freshmen in her pre-presidential days.

She started the seminar by asking students “one of the most fundamental questions in developmental biology: how can you create asymmetry in a fertilized egg or a stem cell so that after a single cell division you have two daughter cells that are different from one another?” Students had to discuss their ideas without consulting texts or other sources. Tilghman said that students can in fact engage in such discussions and that in the process, they learn that they can “invent hypotheses themselves.”

Would this work in psychology? I honestly don’t know. One of the big challenges in learning psychology — which generally isn’t an issue for biology or physics or chemistry — is the curse of prior knowledge. Students come to the class with an entire lifetime’s worth of naive theories about human behavior. Intro students wouldn’t invent hypotheses out of nowhere — they’d almost certainly recapitulate cultural wisdom, introspective projections, stereotypes, etc. Maybe that would be a problem. Or maybe it would be a tremendous benefit — what better way to start off learning psychology than to have some of your preconceptions shattered by data that you’ve collected yourself?

Here’s eight grand to adopt our textbook

I got the following email this morning. Note the part about the “test marketing” stipends:

***

Dear Introductory Psychology Professor:

[Redacted] Press was created as a faculty venture six years ago focusing solely on interactive low cost digital text packages with free printed texts. This concept has been widely accepted by faculty and students alike. The rising price of textbooks is well known to college faculty, students, and even government agencies.  Our digital textbooks offer a low cost alternative to traditional expensive textbooks.
We would like to introduce you to our Introductory Psychology low cost interactive package including:

- A $40 digital interactive text with embedded videos and audio and words with internet links — a better way for today’s students
- A free printed text called a student text supplement
- Access to a password protected website with interactive updates and materials
- A test marketing program with stipends up to $8,000 for individual professors and up to $15,000 or more for departments
- An online test center for each chapter of the interactive text, plus instructor’s manual
- Test bank questions to upload to any online platform such as Blackboard
- Technical and consulting support — 24/7
We invite you to take a narrated tour of [Redacted] Press before you review the interactive Introductory Psychology text. It is a brief tour of [Redacted] Press and interactive texts and will enable you to better understand the benefits of our program within minutes. You start the tour by going to: [URL redacted] (you can cut and paste this URL directly into your browser). This tour will demonstrate the interactive elements of our texts and give you an opportunity to review the [Redacted] interactive Introductory Psychology text at your leisure.

After you have taken the tour, if you email me your mailing address and the number of students in your upcoming classes, we will send you the digital text and brochure on the Introductory Psychology package and tailor a test marketing stipend program for you and even for your department.

We are confident you will see the numerous advantages of moving towards digital, interactive texts and will help us faculty move students into the digital age of education.

Thank you in advance for your time and interest,

***

I went to the website and looked at the text briefly, and I wouldn’t ask a student to pay $40 for it. It’s just not that good, and for a few bucks more, a student can get an ebook edition of a name-brand textbook.

But more to the point, is it just me, or does that “test marketing program” sound like a pretext for a kickback? Awfully close to the consulting fees and conference junkets that doctors and pharmaceutical companies are always getting in trouble for.

(Of course, I’m also suspicious of the numbers. At $40 a pop, you’d need to sell 200 ebooks just to cover the $8000 kickback stipend.)

Do review sheets help?

A lot of what I do as a college instructor draws on the accumulated wisdom and practice of my profession, plus my personal experience. I accumulate ideas and strategies from mentors and colleagues, I read about pedagogy, I try to get a feel for what works and what doesn’t in my classes, and I ask my students what is working for them. That’s what I suspect most of us do, and it probably works pretty well.

But as stats guru and blogger Andrew Gelman pointed out not too long ago, we don’t often formally test which of our practices work. Hopefully the accumulated wisdom is valid — but if you’re a social scientist, your training might make you want something stronger than that. In that spirit, recently I ran a few numbers on a pedagogical practice that I’ve always wondered about. Do review sheets help students prepare for tests?

Background

When I first started teaching undergrad courses, I did not make review sheets for my students. I didn’t think they were particularly useful. I decided that I would rather focus my time and energy on doing things for my students that I believed would actually help them learn.

Why didn’t I think a review sheet would be useful? There are 2 ways to make a review sheet for an exam. Method #1 involves listing the important topics, terms, concepts, etc. that students should study. The review sheet isn’t something you study on its own — it’s like a guide or checklist that tells you what to study. That seemed questionable to me. It’s essentially an outline of the lectures and textbook — pull out the headings, stick in the boldface terms, and voila! Review sheet. If anything, I thought, students are better off doing that themselves. (Many resources on study skills tell students to scan and outline before they start reading.) In fact, the first time I taught my big Intro course, I put the students into groups and had them make their own review sheets. Students were not enthusiastic about that.

Method #2 involves making a document that actually contains studyable information on its own. That makes sense in a course where there are a few critical nuggets of knowledge that everybody should know — like maybe some key formulas in a math class that students need to memorize. But that doesn’t really apply to most of the courses I teach, where students need to broadly understand the lectures and readings, make connections, apply concepts, etc. (As a result, this analysis doesn’t really apply to courses that use that kind of approach.)

So in my early days of teaching, I gave out no review sheets. But boy, did I get protests. My students really, really wanted a review sheet. So a couple years ago I finally started making list-of-topics review sheets and passing them out before exams. I got a lot of positive feedback — students told me that they really helped.

Generally speaking, I trust students to tell me what works for them. But in this case, I’ve held on to some nagging doubts. So recently I decided to collect a little data. It’s not a randomized experiment, but even some correlational data might be informative.

Method

In Blackboard, the course website management system we use at my school, you can turn on tracking for items that you post. Students have to be logged in to the Blackboard system to access the course website, and if you turn on tracking, it’ll tell you when (if ever) each student clicked on a particular item. So for my latest midterm, the second one of the term, I decided to turn on tracking for the review sheet so that I could find out who downloaded it. Then I linked that data to the test scores.

I posted the review sheet on a Monday, 1 week before the exam. The major distinction I drew was between people who downloaded the sheet and those who never did. But I also tracked when students downloaded it. There were optional review sessions on Thursday and Friday. Students were told that if they came to the review session, they should come prepared. (It was a Jeopardy-style quiz.) So I divided students into several subgroups: those who first downloaded the sheet early in the week (before the review sessions), those who downloaded it on Thursday or Friday, and those who waited until the weekend before they downloaded it. I have no record of who actually attended the review sessions.
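
(For the mechanically inclined: here’s roughly how the linking step could go, assuming the tracking data can be exported to a spreadsheet. This is a minimal Python sketch — the file names, column names, and cutoff dates are made-up placeholders, not Blackboard’s actual export format.)

```python
import pandas as pd

# Hypothetical exports: Blackboard's real report format will differ.
clicks = pd.read_csv("review_sheet_clicks.csv", parse_dates=["first_click"])
grades = pd.read_csv("gradebook.csv")  # columns: student_id, exam1, exam2

# Bucket each student's first download relative to the review sessions.
# The dates below are placeholders standing in for the term calendar.
def timing_group(ts):
    if pd.isna(ts):
        return "never"
    if ts < pd.Timestamp("2010-11-04"):  # before Thursday's session
        return "early"
    if ts < pd.Timestamp("2010-11-06"):  # Thursday or Friday
        return "review days"
    return "weekend"

# Left-merge so students who never clicked stay in the data set.
df = grades.merge(clicks, on="student_id", how="left")
df["group"] = df["first_click"].map(timing_group)
df["downloaded"] = (df["group"] != "never").astype(int)
```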

A quick caveat: It is possible that a few students could’ve gotten the review sheet some other way, like by having a friend in the class print it for them. But it’s probably reasonable to assume that wasn’t widespread. More plausible is that some people might have downloaded the review sheet but never really used it, which I have no way of knowing about.

Results

Okay, so what did I find? First, out of N=327 students, 225 downloaded the review sheet at some point. Most of them (173) waited until the last minute and didn’t download it until the weekend before the exam; 17 downloaded it on Thursday or Friday, and 35 downloaded it early in the week. So apparently most students thought the review sheet might help.

Did students who downloaded the review sheet do any better? Nope. Zip, zilch, nada. The correlation between getting the review sheet and exam scores was virtually nil, r = -.04, p = .42. Here’s a plot, further broken down into the subgroups:

[Figure: exam scores by review-sheet download subgroup]
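
(A technical aside: because the download indicator is dichotomous, this is a point-biserial correlation, which is just Pearson’s r computed on a 0/1 variable. Continuing the made-up-names sketch from the Method section:)

```python
from scipy import stats

# Pearson's r between the 0/1 download indicator and exam 2 scores.
# With a dichotomous predictor this is the point-biserial correlation.
r, p = stats.pearsonr(df["downloaded"], df["exam2"])
print(f"r = {r:.2f}, p = {p:.2f}")
```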

This correlational analysis has potential confounds. Students were not randomly assigned — they decided for themselves whether to download the review sheet. So those who downloaded it might have been systematically different from those who did not; and if they differed in some way that would affect their performance on the second midterm, that could’ve confounded the results. In particular, perhaps the students who were already doing well in the class didn’t bother to download the review sheet, but the students who were doing more poorly downloaded it, and the review sheet helped them close the gap. If that happened, you’d observe a zero correlation. (Psychometricians call this a suppressor effect.)

So to address that possibility, I ran a regression in which I controlled for scores on the first midterm. The simple correlation asks: did students who downloaded the review sheet do better than students who didn’t? The regression asks: did students who downloaded the review sheet do better than students who performed just as well on the first midterm but didn’t download the sheet? If there was a suppressor effect, controlling for prior performance should reveal the effect of the review sheet.
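
(In the same hypothetical sketch, the regression is only a few more lines; statsmodels handles both the model and the residuals used in the plot further down:)

```python
import statsmodels.formula.api as smf

# Does downloading the sheet predict exam 2, holding exam 1 constant?
model = smf.ols("exam2 ~ exam1 + downloaded", data=df).fit()
print(model.params["downloaded"], model.pvalues["downloaded"])

# For the residual plot: regress exam 2 on exam 1 alone, so the
# y-axis shows how much better or worse each student did than
# their first midterm predicted.
df["resid"] = smf.ols("exam2 ~ exam1", data=df).fit().resid
```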

But that isn’t what happened. The two midterms were pretty strongly correlated, r = .63. But controlling for prior performance made no difference: the review sheet still had no effect. The standardized beta was .00, p = .90. Here’s a plot to illustrate the regression: this time, the y-axis is the residual (somebody’s actual score minus the score we would have expected them to get based on the first midterm):

[Figure: exam 2 residuals by review-sheet download subgroup]

Limitations

This was not a highly controlled study. As I mentioned earlier, I have no way of knowing whether students who downloaded the review sheet actually used it. I also don’t know who used a review sheet for the first midterm, the one that I controlled for. (I didn’t think to turn on tracking at the start of the term.) And there could be other factors I didn’t account for.

A better way to do this would be to run a true experiment. If I were going to do this right, I’d go into a class where the instructor isn’t planning to give out review sheets, tell students that if they enroll in the experiment they’ll be randomly assigned to get different materials to help them prepare for the test, and then give a random half of them a review sheet and tell them to use it. For both ethical and practical reasons, you would probably want to tell everybody in advance that you’ll adjust scores so that if there is an effect, students who didn’t get the sheet (either because they were in the control group or because they chose not to participate) won’t be at a disadvantage. You’d have to be careful about what you tell them about the experiment, balancing informed consent against the risk of creating demand characteristics. But it could probably be done.
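
(If you wanted to plan such an experiment, a back-of-the-envelope power calculation is cheap. The effect size below is a placeholder guess, not anything estimated from my data:)

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per arm to detect a d = 0.3 effect with 80% power.
# d = 0.3 is a placeholder; pick whatever effect would matter to you.
n = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(round(n))  # roughly 175 students per group
```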

Conclusions

In spite of these issues, I think this data is strongly suggestive. The most obvious confounding factor was prior performance, which I was able to control for. If some of the students who downloaded the review sheet didn’t use it, that would attenuate the difference, but it shouldn’t make it go away entirely. To me, the most plausible explanation left standing is that review sheets don’t make a difference.

If that’s true, why do students ask for review sheets and why do they think that they help? As a student, you only have a limited capacity to gauge what really makes a difference for you — because on any given test, you will never know how well you would have done if you had studied differently. (By “limited capacity,” I don’t mean that students are dumb — I mean that there’s a fundamental barrier.) So a lot of what students do is rely on feelings. Do I feel comfortable with the material? Do I feel like I know it? Do I feel ready for the exam? And I suspect that review sheets offer students an illusory feeling of control and mastery. “Okay, I’ve got this thing that’s gonna help me. I feel better already.” So students become convinced that they make a difference, and then they insist on them.

I also suspect, by the way, that lots of other things work that way. To date, I have steadfastly refused to give out my lecture slides before the lecture. Taking notes in your own words (rather than transcribing by rote) requires you to be intellectually engaged with the material. Following along on a printout might feel more relaxed, but I doubt it’s better for learning. Maybe I’ll test that one next time…

Students, fellow teachers, and anybody else: I’d welcome your thoughts and feedback, both pro and con, in the comments section. Thanks!