Everybody knows that grad school admission interviews don’t tell us anything useful, right? Right?

From time to time I have heard people in my field challenging the usefulness of interviews in grad student selection. It is usually delivered with the weary tone of the evidence-based curmudgeon. (I should note that as an admirer of Paul Meehl and a grand-advisee of Lew Goldberg, who once wrote an article called “Human Mind Versus Regression Equation” in which the regression equation wins, I am often disposed toward such curmudgeonry myself.)

The argument usually goes something like this: “All the evidence from personnel selection studies says that interviews don’t predict anything. We are wasting people’s time and money by interviewing grad students, and we are possibly making our decisions worse by substituting bad information for good.”

I have been hearing more or less the same thing for years, starting when I was in grad school myself. In fact, I have heard it often enough that, not being familiar with the literature, I accepted what people were saying at face value. But I finally got curious about what the literature actually says, so I looked it up.

And given what I’d come to believe over the years, I was a little surprised at what I found.

A little Google Scholaring for terms like “employment interviews” and “incremental validity” led me to a bunch of meta-analyses concluding that interviews can and do provide useful information above and beyond other valid sources of information (like cognitive ability tests, work sample tests, and conscientiousness measures). One of the most heavily cited is a 1998 Psych Bulletin paper by Schmidt and Hunter (link is a pdf; it’s also discussed in this blog post). Another was this paper by Cortina et al., which makes finer distinctions among different kinds of interviews. The meta-analyses generally seem to agree that (a) interviews correlate with job performance assessments and other criterion measures, (b) interviews aren’t as strong predictors as cognitive ability, (c) but they do provide incremental (non-overlapping) information, and (d) in those meta-analyses that make distinctions between different kinds of interviews, structured interviews are better than unstructured interviews.
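Since “incremental validity” is doing a lot of work in point (c), here is a minimal sketch of what it means operationally: add interview scores to a model that already contains cognitive ability, and see whether the criterion is predicted any better. To be clear, everything in this sketch is simulated with placeholder numbers I made up, not Schmidt and Hunter’s estimates; only the mechanics matter.

```python
# Illustrative sketch of incremental validity on simulated (made-up) data.
# None of these coefficients come from the meta-analyses discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Pretend latent structure: interview scores overlap partly with cognitive
# ability, and both relate to a performance criterion.
ability = rng.normal(size=n)
interview = 0.4 * ability + rng.normal(scale=np.sqrt(1 - 0.4**2), size=n)
performance = 0.5 * ability + 0.25 * interview + rng.normal(size=n)

def r_squared(predictors, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

r2_ability = r_squared([ability], performance)
r2_both = r_squared([ability, interview], performance)

print(f"R^2, ability alone:       {r2_ability:.3f}")
print(f"R^2, ability + interview: {r2_both:.3f}")
print(f"Incremental validity (delta R^2): {r2_both - r2_ability:.3f}")
```

Meta-analyses like Schmidt and Hunter’s do this kind of arithmetic on corrected correlations rather than raw data, but the logic is the same: the interview earns its keep only if that delta is meaningfully larger than zero.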

If you look at point (b) above and think that maybe interviews add too little variance to be worth the trouble, my response is: live by the coefficients, die by the coefficients. You’d also have to conclude that we shouldn’t be asking applicants to write about their background or interests in a personal statement, and we shouldn’t be obtaining letters of recommendation. According to Schmidt and Hunter (Table 1), biography, interests, and references all have weaker predictive power than structured interviews. (You might want to justify those things over interviews on a cost-benefit basis, though I’d suggest that they aren’t necessarily cheap either. A personal statement plus three reference letters adds up to a lot of person-hours of labor.)

A bigger problem is that if you are going to take an evidence-based approach, your evidence needs to be relevant. Graduate training shares some features with conventional employment, but they are certainly not the same. So I think it is fair to question how well personnel studies can generalize to doctoral admissions. For example, one justification for interviews that I’ve commonly heard is that Ph.D. programs require a lot of close mentoring and productive collaboration. Interviews might help the prospective advisor and advisee evaluate the potential for rapport and shared interests and goals. Even if an applicant is generally well qualified to earn a Ph.D., they might not be a good fit for a particular advisor/lab/program.

That, of course, is a testable question. So if you are an evidence-based curmudgeon, you should probably want some relevant data. I was not able to find any studies that specifically addressed the importance of rapport and interest-matching as predictors of later performance in a doctoral program. (Indeed, validity studies of graduate admissions are few and far between, and the ones I could find were mostly for medical school and MBA programs, which are very different from research-oriented Ph.D. programs.) Such studies would be worth doing, though they would not be easy.

Anyway, if I’m misunderstanding the literature or missing important studies, I hope someone will tell me in the comments. (Personnel selection is not my wheelhouse, but since this is a blog I’m happy to plow forward anyway.) However, based on what I’ve been able to find in the literature, I’m certainly not ready to conclude that admissions interviews are useless.