Everybody knows that grad school admission interviews don’t tell us anything useful, right? Right?

From time to time I have heard people in my field challenging the usefulness of interviews in grad student selection. It is usually delivered with the weary tone of the evidence-based curmudgeon. (I should note that as an admirer of Paul Meehl and a grand-advisee of Lew Goldberg, who once wrote an article called “Human Mind Versus Regression Equation” in which the regression equation wins, I am often disposed toward such curmudgeonry myself.)

The argument usually goes something like this: “All the evidence from personnel selection studies says that interviews don’t predict anything. We are wasting people’s time and money by interviewing grad students, and we are possibly making our decisions worse by substituting bad information for good.”

I have been hearing more or less the same thing for years, starting when I was in grad school myself. In fact, I have heard it often enough that, not being familiar with the literature, I accepted what people were saying at face value. But I finally got curious about what the literature actually says, so I looked it up.

And given what I’d come to believe over the years, I was a little surprised at what I found.

A little Google Scholaring for terms like “employment interviews” and “incremental validity” led me to a bunch of meta-analyses that concluded that in fact interviews can and do provide useful information above and beyond other valid sources of information (like cognitive ability tests, work sample tests, conscientiousness, etc.). One of the most heavily cited is a 1998 Psych Bulletin paper by Schmidt and Hunter (link is a pdf; it’s also discussed in this blog post). Another was this paper by Cortina et al., which makes finer distinctions among different kinds of interviews. The meta-analyses generally seem to agree that (a) interviews correlate with job performance assessments and other criterion measures, (b) interviews aren’t as strong predictors as cognitive ability, (c) but they do provide incremental (non-overlapping) information, and (d) in those meta-analyses that make distinctions between different kinds of interviews, structured interviews are better than unstructured interviews.

If you look at point “b” above and think that maybe interviews add too little variance to be worth the trouble, my response is: live by the coefficients, die by the coefficients. You’d also have to conclude that we shouldn’t be asking applicants to write about their background or interests in a personal statement, and we shouldn’t be obtaining letters of recommendation. According to Schmidt and Hunter (Table 1), biography, interests, and references all have weaker predictive power than structured interviews. (You might want to justify those things over interviews on a cost-benefit basis, though I’d suggest that they aren’t necessarily cheap either. A personal statement plus 3 reference letters adds up to a lot of person-hours of labor.)
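For readers who haven’t run into the term, “incremental validity” in these meta-analyses is usually quantified as the gain in explained variance when a predictor is added to a model that already contains the others. Here is a minimal simulation sketch of that logic in Python; the effect sizes are invented for illustration and are not estimates from Schmidt and Hunter or any other study.

```python
# Hypothetical sketch of incremental validity via hierarchical regression.
# All data are simulated; the coefficients below are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Simulated standardized applicant measures.
ability = rng.normal(size=n)                     # cognitive ability test
interview = 0.4 * ability + rng.normal(size=n)   # interview partly overlaps ability
performance = 0.5 * ability + 0.2 * interview + rng.normal(size=n)

X_ability = ability.reshape(-1, 1)
X_both = np.column_stack([ability, interview])

# R^2 with ability alone, then with the interview added.
r2_ability = LinearRegression().fit(X_ability, performance).score(X_ability, performance)
r2_both = LinearRegression().fit(X_both, performance).score(X_both, performance)

# Incremental validity: variance in the criterion explained by the
# interview over and above cognitive ability alone.
print(f"R^2, ability only:        {r2_ability:.3f}")
print(f"R^2, ability + interview: {r2_both:.3f}")
print(f"Incremental R^2:          {r2_both - r2_ability:.3f}")
```

In this toy setup the interview correlates with ability (so much of its information is redundant), but it still adds a nonzero chunk of unique variance, which is the pattern the meta-analyses report.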

A bigger problem is that if you are going to take an evidence-based approach, your evidence needs to be relevant. Graduate training shares some features with conventional employment, but they are certainly not the same. So I think it is fair to question how well personnel studies can generalize to doctoral admissions. For example, one justification for interviews that I’ve commonly heard is that Ph.D. programs require a lot of close mentoring and productive collaboration. Interviews might help the prospective advisor and advisee evaluate the potential for rapport and shared interests and goals. Even if an applicant is generally well qualified to earn a Ph.D., they might not be a good fit for a particular advisor/lab/program.

That, of course, is a testable question. So if you are an evidence-based curmudgeon, you should probably want some relevant data. I was not able to find any studies that specifically addressed the importance of rapport and interest-matching as predictors of later performance in a doctoral program. (Indeed, validity studies of graduate admissions are few and far between, and the ones I could find were mostly for medical school and MBA programs, which are very different from research-oriented Ph.D. programs.) It would be worth doing such studies, but not easy.

Anyway, if I’m misunderstanding the literature or missing important studies, I hope someone will tell me in the comments. (Personnel selection is not my wheelhouse, but since this is a blog I’m happy to plow forward anyway.) However, based on what I’ve been able to find in the literature, I’m certainly not ready to conclude that admissions interviews are useless.

Just don’t ask me how

There has been lots of blog activity over a NY Times op-ed by Mark Taylor, professor of religion at Columbia. Right now it’s the most-emailed story at the Times. In it, Taylor proposes abolishing the modern university. From his mixed bag of arguments:

  • graduate education prepares students for jobs that don’t exist
  • academic scholarship is too specialized and divorced from real-world problems
  • faculty create clones of themselves instead of true scholars
  • grad school exploits people to provide cheap labor for undergrad education
  • traditional disciplines need to be replaced with interdisciplinary thematic centers
  • tenure protects unproductive people and inhibits change
  • etc.

I wish I could say that any of this was new, but it is the same stuff I’ve been hearing about higher education since I was in college, and I know that pretty much all of it has been around a lot longer than that. Some of it has some traction, some of it doesn’t. Taylor doesn’t come up with any new or interesting solutions. (He proposes to train grad students for non-academic careers, but he doesn’t say how. He proposes abolishing tenure, but makes no attempt to weigh the benefits of tenure, such as the freedom to define hard problems and take risks to solve them. Etc.)

Plenty of bloggers are posting takedowns. Among the good ones…

Chris Kelty on Savage Minds notes that many of the practices and structures that Taylor attacks (like departments and tenure) protect what is valuable about universities. Abolishing them will just make things worse:

Administrators across the country love it when stooges like Taylor say this kind of shit, because it gives them the right and high horse upon which to justify the destruction of academic job security, autonomous decision making by faculty and the definition of what counts as a timely or important problem by the people who actually have to do the work. And I suspect I hardly need to tell anyone that it isn’t places like UCLA or Columbia that will suffer even if his suggestions are taken seriously, but those underfunded state schools looking for any excuse to expand the number of adjuncts, diminish the autonomy of faculty, exploit graduate students even further (by claiming that they need to “expand their skills”), and so on.

Scott Sommers says that the problem isn’t that grad students are too specialized to have marketable skills — it’s that most of the jobs where they can apply their skills are less interesting than academia:

All the “…limited knowledge that all too often is irrelevant for genuinely important problems” decried by Dr. Taylor is really made up of highly valuable skills…

The problem isn’t the usefulness of these techniques, nor even the employability of these skills outside the university. The problem is that no one trained in these skills really wants to apply them to anything but academic problems. I have personal experience with this. Before teaching English, I worked for a marketing research firm in Canada. While all this was long ago, I retain one especially vivid memory. My supervisor, who holds a PhD in Political Science from the University of Toronto, and I were hunched over a table examining cross tabs of a survey of attitudes toward Canadian hi-tech companies. I remember her commenting on the wide fluctuation in perceptions of excellence we had obtained across the spectrum of companies surveyed. Her response to this? “Isn’t this interesting!” No, it isn’t and it wasn’t then, even though it was really one of the more interesting problems our firm worked on. And I suspect even my boss thought so, since she now works in academia.

Matt Welsh points out that Taylor’s critique doesn’t apply nearly as well to the sciences:

What he really means is that in the areas of “religion, politics, history, economics, anthropology, sociology, literature, art, religion and philosophy” (the author’s all-encompassing list of the realms of human thought that apparently really matter) it is damned hard to get a decent job after graduate school, and I agree. But this has little to do with the situation in the sciences and engineering, where graduate students go on to a wide range of careers in industry, government, military, and, yes, academia.