Just don’t ask me how

There has been lots of blog activity over a NY Times op-ed by Mark Taylor, professor of religion at Columbia. Right now it’s the most-emailed story at the Times. In it, Taylor proposes abolishing the modern university. From his mixed bag of arguments:

  • graduate education prepares students for jobs that don’t exist
  • academic scholarship is too specialized and divorced from real-world problems
  • faculty create clones of themselves instead of true scholars
  • grad school exploits people to provide cheap labor for undergrad education
  • traditional disciplines need to be replaced with interdisciplinary thematic centers
  • tenure protects unproductive people and inhibits change
  • etc.

I wish I could say that any of this was new, but this is the same stuff I’ve been hearing about higher education since I was in college, and I know that pretty much all of it has been around a lot longer than that. Some of it has some traction, some of it doesn’t. Taylor doesn’t come up with any new or interesting solutions. (He proposes to train grad students for non-academic careers, but he doesn’t say how. He proposes to abolish tenure, but he makes no attempt to weigh the benefits of tenure, such as the freedom to define hard problems and take risks to solve them. Etc.)

Plenty of bloggers are posting takedowns. Among the good ones…

Chris Kelty on Savage Minds notes that many of the practices and structures that Taylor attacks (like departments and tenure) protect what is valuable about universities. Abolishing them will just make things worse:

Administrators across the country love it when stooges like Taylor say this kind of shit, because it gives them the right and high horse upon which to justify the destruction of academic job security, autonomous decision making by faculty and the definition of what counts as a timely or important problem by the people who actually have to do the work. And I suspect I hardly need to tell anyone that it isn’t places like UCLA or Columbia that will suffer even if his suggestions are taken seriously, but those underfunded state schools looking for any excuse to expand the number of adjuncts, diminish the autonomy of faculty, exploit graduate students even further (by claiming that they need to “expand their skills”), and so on.

Scott Sommers says that the problem isn’t that grad students are too specialized to have marketable skills — it’s that most of the jobs where they can apply their skills are less interesting than academia:

All the “…limited knowledge that all too often is irrelevant for genuinely important problems” decried by Dr. Taylor is really made up of highly valuable skills…

The problem isn’t the usefulness of these techniques, nor even the employability of these skills outside the university. The problem is that no one trained in these skills really wants to apply them to anything but academic problems. I have personal experience with this. Before teaching English, I worked for a marketing research firm in Canada. While all this was long ago, I retain one especially vivid memory. My supervisor, who holds a PhD in Political Science from the University of Toronto, and I were hunched over a table examining cross tabs of a survey of attitudes toward Canadian hi-tech companies. I remember her commenting on the wide fluctuation in perceptions of excellence we had obtained across the spectrum of companies surveyed. Her response to this? “Isn’t this interesting!” No, it isn’t and it wasn’t then, even though it was really one of the more interesting problems our firm worked on. And I suspect even my boss thought so, since she now works in academia.

Matt Welsh points out that Taylor’s critique doesn’t apply nearly as well to the sciences:

What he really means is that in the areas of “religion, politics, history, economics, anthropology, sociology, literature, art, religion and philosophy” (the author’s all-encompassing list of the realms of human thought that apparently really matter) it is damned hard to get a decent job after graduate school, and I agree. But this has little to do with the situation in the sciences and engineering, where graduate students go on to a wide range of careers in industry, government, military, and, yes, academia.

Misrepresenting science to justify torture

I wonder how many other legitimate psychological studies were misrepresented to justify torture.

A British professor whose research on sleep was cited in one of the just-released Bush administration torture memos has expressed outrage that his work was used to justify extreme sleep deprivation, including keeping subjects awake for up to 11 days.

As for whether such stress could be considered “harmful,” Horne was unequivocal. “I thought it was totally inappropriate to cite my book as being evidence that you can do this and there’s not much harm. With additional stress, these people are suffering. It’s obviously traumatic,” he said. “I just find it absurd.”

Further, Horne continued, sleep-deprived subjects become so confused that they’re highly unlikely to offer useful intelligence. “I don’t understand what you’re going to get out of it,” he said. “You can no longer think rationally, you just become more of an automaton … These people will just be spewing nonsense anyway. It’s pointless!”

So even for those who argue that torture is justified when it produces actionable intelligence, sleep deprivation fails on its own terms: it doesn’t produce any.

Your brain has something to do with your mind, we think

Personality decided at birth, say scientists:

Anatomical differences between the brains of 85 people have been measured and linked with the four main categories of personality types as defined by psychiatrists using a clinically recognised system of character evaluation.

“There is no point shouting at a child who is very shy and telling them off, because it does not come naturally to them to put themselves forward. But actually knowing there is a biological basis for this helps educators or parents to use the right approach to help a child to compensate.”

This is a complete non sequitur. If two people behave differently, it necessarily follows that there are biological differences that underpin the behaviors. Because we are all, you know, made of stuff. Biological stuff. And brains change with experience, so a structural difference measured in adulthood could just as easily be a consequence of personality as an inborn cause of it. Demonstrating structural differences between brains says absolutely nothing about whether personality is “decided at birth.”

The kind of writing up with which I will not put

From a smackdown of Strunk & White in the Chronicle:

The Elements of Style does not deserve the enormous esteem in which it is held by American college graduates. Its advice ranges from limp platitudes to inconsistent nonsense. Its enormous influence has not improved American students’ grasp of English grammar; it has significantly degraded it.

Personally, I’m a big fan of Garner’s Modern American Usage, famously and brilliantly praised by David Foster Wallace in a review in Harper’s (“Tense Present”).

On sex and our ever-larger organs

Male chimpanzees exchange meat for sex with females, according to a new study in PLoS ONE.

This study is bound to trigger lots of tittering and jokes about the “oldest profession.” However, what I found most interesting about it was this:

“We looked at chimps when they were not in oestrus, this means they don’t have sexual swellings and aren’t copulating.”

“The males still share with them – they might share meat with a female one day, and only copulate with her a day or two later.”

In other words, these aren’t just instantaneous you-give-me-this, I-give-you-that trades. Rather, these seem to be more like long-term contracts. Evolutionary psychologists like Leda Cosmides have argued that these kinds of long-term social exchanges played an important role in the evolution of human brains, because they require specialized and complex reasoning.

Of course, it would be a huge leap from modern-day chimps to human ancestors… but wouldn’t it be funny if it turned out that proto-prostitution is what made our brains so big?

The perverse incentive structure of IRBs

As a researcher at a university, all of my human subjects research has to go through my university’s IRB. I believe that IRBs have an important role in research. However, in practice I sometimes find dealing with an IRB to be frustrating.

Pretty much all of the research that I do is very low risk. Yet I have to go through a review system that was invented as a response to Nazi medical experiments and other horrific incidents half a century ago. You might think that should make my behavioral research easier to get approved — I could just say, “hey, guess what, I’m not secretly giving people syphilis or anything” and get the thumbs-up. Sadly, though, it doesn’t work like that. Even when I have a study that is eligible for expedited review, there is a heck of a lot of paperwork to fill out, and time to wait, and often pointless revisions to make — all in order to do something as simple as asking people a few questions about what kind of day they had yesterday.

So why are university IRBs so inefficient? There are a number of reasons, but I believe that one of the core problems is that the system is built on a foundation of perverse incentives for the IRB.

The IRB’s task can be thought of as a signal detection problem. Simplifying a little bit, you can think of the protocols that researchers submit as being either worthy or unworthy. For any given protocol, the IRB has to decide to approve or reject. So there are two kinds of correct decisions (approve a worthy protocol or reject an unworthy one) and two kinds of mistaken decisions (reject a worthy protocol or approve an unworthy one). And the big problem is that the IRB’s potential costs associated with the two different kinds of mistakes are severely imbalanced.
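
To make the asymmetry concrete, here is a toy sketch in Python. Every number in it is invented for illustration (no real IRB publishes a cost function); the point is just that when one kind of mistake is vastly more expensive than the other, rejection becomes the rational default.

```python
# Toy model of the IRB's decision as a signal detection problem.
# All cost numbers below are invented for illustration.

# Costs *to the IRB* (not to the researcher) of each (decision, truth) outcome.
COSTS = {
    ("approve", "worthy"): 0,      # correct approval: no cost
    ("reject", "unworthy"): 0,     # correct rejection: no cost
    ("reject", "worthy"): 1,       # false rejection: a phone call, some staff time
    ("approve", "unworthy"): 100,  # false approval: audits, shutdowns, lawsuits
}

def expected_cost(decision, p_worthy):
    """Expected cost to the IRB of a decision, given the probability
    that the protocol is worthy."""
    return (p_worthy * COSTS[(decision, "worthy")]
            + (1 - p_worthy) * COSTS[(decision, "unworthy")])

# Even for a protocol the board is 95% sure is worthy,
# rejection is the cheaper gamble under this cost structure:
p = 0.95
print(expected_cost("approve", p))  # ≈ 5.0
print(expected_cost("reject", p))   # 0.95
```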

If the IRB mistakenly rejects a worthy protocol, what is the worst thing that could happen? The investigator might make a phone call and resubmit the application, taking up some extra staff time, but the IRB will not get into any serious trouble. And the costs of this mistake are chiefly borne by the researcher, not the IRB. Furthermore, within a university, there is no appeals process or oversight authority empowered to act on a rejected protocol.

By contrast, if the IRB mistakenly approves an unworthy protocol, all kinds of bad things could happen. Even if no subjects are harmed, an audit could turn up the mistake and the IRB could get in trouble. And in more serious cases — if subjects do get exposed to inappropriate risks, or actually get harmed — things can get much, much worse. The IRB could get shut down (halting all research at the university), the professional IRB staff could get fired, and the university could get sued by the harmed subjects.

These asymmetric incentives mean that IRBs have a very strong incentive to err on the side of rejecting too much research. So it’s no wonder that the process is so slow and clunky, and even simple low-risk protocols are routinely sent back for revisions. The staff at my IRB are good people who want to help researchers when they can. But the actual review board members are often people with no personal stake in seeing that research gets done efficiently, and some have no formal science training at all (which can lead them to imagine harmful effects of research that have no basis in reality). And for both the paid staff and the board members, even those with the best intentions work within an incentive structure that is completely out of whack.

So a big part of me was outraged (and a tiny, naughty part of me jealous) to learn that in commercial medical settings, the IRB incentives are out of whack too — but in the opposite direction. If you are a researcher at a private, for-profit research company, you get approval for your research by paying a commercial IRB to review it. It doesn’t take a genius to look at this setup and figure out that a commercial IRB that approves lots of research is going to be popular with its customer base. So it was probably just a matter of time before a scandal erupted. And now one has.

In a test of the commercial IRB system, the Government Accountability Office submitted a fake protocol to three different commercial IRBs. The protocol was rigged to be full of unsafe, high-risk elements. And apparently one of the companies, Coast IRB, fell for the sting, deeming the protocol safe and low-risk and giving it approval. Upon further investigation by the GAO, it turned out that Coast had not rejected a single protocol in the last five years, and it made over $9 million last year. Hmmm…

In the aftermath of this incident, it is very likely that attention is again going to get focused where it always gets focused: on the possibility that IRBs might be approving bad, unsafe research. But such a focus may be misguided. The case of Coast IRB shows that even commercial IRBs face very serious costs when they get caught approving bad research. The company has just seen its entire $9-mil-a-year business evaporate while it undergoes an audit. Employees may lose their jobs. Owners may lose profits and see their shares lose value. The entire company could go out of business.

Instead, the problem with both university and commercial IRBs is on the approval side: the system does not present the right level of incentives for approving worthy research. In the university IRB case, the incentive is too low. And in the commercial IRB case, it’s too high. Hypothetically speaking, even if somebody at a Coast IRB kind of place knew the potential costs of getting caught approving bad research, in a rational cost-benefit analysis those potential costs would have been balanced against a multimillion-dollar revenue stream that depended on them approving lots of protocols, good and bad.
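
To see how that balance might look from the inside, here is a back-of-the-envelope sketch. The revenue figure echoes the GAO story above; the probability and penalty are pure assumptions.

```python
# Toy cost-benefit calculation for a hypothetical approval-happy commercial IRB.
# The revenue figure echoes the GAO story; the other numbers are assumptions.

annual_revenue = 9_000_000    # revenue that depends on approving lots of protocols
p_caught       = 0.05         # assumed yearly probability of a sting/audit/scandal
cost_if_caught = 50_000_000   # assumed cost of losing the business, lawsuits, etc.

expected_net = annual_revenue - p_caught * cost_if_caught
print(f"{expected_net:,.0f}")  # 6,500,000: approving everything still "pays"
```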

So what will happen next? If you are a member of Congress and you want to fix commercial IRBs, you could alter the cost-benefit balance on either side. That is, you could either diminish the profit motive associated with approving research, or you could make it even more costly for a company to mistakenly approve bad research. The problem is that any new regulatory policy designed to fix commercial IRBs could very well affect university IRBs as well, since both kinds of IRBs fall under many of the same regulations. And if you raise the costs and punishments associated with approving bad research (or institute even more intrusive regulations and oversight to try to prevent such approvals from happening), you will make the perverse incentives at universities even more perverse.

Personally, I think it’s at least a little bit weird that IRBs — institutions designed to safeguard the interests of research subjects — can be run as for-profit businesses whose very financial existence depends upon those they are supposed to watch. If Congress wants to fix the system in the commercial medical industry, they need to look at the fundamental question of whether that is a sustainable model, and narrowly tailor any changes to apply to commercial IRBs. The answer is most definitely not to create more intrusive oversight or threaten punishments across the board. Let’s hope that is not the direction they choose to go.

The magazine curse?

Paul Krugman writes about Robert Rubin, Alan Greenspan, and Lawrence Summers, who appeared on the cover of Time in 1999:

Two… have since succumbed to the magazine cover curse, the plunge in reputation that so often follows lionization in the media.

Umm, hey Mr. Krugman… think this might just be regression to the mean? Sports Illustrated knows what I’m talking about. So does your fellow Nobel-in-economics laureate Daniel Kahneman.
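
For the skeptical, regression to the mean takes only a few lines to demonstrate. This is a generic toy model, not anything from Krugman’s column: performance is stable skill plus luck, the magazine cover goes to the year’s top performers, and the next year their luck is drawn fresh.

```python
# Minimal simulation of regression to the mean. Performance is modeled as
# stable skill plus independent luck; all numbers are made up.
import random
from statistics import mean

random.seed(1)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]
year1 = [s + random.gauss(0, 1) for s in skill]  # observed performance, year 1
year2 = [s + random.gauss(0, 1) for s in skill]  # same skill, fresh luck, year 2

# Put the top 1% of year-1 performers "on the cover" and check their year 2.
cover = sorted(range(N), key=year1.__getitem__, reverse=True)[: N // 100]
print(mean(year1[i] for i in cover))  # spectacular: far above average
print(mean(year2[i] for i in cover))  # still good, but only about half as far
                                      # above the mean -- the "curse"
```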

Even if you could, why would you?

According to an article appearing in BMC Psychiatry, 4% of British psychiatrists say that if a patient asked to be “cured” of homosexuality, they would try to do so. And 17% of psychiatrists state that they have previously helped a patient to reduce same-sex attraction. (I almost wrote “tried to help,” but that’s not how it’s worded in the abstract. Perhaps they actually think they were successful.)

In light of the checkered history of homosexuality as a psychiatric diagnosis, I suppose you could spin this positively and say that we’ve come a long way — 96% of British psychiatrists know better. Still…

(H/T to Scientific American.)

Mom, Dad: Chillax

Alan Kazdin and Carlo Rotella have a sensible essay on Slate discussing how to change your child’s problematic behaviors. Key principle: it isn’t enough to punish the bad behavior. You have to find an opposite behavior and reward it.

They also discuss some of the frustrations and challenges of trying to eliminate problem behavior — things like extinction bursts and a tendency of stressed parents to unwittingly engage in variable reinforcement, which entrenches rather than eliminates the behavior.

But part of their sensible answer is: do you really want to bother? I was generally familiar with the learning-theory stuff, but a little surprised at how common many of these behaviors are.

Many unwanted behaviors, including some that disturb parents, tend to drop out on their own, especially if you don’t overreact to them and reinforce them with a great deal of excited attention…

Approximately 60 percent of 4- and 5-year-old boys can’t sit still as long as adults want them to, and approximately 50 percent of 4- and 5-year-old boys and girls whine to the extent that their parents consider it a significant problem. Both fidgeting and whining tend to decrease on their own with age, especially if you don’t reinforce these annoying behaviors by showing your child that they’re a surefire way to get your (exasperated) attention. Thirty to 40 percent of 10- and 11-year-old boys and girls lie in a way that their parents identify as a significant problem, but this age seems to be the peak, and the rate of problem lying tends to plummet thereafter and cease to be an issue. By adolescence, more than 50 percent of males and 20 percent to 35 percent of females have engaged in one delinquent behavior—typically theft or vandalism. For most children, it does not turn into a continuing problem.

Kids!

MIT restricts academic freedom?

According to an article at Ars Technica, the faculty at MIT have voted to require that all academic publications be open-access. More specifically, the policy requires that when submitting an article to a journal publisher, authors must grant MIT a license to distribute the work for free, and authors have to provide a copy of the publication to the provost’s office. If you want to publish with a journal that refuses to allow open access, you have to submit a written request and get approval from the provost.

I’m all for open public access. But I am also all for academic freedom. When a university dictates where its faculty can publish, that seems to me to set a dangerous precedent. If a university can say that faculty cannot publish in Journal X because the university doesn’t like the journal’s copyright policy, who’s to say that the next step isn’t “Don’t publish in Journal Y because we don’t like their editorial position on [fill in controversial issue here]”?