Here’s eight grand to adopt our textbook

I got the following email this morning. Note the part about the test marketing stipends:

***

Dear Introductory Psychology Professor:

[Redacted] Press was created as a faculty venture six years ago focusing solely on interactive low cost digital text packages with free printed texts. This concept has been widely accepted by faculty and students alike. The rising price of textbooks is well known to college faculty, students, and even government agencies. Our digital textbooks offer a low cost alternative to traditional expensive textbooks.
We would like to introduce you to our Introductory Psychology low cost interactive package including:

A $40 digital interactive text with embedded videos and audio and words with internet links — a better way for today’s students
A free printed text called a student text supplement
Access to a password protected website with interactive updates and materials
A test marketing program with stipends up to $8,000 for individual professors and up to $15,000 or more for departments
An online test center for each chapter of the interactive text, plus instructor’s manual
Test bank questions to upload to any online platform such as Blackboard
Technical and consulting support — 24/7
We invite you to take a narrated tour of [Redacted] Press before you review the interactive Introductory Psychology text. It is a brief tour of [Redacted] Press and interactive texts and will enable you to better understand the benefits of our program within minutes. You start the tour by going to: [URL redacted] (you can cut and paste this URL directly into your browser). This tour will demonstrate the interactive elements of our texts and give you an opportunity to review the [Redacted] interactive Introductory Psychology text at your leisure.

After you have taken the tour, if you email me your mailing address and the number of students in your upcoming classes, we will send you the digital text and brochure on the Introductory Psychology package and tailor a test marketing stipend program for you and even for your department.

We are confident you will see the numerous advantages of moving towards digital, interactive texts and will help us faculty move students into the digital age of education.

Thank you in advance for your time and interest,

***

I went to the website and looked at the text briefly, and I wouldn’t ask a student to pay $40 for it. It’s just not that good, and for a few bucks more, a student can get an ebook edition of a name-brand textbook.

But more to the point, is it just me, or does that “test marketing program” sound like a pretext for a kickback? Awfully close to the consulting fees and conference junkets that doctors and pharmaceutical companies are always getting in trouble for.

(Of course, I’m also suspicious of the numbers. At $40 a pop, you’d need to sell 200 ebooks just to cover the $8,000 kickback stipend.)

Perhaps a minor credibility problem

Richard Dawkins has a new book coming out, titled The Greatest Show on Earth, in which he tries to win over fence-sitters with a case for evolution.

But is it just me, or did his last book The God Delusion (as in, if you believe in God you’re delusional) maybe alienate some of his potential audience? Or is there a big market segment of creationist atheists that I don’t know about?

A scientist replies to people who say “I knew it all along”

Pick the one that best applies:

1. No you didn’t. The answer sounds plausible and you are a reasonably smart person, so you quickly absorbed it as the correct one. So quickly, in fact, that in hindsight it now feels like you knew it all along. It is hard to have a memory of not knowing something, because way back when you did not know, you did not know that you did not know. So now you think you knew it all along, because you know it now and you don’t have a distinct memory of not knowing.

2. No you didn’t. You have previously wondered about it, or maybe just heard conventional wisdom that sounds like the answer you know now. Now that you know the right answer, the one you have just heard, you can search your memory and discover that you’ve thought or heard something vaguely resembling the answer before. But in fact, if you really thought about it, you could probably dig up a memory or some conventional wisdom that supports a completely different answer. Consider also that you never took a public stand, you never made it real, you never made yourself accountable for the answer you’re now claiming you knew all along. Which means that if the right answer had turned out to be completely different, it would be just as easy to say you knew that one all along instead.

3. No you didn’t. You thought it all along, but you didn’t know it all along. Your beliefs were based on your ideology or your worldview, not on any objective evidence. If you ever encountered somebody who believed differently because they had a different ideology or worldview, then at most the two of you stood there talking past each other, offering zero enlightenment to anybody approaching the issue without prejudice. Those people needed hard evidence, and you only had arguments. You didn’t know, you just thought you knew.

4. No you didn’t. You made a lucky guess. You are mentally engaged with the world, and so like all mentally engaged humans you form lots of guesses and speculations and opinions about lots of things. If you guess enough times about enough things, some of those guesses will eventually turn out to be right. That doesn’t mean you knew it all along.

5. No you didn’t. You knew the superficial version that everybody knew and that, to the scientists, was beside the point. The story you just heard or the press article you just read has omitted the scientifically interesting part. The scientists weren’t interested in the simple descriptive fact, the one that they, you, and everybody else knew all along. They were interested in how it worked or why it was the way it was.

6. Yes you did. Congratulations. You are hereby authorized to say things like, “Still no cure for cancer,” or “My tax dollars went to this?!?” Have at it.

Personality, economics, and human development

Just back from the Association for Research in Personality 2009 conference in Evanston. Lots of interesting stuff.

One of the main themes underlying the conference was integration with economics. There were (nominally) 2 symposia on personality and economics, as well as a keynote from James Heckman.

I say “nominally” because one of the symposia was really just a bunch of psychologists using an economics panel study (the SOEP) to study personality and life satisfaction. Very interesting stuff — the size of the dataset allows them to use some very sophisticated quantitative models (though I had some quibbles with them not including systematic growth functions) — but it didn’t feel to me like it was very far outside of the mainstream personality psychology paradigm.

One of the highlights for me, though, was Heckman’s keynote address.

First, what it wasn’t: when I first heard that a big-shot economist was getting interested in personality, I assumed he wanted to use personality traits to predict economically relevant behaviors, like how people form preferences and deal with uncertainty. It sounded like a good idea, because many economists (and their psychologist cousins in decision-making) have traditionally been strong situationists and thus resistant to thinking that personality matters. And in fact, that’s what one of the talks in the actually-about-economics symposium was about (as well as some emerging work elsewhere in DM) — how personality predicts economic decisions. It’s good and important stuff, if maybe a little unsurprising as a general direction to go.

But Heckman is interested in personality in a different way. In particular, he is interested in personality development and change. His interest grows out of research showing that interventions designed to lift people (esp. young kids) out of poverty (like the Perry Preschool Study, a precursor to Head Start) are working — kids who receive early care and educational help are more likely to go on to graduate from high school, more likely to be employed full-time as adults, less likely to get involved in crime, etc. Where Heckman got involved is in understanding the mechanisms. His work has shown that these programs don’t just boost cognitive skills (that’s economist-speak for IQ) — in fact, gains in tested IQ fade a few years after the intervention. Instead, the effects seem to be mediated by lasting changes in what economists call “noncognitive skills,” which is a slightly hilarious (if you’re a psychologist) term for personality. Enduring changes in things like diligence, cooperation, positive social relationships, etc. are what seem to be driving the effects. In Big Five terms, agreeableness and conscientiousness.

Not only is it refreshing to see an economist getting interested in personality (and as a sidenote, with what I took as a very authentic interest in making it a true 2-way street), but it’s refreshing to see anybody view personality as something that is subject to change via environmental inputs. That’s a drum I’ve been banging for a while, and the field is starting to come back to that as an interest (not only or even substantially because of my drum-banging — people like Brent Roberts, Ravenna Helson, Rebecca Shiner, Dan Mroczek, Avshalom Caspi, etc. have been banging it way longer than I have). But the Q&A showed that there’ll be some resistance. One of the presenters from the life-satisfaction panel — in fact, the one who seemed somewhat resistant to including systematic growth in his models — tried to challenge Heckman on that point, suggesting (wrongly in my view) that traits are too stable to be meaningful targets for intervention.

The same questioner also raised what I thought was a more interesting point, which is: isn’t it a bit creepy to be thinking about public-policy interventions designed to mold personality? Heckman’s answer was a good start, though maybe a little unsatisfying. He basically said that he sees what he’s doing as empowering people to act on their preferences. (Hence the economists’ “skills” rather than “personality.”) If you’re more capable of being cooperative and diligent, you can still choose a life of poverty and crime if you want it, but you are now empowered with the wherewithal to obtain and keep a decent job if that’s what you would really prefer. This harkens back to Wallace’s (1966) abilities conception of personality, which could maybe stand a dusting-off.

NRC unveils methodology

The Chronicle blog reports that the NRC just released the methodology for its long-awaited program quality rankings. The actual rankings are expected sometime this year.

NRC rankings are sort of like US News rankings, except (a) they’re specifically about doctoral programs and thus more heavily research-focused, and (b) faculty and administrators don’t feel quite as obliged to pretend they ignore the NRC rankings the same way they pretend to ignore US News. The method that the NRC came up with is pretty complex — but there’s a decent flowchart overview in the methodology handbook.

The core problem for the NRC is deciding how to combine all the various program characteristics they collect — stuff like numbers of publications, grants, citation rates, etc. — into a single dimension of quality. So they decided to come at it a couple of ways. First, they surveyed faculty about how much various attributes matter. (Not a direct quote, but along the lines of, “How important are grants in determining the overall quality of a program?”) Second, they asked faculty respondents to rank a handful of actual programs, and then they used regressions to generate implicit weights (so e.g. if the programs that everybody says are the best are all big grant-getters, then grants get weighted heavily). The explicit and implicit weights were then combined. Everything was done separately field-by-field.
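To make that regression step concrete, here is a minimal sketch of how implicit weights could be recovered. This is my own illustration, not the NRC’s actual procedure; the attribute data, the “true” weights, and the noise level are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 50 reputationally rated programs, each with standardized
# scores on three attributes (publications, grants, citations). All invented.
attrs = rng.normal(size=(50, 3))
true_weights = np.array([0.5, 0.3, 0.2])  # what raters implicitly value
ratings = attrs @ true_weights + rng.normal(0, 0.1, size=50)

# Regressing reputational ratings on attributes recovers the implicit weights:
# attributes shared by the programs everybody rates highly get weighted
# heavily, whether or not respondents *say* those attributes matter.
implicit_weights, *_ = np.linalg.lstsq(attrs, ratings, rcond=None)
print(implicit_weights)  # approximately [0.5, 0.3, 0.2]
```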

What’s both cool and crazy is that they decided to preserve the uncertainty in the weights. (e.g., some respondents might have said that grants are the most important thing, others said grants are less important.) So they are going to iteratively resample from the distribution of weights, and for each program they will produce and report a range of rankings instead of a single ranking. (Specifically, it looks like they’re going to report the interquartile range.) So for each program, they’ll report something like, “Among all basketweaving departments, yours is ranked somewhere between #7 and #23.”
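To see how resampling could produce a range instead of a single rank, here is a minimal sketch. Again, this is my own illustration with invented programs and an invented weight distribution, not the NRC’s actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical programs, each with standardized scores on three
# attributes (publications, grants, citations). Invented numbers.
programs = {"A": np.array([1.2, 0.9, 1.1]),
            "B": np.array([0.8, 1.4, 0.7]),
            "C": np.array([0.3, 0.2, 0.5])}

# Each iteration draws a weight vector from the (survey-derived) distribution
# of faculty weights; here, noisy draws around a mean weight vector.
mean_w = np.array([0.5, 0.3, 0.2])
ranks = {name: [] for name in programs}

for _ in range(1000):
    w = np.clip(mean_w + rng.normal(0, 0.25, size=3), 0, None)
    scores = {name: attrs @ w for name, attrs in programs.items()}
    for rank, name in enumerate(sorted(scores, key=scores.get, reverse=True), 1):
        ranks[name].append(rank)

# Report each program's interquartile range of ranks across the draws.
for name, rs in ranks.items():
    lo, hi = np.percentile(rs, [25, 75]).astype(int)
    print(f"Program {name}: ranked somewhere between #{lo} and #{hi}")
```

Programs whose scores are tightly bunched end up with wide, overlapping rank ranges, which is exactly the ambiguity the interquartile-range reporting is meant to preserve.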

This immediately makes me think of 2 things:

1. Will they make the primary data available? As a psychologist, I’d think you could have a field day testing for self-serving biases and other interesting patterns in the importance ratings. For example, if an individual doesn’t get a lot of grants but is in a department that rakes it in, would they show a “department-serving bias” by saying that grants are important, or a true self-serving bias by saying that they aren’t? Would these biases vary by field?

2. When the actual numbers come out, will top programs disregard the ranges and just say that they’re number 1? If the upper bound of your range is #1 and your lower bound is better than everybody else’s lower bound, you’ve got a reasonable case to say you’re the best. I have a feeling that programs in that position will do exactly that. And the next-highest program will say, “We’re indistinguishable from Program A, so we’re #1 too.”

A very encouraging reply

Who knew letter-writing could actually make a difference?

In response to the letter I sent yesterday to the CITI program, I got a prompt and very responsive reply from someone involved in running the program. She explained that the module had originally been written just for biomedical researchers. When it was adapted for social/behavioral researchers, the writers simply inserted new cases without really thinking about them. Most importantly, she said that she agreed with me and will revise the module.

Cool!

UPDATE (7/6/2011): Not cool. Despite their promises, they didn’t change a thing.

Milgram is not Tuskegee

My IRB requires me to take a course on human subjects research every couple of years. The course, offered by the Collaborative Institutional Training Initiative (CITI), mostly deals with details of federal research regulations covering human subjects research.

However the first module is titled “History and Ethics” and purports to give an overview and background of why such regulations exist. It contains several historical inaccuracies and distortions, including attempts to equate the Milgram obedience studies with Nazi medical experiments and the Tuskegee syphilis study. I just sent the following letter to the CITI co-founders in the hopes that they will correct their presentation:

* * *

Dear Dr. Braunschweiger and Ms. Hansen:

I just completed the CITI course, which is mandated by my IRB. I am writing to strongly object to the way the research of Stanley Milgram and others was presented in the “History and Ethics” module.

The module begins by stating that modern regulations “were driven by scandals in both biomedical and social/behavioral research.” It goes on to list events whose “aftermath” led to the formation of the modern IRB system. The subsection for biomedical research lists Nazi medical experiments and the PHS Tuskegee Syphilis study. The subsection for social/behavioral research lists what it calls “similar events,” including the Milgram obedience experiments, the Zimbardo/Stanford prison experiment, and several others.

The course makes no attempt to distinguish among the reasons why the various studies are relevant. They are all called “scandals,” described as “similar,” and presented in parallel. This is severely misleading.

Clearly, the Nazi experiments are morally abhorrent on their face. The Tuskegee study was also deeply unethical by modern standards and, most would argue, even by the standards of its day: it involved no informed consent, and after the discovery that penicillin was an effective treatment for syphilis, continuation of the experiment meant withholding a life-saving medical treatment.

But Milgram’s studies of obedience to authority are a much different case. His research predated the establishment of modern IRBs, but even by modern standards it was an ethical experiment, as the societal benefits from knowledge gained are a strong justification for the use of deception. Indeed, just this year a replication of Milgram’s study was published in the American Psychologist, the flagship journal of the American Psychological Association. The researcher, Jerry M. Burger of Santa Clara University, received permission from his IRB to conduct the replication. He made some adjustments to add further safeguards beyond what Milgram did — but these adjustments were only possible by knowing, in hindsight, the outcome of Milgram’s original experiments. (See: http://www.apa.org/journals/releases/amp641-1.pdf)

Thus, Tuskegee and Milgram are both relevant to modern thinking about research ethics, but for completely different reasons. Tuskegee is an example of a deeply flawed study that violated numerous ethical principles. By contrast, the Milgram study was ethically sound, and its relevance to modern researchers lies in the substance of its findings — to wit, that research subjects are more vulnerable than we might think to the influence of scientific and institutional authority. Yet in spite of these clear differences, the CITI course calls them all “scandals” and presents them in parallel alongside other ethically questionable studies, implying that they are all relevant in the same way.

(The parallelism implied with other studies on the list is problematic as well. Take for example the Stanford prison experiment. It would arguably not be approved by a modern IRB. But an important part of its modern relevance is that the researchers discontinued the study when they realized it was harming subjects — anticipating a central tenet of modern research ethics. This is in stark contrast to Tuskegee, where even after an effective treatment for syphilis was discovered, the researchers continued the study and never intervened on behalf of the subjects.)

In conclusion, I strongly urge you to revise your course. It appears that the module is trying to get across the point that biomedical research and social/behavioral research both require ethical standards and regulation — which is certainly true. But the histories, relevant issues, and ramifications are not the same. The attempt to create some sort of parallelism in the presentation (Tuskegee = Milgram? Nazis = Zimbardo?) is inaccurate and misguided, and does a disservice to the legacy of important social/behavioral research.

Sincerely,
Sanjay Srivastava

UPDATE: I got a response a day after I sent the letter. See this post: A very encouraging reply.

UPDATE 7/6/2011: Scratch that. Two years later, they haven’t changed a thing.

Improving the grant system ain’t so easy

Today’s NY Times has an article by Gina Kolata about how the National Cancer Institute plays it safe with grant funding. The main point of the article is that NCI funds too many “safe” studies — studies that promise a high probability of making a modest, incremental discovery. This is done at the expense of more speculative and exploratory studies that take bigger risks but could lead to greater leaps in knowledge.

The article, and by and large the commenters on it, seem to assume that things would be better if the NCI funded more high-risk research. Missing is any analysis of what might be the downsides of adopting such a strategy.

By definition, a high-risk proposal has a lower probability of producing usable results. (That’s what people mean by “risk” in this context.) So for every big breakthrough, you’d be funding a larger number of dead ends. That raises three problems: a substantive policy problem, a practical problem, and a political problem.

1. The substantive problem is knowing what the net effect of changing the system would be. If you change the system so that you invest grant dollars in research that pays off half as often, but when it does the findings are twice as valuable, it’s a wash — you haven’t made things better or worse overall (see the back-of-the-envelope sketch after this list). So it’s a problem of adjusting the system to optimize the risk × reward payoffs. I’m not saying the current situation is optimal; but nobody is presenting any serious analysis of whether an alternative investment strategy would be better.

2. The practical problem is that we would have to find some way to choose among high-risk studies. The problem everybody is pointing to is that in the current system, scientists have to present preliminary studies, stick to incremental variations on well-established paradigms, reassure grant panels that their proposal is going to pay off, etc. Suppose we move away from that… how would you choose amongst all the riskier proposals?

People like to point to historical breakthroughs that never would have been funded by a play-it-safe NCI. But it may be a mistake to believe those studies would have been funded by a take-a-risk NCI, because we have the benefit of hindsight and a great deal of forgetting. Before the research was carried out — i.e., at the time it would have been a grant proposal — every one of those would-be-breakthrough proposals would have looked just as promising as a dozen of their contemporaries that turned out to be dead-ends and are now lost to history. So it’s not at all clear that all of those breakthroughs would have been funded within a system that took bigger risks, because they would have been competing against an even larger pool of equally (un)promising high-risk ideas.

3. The political problem is that even if we could solve #1 and #2, we as a society would have to have the stomach for putting up with a lot of research that produces no meaningful results. The scientific community, politicians, and the general public would have to be willing to constantly remind themselves that scientific dead ends are not a “waste” of research dollars — they are the inevitable consequence of taking risks. There would surely be resistance, especially at the political level.
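To spell out the “wash” in point 1: funding is an expected-value problem, and halving the probability of a payoff while doubling its value leaves the expected return unchanged. A back-of-the-envelope sketch with made-up numbers:

```python
# Expected knowledge gained per funded grant = P(payoff) * value of payoff.
# All numbers are invented for illustration.
safe_p, safe_value = 0.50, 1.0    # pays off often, modest gains
risky_p, risky_value = 0.25, 2.0  # pays off half as often, twice the gain

print(safe_p * safe_value)    # 0.5
print(risky_p * risky_value)  # 0.5: identical expected value, hence a wash
```

Whether the real payoff distribution for high-risk research is more favorable than that is exactly the analysis nobody in the debate has offered.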

So what’s the solution? I’m sure there could be some improvements made within the current system, especially in getting review panels and program officers to reorient toward higher-risk studies. But I think the bigger issue has to do with the overall amount of money available. As the top-rated commenter on Kolata’s article points out, the FY 2010 defense appropriation is more than 6 times what we have spent at NCI since Nixon declared a “war” on cancer 38 years ago. If you make resources scarce, of course you’re going to make people cautious about how they invest those resources. There’s a reason angel investors are invariably multi-millionaires. If you want to inspire the scientific equivalent of angel investing, then the people giving out the money are going to have to feel like they’ve got enough money to take risks with.

On cultural significance and the value of a life

With Michael Jackson and Farrah Fawcett dying on the same day, there are a lot of articles discussing them together. This one at MSNBC is a pretty representative example.

In reading the coverage, I can’t help but think that Farrah Fawcett’s cultural significance is getting pumped up. Not to say that she wasn’t a major cultural icon. But I think there’s something else going on.

As a culture we like to think that the value of a life is unmeasurable, and therefore all lives are equally sacred (economists be damned). Nobody would say that the extent to which society publicly mourns somebody’s death is a measure of their worth as a human being (most of us don’t get TV specials when we die). Media coverage is a function of fame and public impact, and private funerals are about mourning a beloved person, and those are usually completely different spheres. But the fact that Farrah Fawcett and Michael Jackson died on the same day puts us in the uncomfortable position of looking at their deaths side-by-side. Fame and human worth get mixed together in the media coverage of somebody who has just died, and it’s hard to apply one standard and not the other.

In this case, if we step back and look objectively in terms of cultural significance, I don’t think it’s hard to reach the conclusion that Farrah Fawcett and Michael Jackson were not on the same level. That isn’t to diminish the place that Fawcett held in society. But few people in history could measure up to Michael Jackson, who triggered a tectonic shift in how our culture thinks about music, dance, race, and celebrity. Rationally we can acknowledge that inequality without implying that one person’s life was more valuable than the other’s. But I suspect that on a gut level, it feels vaguely ghoulish to do so too loudly. So the end result is that Fawcett may be getting credited for even greater cultural significance than she otherwise would have.

(Related tangent: I can’t be the only one who feels uncomfortable every year during the Oscar tributes to Hollywood folks who’ve passed away, seeing the famous actors get louder applause than the obscure cinematographers. I suspect it’s the same sort of conflict between fame vs. human worth that’s driving that discomfort.)

Evidence-based policy

I’m all for basing social policy on good social science evidence. But as Dean Dad writes:

We have anecdotal evidence that suggests that students who actually take math for all four years of high school do better in math here than those who don’t. We also have anecdotal evidence that bears crap in the woods. Why the hell do the high schools only require two years of math?

I say we can bypass the regression analysis on this one.