Personality, economics, and human development

Just back from the Association for Research in Personality 2009 conference in Evanston. Lots of interesting stuff.

One of the main themes underlying the conference was integration with economics. There were (nominally) 2 symposia on personality and economics, as well as a keynote from James Heckman.

I say “nominally” because one of the symposia was really just a bunch of psychologists using an economics panel study (the SOEP) to study personality and life satisfaction. Very interesting stuff — the size of the dataset allows them to use some very sophisticated quantitative models (though I had some quibbles with them not including systematic growth functions) — but it didn’t feel to me like it was very far outside of the mainstream personality psychology paradigm.

One of the highlights for me, though, was Heckman’s keynote address.

First, what it wasn’t: when I first heard that a big-shot economist was getting interested in personality, I assumed he wanted to use personality traits to predict economically relevant behaviors, like how people form preferences and deal with uncertainty. It sounded like a good idea, because many economists (and their psychologist cousins in decision-making) have traditionally been strong situationists and thus resistant to thinking that personality matters. And in fact, that’s what one of the talks in the actually-about-economics symposium was about (as well as some emerging work elsewhere in DM) — how personality predicts economic decisions. It’s good and important stuff, if maybe a little unsurprising as a general direction to go.

But Heckman is interested in personality in a different way. In particular, he is interested in personality development and change. His interest grows out of research showing that interventions designed to lift people (esp. young kids) out of poverty (like the Perry Preschool Study, a precursor to Head Start) are working — kids who receive early care and educational help are more likely to go on to graduate from high school, more likely to be employed full-time as adults, less likely to get involved in crime, etc. Where Heckman got involved is in understanding the mechanisms. His work has shown that these programs don’t just boost cognitive skills (that’s economist-speak for IQ) — in fact, gains in tested IQ fade a few years after the intervention. Instead, the interventions’ effects seem to be mediated by lasting changes in what economists call “noncognitive skills,” which is a slightly hilarious (if you’re a psychologist) term for personality. Enduring changes in things like diligence, cooperation, positive social relationships, etc. are what seem to be driving the effects. In Big Five terms, agreeableness and conscientiousness.

Not only is it refreshing to see an economist getting interested in personality (and as a sidenote, with what I took as a very authentic interest in making it a true 2-way street), but it’s refreshing to see anybody view personality as something that is subject to change via environmental inputs. That’s a drum I’ve been banging for a while, and the field is starting to come back to that as an interest (not only or even substantially because of my drum-banging — people like Brent Roberts, Ravenna Helson, Rebecca Shiner, Dan Mroczek, Avshalom Caspi, etc. have been banging it way longer than I have). But the Q&A showed that there’ll be some resistance. One of the presenters from the life-satisfaction panel — in fact, the one who seemed somewhat resistant to including systematic growth in his models — tried to challenge Heckman on that point, suggesting (wrongly in my view) that traits are too stable to be meaningful targets for intervention.

The same questioner also raised what I thought was a more interesting point, which is, isn’t it a bit creepy to be thinking about public-policy interventions designed to mold personality? Heckman’s answer was a good start though maybe a little unsatisfying. He basically said that he sees what he’s doing as empowering people to act on their preferences. (Hence the economists’ “skills” rather than “personality.”) If you’re more capable of being cooperative and diligent, you can still choose a life of poverty and crime if you want it, but you are now empowered with the wherewithal to obtain and keep a decent job if that’s what you would really prefer. This harkens back to Wallace’s (1966) abilities conception of personality, which maybe could stand for a dusting-off.

NRC unveils methodology

The Chronicle blog reports that the NRC just released the methodology for its long-awaited program quality rankings. The actual rankings are expected sometime this year.

NRC rankings are sort of like US News rankings, except (a) they’re specifically about doctoral programs and thus more heavily research-focused, and (b) faculty and administrators don’t feel quite as obliged to pretend to ignore the NRC rankings the way they pretend to ignore US News. The method that the NRC came up with is pretty complex — but there’s a decent flowchart overview in the methodology handbook.

The core problem for the NRC is deciding how to combine all the various program characteristics they collect — stuff like numbers of publications, grants, citation rates, etc. — into a single dimension of quality. So they decided to come at it a couple of ways. First, they surveyed faculty about how much various attributes matter. (Not a direct quote, but along the lines of, “How important are grants in determining the overall quality of a program?”) Second, they asked faculty respondents to rank a handful of actual programs, and then they used regressions to generate implicit weights (so e.g. if the programs that everybody says are the best are all big grant-getters, then grants get weighted heavily). The explicit and implicit weights were then combined. Everything was done separately field-by-field.
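To make the implicit-weights step concrete, here is a minimal sketch in Python with NumPy. The programs, attribute scores, and quality ratings below are all invented for the example, and regressing ratings on attributes is my simplified reading of the approach, not the NRC’s actual model or data.

```python
import numpy as np

# Hypothetical data: 6 programs scored on 3 attributes
# (publications, grants, citations), each scaled 0-1.
attributes = np.array([
    [0.9, 0.8, 0.7],
    [0.6, 0.9, 0.5],
    [0.8, 0.4, 0.9],
    [0.3, 0.5, 0.4],
    [0.5, 0.2, 0.6],
    [0.7, 0.6, 0.3],
])

# Hypothetical overall-quality ratings from faculty respondents (0-100).
ratings = np.array([88, 75, 80, 45, 50, 65])

# Least-squares regression of ratings on attributes (with an intercept).
# Attributes that track the programs everyone rates highly get the
# largest coefficients -- those coefficients are the "implicit weights."
X = np.column_stack([np.ones(len(ratings)), attributes])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
implicit_weights = coefs[1:]
print(implicit_weights)
```

In the real exercise these implicit weights would then be combined with the explicit survey-based weights, field by field.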

What’s both cool and crazy is that they decided to preserve the uncertainty in the weights. (e.g., some respondents might have said that grants are the most important thing, others said grants are less important.) So they are going to iteratively resample from the distribution of weights, and for each program they will produce and report a range of rankings instead of a single ranking. (Specifically, it looks like they’re going to report the interquartile range.) So for each program, they’ll report something like, “Among all basketweaving departments, yours is ranked somewhere between #7 and #23.”
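The resampling idea can be sketched the same way. Again, the respondent weights and program attributes below are hypothetical; the point is just to show how resampling from a distribution of weights turns a single ranking into a reported range (here, an interquartile range) for each program.

```python
import numpy as np

rng = np.random.default_rng(0)

programs = ["A", "B", "C", "D", "E"]

# Hypothetical program standings on 3 attributes
# (publications, grants, citations), scaled 0-1.
attributes = np.array([
    [0.9, 0.8, 0.7],
    [0.6, 0.9, 0.5],
    [0.8, 0.4, 0.9],
    [0.3, 0.5, 0.4],
    [0.5, 0.2, 0.6],
])

# Hypothetical importance weights, one row per survey respondent.
respondent_weights = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
    [0.4, 0.4, 0.2],
])

n_iter = 5000
n_resp = len(respondent_weights)
ranks = np.empty((n_iter, len(programs)), dtype=int)
for i in range(n_iter):
    # Resample respondents with replacement, average their weights,
    # score each program, and record its rank (1 = best).
    sample = respondent_weights[rng.integers(n_resp, size=n_resp)]
    scores = attributes @ sample.mean(axis=0)
    order = scores.argsort()[::-1]
    ranks[i, order] = np.arange(1, len(programs) + 1)

for j, name in enumerate(programs):
    lo, hi = np.percentile(ranks[:, j], [25, 75]).astype(int)
    print(f"Program {name}: ranked between #{lo} and #{hi}")
```

A program whose interquartile range is, say, #7 to #23 gets reported exactly that way rather than as a single number.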

This immediately makes me think of 2 things:

1. Will they make the primary data available? As a psychologist, I’d think you could have a field day testing for self-serving biases and other interesting stuff in the importance ratings. For example, if an individual doesn’t get a lot of grants but is in a department that rakes it in, would they show a “department-serving bias” by saying that grants are important, or a true self-serving bias by saying that they aren’t? Would these biases vary by field?

2. When the actual numbers come out, will top programs disregard the ranges and just say that they’re number 1? If the upper bound of your range is #1 and your lower bound is better than everybody else’s lower bound, you’ve got a reasonable case to say you’re the best. I have a feeling that programs in that position will do exactly that. And the next-highest program will say, “We’re indistinguishable from Program A, so we’re #1 too.”

A very encouraging reply

Who knew letter-writing could actually make a difference?

In response to the letter I sent yesterday to the CITI program, I got a prompt and very responsive reply from someone involved in running the program. She explained that the module had originally been written just for biomedical researchers. When it was adapted for social/behavioral researchers, the writers simply inserted new cases without really thinking about them. Most importantly, she said that she agreed with me and will revise the module.

Cool!

UPDATE (7/6/2011): Not cool. Despite their promises, they didn’t change a thing.

Milgram is not Tuskegee

My IRB requires me to take a course on human subjects research every couple of years. The course, offered by the Collaborative Institutional Training Initiative (CITI), mostly deals with details of federal research regulations covering human subjects research.

However, the first module is titled “History and Ethics” and purports to give an overview and background of why such regulations exist. It contains several historical inaccuracies and distortions, including attempts to equate the Milgram obedience studies with Nazi medical experiments and the Tuskegee syphilis study. I just sent the following letter to the CITI co-founders in the hopes that they will correct their presentation:

* * *

Dear Dr. Braunschweiger and Ms. Hansen:

I just completed the CITI course, which is mandated by my IRB. I am writing to strongly object to the way the research of Stanley Milgram and others was presented in the “History and Ethics” module.

The module begins by stating that modern regulations “were driven by scandals in both biomedical and social/behavioral research.” It goes on to list events whose “aftermath” led to the formation of the modern IRB system. The subsection for biomedical research lists Nazi medical experiments and the PHS Tuskegee Syphilis study. The subsection for social/behavioral research lists what it calls “similar events,” including the Milgram obedience experiments, the Zimbardo/Stanford prison experiment, and several others.

The course makes no attempt to distinguish among the reasons why the various studies are relevant. They are all called “scandals,” described as “similar,” and presented in parallel. This is severely misleading.

Clearly, the Nazi experiments are morally abhorrent on their face. The Tuskegee study was also deeply unethical by modern standards and, most would argue, even by the standards of its day: it involved no informed consent, and after the discovery that penicillin was an effective treatment for syphilis, continuation of the experiment meant withholding a life-saving medical treatment.

But Milgram’s studies of obedience to authority are a much different case. His research predated the establishment of modern IRBs, but even by modern standards it was an ethical experiment, as the societal benefits from knowledge gained are a strong justification for the use of deception. Indeed, just this year a replication of Milgram’s study was published in the American Psychologist, the flagship journal of the American Psychological Association. The researcher, Jerry M. Burger of Santa Clara University, received permission from his IRB to conduct the replication. He made some adjustments to add further safeguards beyond what Milgram did — but these adjustments were only possible by knowing, in hindsight, the outcome of Milgram’s original experiments. (See: http://www.apa.org/journals/releases/amp641-1.pdf)

Thus, Tuskegee and Milgram are both relevant to modern thinking about research ethics, but for completely different reasons. Tuskegee is an example of a deeply flawed study that violated numerous ethical principles. By contrast, Milgram’s was an ethically sound study whose relevance to modern researchers is in the substance of its findings — to wit, that research subjects are more vulnerable than we might think to the influence of scientific and institutional authority. Yet in spite of these clear differences, the CITI course calls them all “scandals” and presents them in parallel, and alongside other ethically questionable studies, implying that they are all relevant in the same way.

(The parallelism implied with other studies on the list is problematic as well. Take for example the Stanford prison experiment. It would arguably not be approved by a modern IRB. But an important part of its modern relevance is that the researchers discontinued the study when they realized it was harming subjects — anticipating a central tenet of modern research ethics. This is in stark contrast to Tuskegee, where even after an effective treatment for syphilis was discovered, the researchers continued the study and never intervened on behalf of the subjects.)

In conclusion, I strongly urge you to revise your course. It appears that the module is trying to get across the point that biomedical research and social/behavioral research both require ethical standards and regulation — which is certainly true. But the histories, relevant issues, and ramifications are not the same. The attempt to create some sort of parallelism in the presentation (Tuskegee = Milgram? Nazis = Zimbardo?) is inaccurate and misguided, and does a disservice to the legacy of important social/behavioral research.

Sincerely,
Sanjay Srivastava

UPDATE: I got a response a day after I sent the letter. See this post: A very encouraging reply.

UPDATE 7/6/2011: Scratch that. Two years later, they haven’t changed a thing.