Replicability in personality psychology, and the symbiosis between cumulative science and reproducible science

There is apparently an idea going around that personality psychologists are sitting on the sidelines having a moment of schadenfreude during the whole social psychology Replicability Crisis thing.

Not true.

The Association for Research in Personality conference just wrapped up in St. Louis. It was a great conference, with lots of terrific research. (Highlight: watching three of my students give kickass presentations.) And the ongoing scientific discussion about openness and reproducibility had a definite, noticeable effect on the program.

The most obvious influence was the (packed) opening session on reproducibility. First, Rich Lucas talked about the effects of JRP’s recent policy of requiring authors to explicitly talk about power and sample size decisions. The policy has had a noticeable impact on sample sizes of published papers, without major side effects like tilting toward college samples or cheap self-report measures.

Second, Simine Vazire talked about the particular challenges of addressing openness and replicability in personality psychology. A lot of the discussion in psychology has been driven by experimental psychologists, and Simine talked about how the general issues that cut across all of science play out specifically in personality psychology. One cool recommendation she had (not just for personality psychologists) was to imagine that you had to include a “Most Damning Result” section in your paper, where you had to report the one result that looked worst for your hypothesis. How would that change your thinking?*

Third, David Condon talked about particular issues for early-career researchers, though really it was for anyone who wants to keep learning – he had a charming story of how he was inspired by seeing one of his big-name intellectual heroes give a major award address at a conference, then show up the next morning for an “Introduction to R” workshop. He talked a lot about tools and technology that we can use to help us do more open, reproducible science.

And finally, Dan Mroczek talked about research he has been doing with a large consortium to try to do reproducible research with existing longitudinal datasets. They have been using an integrated data analysis framework as a way of combining longitudinal datasets to test novel questions, and to look at issues like generalizability and reproducibility across existing data. Dan’s talk was a particularly good example of why we need broad participation in the replicability conversation. We all care about the same broad issues, but the particular solutions that experimental social psychologists identify aren’t going to work for everybody.

In addition to its obvious presence in the plenary session, reproducibility and openness seemed to suffuse the conference. As Rick Robins pointed out to me, there seemed to be a lot more people presenting null findings in an open, frank way. And talk of which findings replicated and which didn’t, of tempering conclusions from initial data, and so on was common and well received, like it was a normal part of science. Imagine that.

One thing that stuck out to me in particular was the relationship between reproducible science and cumulative science. Usually I think of the first helping the second; you need robust, reproducible findings as a foundation before you can either dig deeper into process or expand out in various ways. But in many ways, the conference reminded me that the reverse is true as well: cumulative science helps reproducibility.

When people are working on the same or related problems, using the same or related constructs and measures, etc. then it becomes much easier to do robust, reproducible science. In many ways structural models like the Big Five have helped personality psychology with that. For example, the integrated data analysis that Dan talked about requires you to have measures of the same constructs in every dataset. The Big Five provide a common coordinate system to map different trait measures onto, even if they weren’t originally conceptualized that way. Psychology needs more models like that in other domains – common coordinate systems of constructs and measures that help make sense of how different research programs fit together.

And Simine talked about (and has blogged about) the idea that we should collect fewer but better datasets, with more power and better but more labor-intensive methods. If we are open with our data, we can do something really well, and then combine or look across datasets better to take advantage of what other people do really well – but only if we are all working on the same things so that there is enough useful commonality across all those open datasets.

That means we need to move away from a career model of science where every researcher is supposed to have an effect, construct, or theory that is their own little domain that they’re king or queen of. Personality psychology used to be that way, but the Big Five has been a major counter to that, at least in the domain of traits. That kind of convergence isn’t problem-free — the model needs to evolve (Big Six, anyone?), which means that people need the freedom to work outside of it; and it can’t try to subsume things that are outside of its zone of relevance. Some people certainly won’t love it – there’s a certain satisfaction to being the World’s Leading Expert on X, even if X is some construct or process that only you and maybe your former students are studying. But that’s where other fields have gone, even going as far as expanding beyond the single-investigator lab model: Big Science is the norm in many parts of physics, genomics, and other fields. With the kinds of problems we are trying to solve in psychology – not just our reproducibility problems, but our substantive scientific ones — that may increasingly be a model for us as well.



* Actually, I don’t think she was only imagining. Simine is the incoming editor at SPPS.** Give it a try, I bet she’ll desk-accept the first paper that does it, just on principle.

** And the main reason I now have footnotes in most of my blog posts.

Norms for the Big Five Inventory and other personality measures

Every once in a while I get emails asking me about norms for the Big Five Inventory. I got one the other day, and I figured that if more than one person has asked about it, it’s probably worth a blog post.

There’s a way of thinking about norms — which I suspect is the most common way of thinking about norms — that treats them as some sort of absolute interpretive framework. The idea is that you could tell somebody, hey, if you got this score on the Agreeableness scale, it means you have this amount of agreeableness.

But I generally think that’s not the right way of thinking about it. Lew Goldberg put it this way:

One should be very wary of using canned “norms” because it isn’t obvious that one could ever find a population of which one’s present sample is a representative subset. Most “norms” are misleading, and therefore they should not be used.

That is because “norms” are always calculated in reference to some particular sample, drawn from some particular population (which BTW is pretty much never “the population of all human beings”). Norms are most emphatically NOT an absolute interpretation — they are unavoidably comparative.

The problem arises because the usual way people talk about norms tends to bury that fact. People say, oh, you scored at the 70th percentile. They don’t go on to say the 70th percentile of what. For published scales that give normed scores, it often turns out to mean the 70th percentile of the distribution of people who somehow made it into the scale author’s convenience sample 20 years ago.

So what should you do to help people interpret their scores? Lew’s advice is to use the sample you have at hand to construct local norms. For example, if you’re giving feedback to students in a class, tell them their percentile relative to the class.

Another approach is to use distributional information from an existing dataset and just be explicit about what comparison you are making and where the data come from. For the BFI, I sometimes refer people to a large dataset of adult American Internet users that I used for a paper. Sample descriptives are in the paper, and we’ve put up a table of means and SDs broken down by age and gender for people who want to make those finer distinctions. You can then use those means and SDs to convert your raw scores into z-scores, and then calculate or look up the normal-distribution percentile. You would then say something like, “This is where you stand relative to a bunch of Internet users who took this questionnaire online.” (You don’t have to use that dataset, of course. Think about what would be an appropriate comparison group and then poke around Google Scholar looking for a paper that reports descriptive statistics for the kind of sample you want.)
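If it helps to see the conversion spelled out, here is a minimal sketch in Python. The mean and SD plugged in at the bottom are invented for illustration; you would substitute the published descriptives for whatever comparison group you choose.

```python
import math

def percentile_from_norms(raw_score, comparison_mean, comparison_sd):
    """Convert a raw scale score to a normal-distribution percentile,
    relative to a stated comparison sample."""
    z = (raw_score - comparison_mean) / comparison_sd
    # Standard normal CDF via the error function (no lookup table needed)
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical comparison values -- substitute the published mean and SD
# for your chosen comparison group.
print(round(percentile_from_norms(4.0, 3.6, 0.7)))  # z ≈ 0.57, about the 72nd percentile
```

Whatever numbers you plug in, the result should be reported the way described above: as a standing relative to a named comparison group, never as an absolute score.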

Either the “local norms” approach or the “comparison sample” approach can work for many situations, though local norms may be difficult for very small samples. If the sample as a whole is unusual in some way, the local norms will remove the average “unusualness” whereas the comparison-sample approach will keep it in there, and you can decide which is the more useful comparison. (For example, an astronaut who scores in the 50th percentile of conscientiousness relative to other astronauts would be around the 93rd percentile relative to college undergrads.) But the most important thing is to avoid anything that sounds absolute. Be consistent and clear about the fact that you are making comparisons and about who you are comparing somebody to.
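The astronaut example works out if astronauts average about 1.5 undergraduate SDs above the undergraduate mean on conscientiousness, since the normal CDF at z = 1.5 is about .93. A quick sanity check, with all descriptives invented for illustration:

```python
import math

def normal_percentile(z):
    """Percentile for a z-score under the standard normal distribution."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Invented descriptives: suppose astronauts average 1.5 undergrad SDs
# above the undergrad mean on conscientiousness.
undergrad_mean, undergrad_sd = 3.5, 0.6
astronaut_mean = undergrad_mean + 1.5 * undergrad_sd

# The median astronaut sits at the 50th percentile locally (z = 0 among
# astronauts), but well above the undergraduate distribution:
z_vs_undergrads = (astronaut_mean - undergrad_mean) / undergrad_sd
print(round(normal_percentile(z_vs_undergrads)))  # 93
```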

Two obviously wrong statements about personality and political ideology

On the heels of yesterday’s post about the link between religiosity and conservatism, I came across a New York Magazine article discussing recent research on personality, genetics, and political ideology. The article summarizes a lot of really interesting work by John Jost on ideology, Jonathan Haidt on moral foundations, David Pizarro on emotional responses and politics, etc. etc. But when it says things like…

Over the past few years, researchers haven’t just tied basic character traits to liberalism and conservatism, they’ve begun to finger specific genes they say hard-wire those ideologies.

… I just cringe. Research on personality and genetics does not support the conclusion that ideology is hard-wired, any more than our work on how political discourse ties religiosity to politics shows that ideology is a blank-slate social artifact.

Any attempt to understand the role of personality and genetics in political attitudes and ideology will have to avoid endorsing two obviously wrong conclusions:

1. Ideology and political attitudes have nothing to do with personality or genes.

2. Genes code for ideology and political attitudes in a clear, unconditional way.

Maybe in some distal and complex way our genes code for variations in how different psychological response systems work — under what conditions they are more and less active, how sensitive they are to various inputs, how strongly they produce their various responses, etc. In situ, these individual differences are going to interact with things like how messages are framed, how they are presented in conjunction with other information and stimuli, who is presenting the information, what we think the leaders and fellow members of our important social groups think and feel, etc.

What this interactivity means for doing science is that if you hold one thing constant (whether by experimental control or by averaging over differences) and let the other one vary, you will find an effect of the one you let vary. For example, if you look at how different people respond to the same set of sociopolitical issues, you are going to get reliable patterns of different responses that reflect people’s personalities. And if you frame and present the same issue in several different ways, and measure the average effect of the different framings, you are going to get different average responses that reflect message effects. Both are interesting experimental results, but both are testing only pieces of a plausible theoretical model.
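To make that concrete, here is a toy model (all coefficients invented) in which a person's response to a political message depends jointly on a disposition and on how the message is framed. Holding the framing constant reveals a "personality effect"; averaging over people reveals a "message effect"; both are real, and each is only a slice of the joint function:

```python
def response(disposition, framing):
    # Toy response function with invented coefficients: a main effect of
    # the person, a main effect of the message, and their interaction.
    return 0.5 * disposition + 0.3 * framing + 0.4 * disposition * framing

dispositions = [-1.0, 0.0, 1.0]  # individual differences across people
framings = [0.0, 1.0]            # two framings of the same issue

# Hold the message constant and let people vary: a "personality effect"
personality_effect = response(1.0, 1.0) - response(-1.0, 1.0)

# Average over people and let the framing vary: a "message effect"
def average_response(framing):
    return sum(response(d, framing) for d in dispositions) / len(dispositions)

message_effect = average_response(1.0) - average_response(0.0)

print(personality_effect, message_effect)  # both nonzero (≈1.8 and ≈0.3)
```

Each contrast is a legitimate experimental result, but neither one alone recovers the interaction term that the full function contains.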

Most researchers know this, I think. For example, from the NYMag article:

Fowler laughs at the idea that he had isolated a single gene responsible for liberalism—an idea circulated in much of the chatter about the study. “There are hundreds if not thousands of genes that are all interacting to affect complex social behaviors,” Fowler says, and scientists have only a rough sense of that process. “There’s a really long, complex causal chain at work here,” says UC-Berkeley political scientist Laura Stoker, “and we won’t get any real understanding without hundreds and hundreds of years’ more research.”

Let’s stay away from lazy and boring concepts like hard-wired. The real answers are going to be a lot more interesting.

Where does the link between religiosity and conservatism come from?

My collaborator Ari Malka has an op-ed titled Are religious Americans always conservative?

Why, then, does religiosity relate to conservatism at all? One possibility is that there is some type of organic connection between being a religious person and being a conservative person. Perhaps the traits, moral standards and ways of thinking that characterize religious people also naturally lead them to prefer conservative social outcomes and policies. Another possibility, however, is that this relation really has to do with the messages from political and religious discourse, and how some people respond to these messages.

Two pieces of evidence support this latter explanation…

The evidence comes from a new paper we have out in Political Psychology. Here’s the abstract:

Some argue that there is an organic connection between being religious and being politically conservative. We evaluate an alternative thesis that the relation between religiosity and political conservatism largely results from engagement with political discourse that indicates that these characteristics go together. In a combined sample of national survey respondents from 1996-2008, religiosity was associated with conservative positions on a wide range of attitudes and values among the highly politically engaged, but this association was generally weaker or nonexistent among those less engaged with politics. The specific political characteristics for which this pattern existed varied across ethno-religious groups. These results suggest that whether religiosity translates into political conservatism depends to an important degree on level of engagement with political discourse.

Malka, A., Lelkes, Y., Srivastava, S., Cohen, A. B., & Miller, D. T. (2012). The association of religiosity and political conservatism: The role of political engagement. Political Psychology, 33, 275-299.

The adaptive and flexible workplace personality: What should I talk about?

I was invited to give a talk at an in-service day for my university’s library staff. They are asking people from around the university to contribute, and since our library staff does so much for the rest of us, I thought it would be nice to help out. (Seriously, who doesn’t think librarians are awesome?)

The theme of the in-service day is “Exercising Your Adaptability and Flexibility” (which I think is geared toward helping people think about changes in technology and other kinds of workplace changes). The working title for my talk is “The Adaptive and Flexible Workplace Personality.” They gave me pretty wide latitude to come up with something that fits that theme, and obviously I want to keep it grounded in research. I have a few ideas, but I thought I’d see if I can use the blog to generate some more.

Personality and social psychologists, what would you talk about? What do you think would be important and useful to include? I have one hour, and the staff will be a mix of professional librarians, IT folks, other library staff, etc. I’d like to keep it lively, and maybe focus on 2 or 3 take-home points that people would find useful or thought-provoking.

Fun with Google Correlate

A new tool called Google Correlate lets you input a search term and then creates a state-by-state map of how many people search for it. It then shows you what other search terms have similar state-by-state patterns.

A search for my name (what else would I have plugged in first?) shows the most searches coming from my home state of Oregon, and a notable lack of interest stemming from the Great Plains. Of note: interest in McBain: The Movie follows a very similar regional pattern:

[Image: Google Correlate results for “sanjay srivastava” and “mcbain: the movie”]

I’m trying to think of a good scientific use for this tool, but I keep getting stuck on the fact that the top regional correlate of personality is “nipple stimulation.”

Personality traits are unrelated to health (if you only measure traits that are unrelated to health)

In the NY Times, Richard Sloan writes:

It’s true that in some respects we do have control over our health. By exercising, eating nutritious foods and not smoking, we reduce our risk of heart disease and cancer. But the belief that a fighting spirit helps us to recover from injury or illness goes beyond healthful behavior. It reflects the persistent view that personality or a way of thinking can raise or reduce the likelihood of illness.

But there’s no evidence to back up the idea that an upbeat attitude can prevent any illness or help someone recover from one more readily. On the contrary, a recently completed study of nearly 60,000 people in Finland and Sweden who were followed for almost 30 years found no significant association between personality traits and the likelihood of developing or surviving cancer. Cancer doesn’t care if we’re good or bad, virtuous or vicious, compassionate or inconsiderate. Neither does heart disease or AIDS or any other illness or injury.

Sloan, a researcher in behavioral medicine, is trying to make a point about “a fighting spirit,” but in the process he makes a larger point about personality traits being unassociated with health. And when he overreaches, he is clearly and demonstrably wrong.

That study of 60,000 people (which the Times helpfully links to) used the Eysenck Personality Inventory and thus only looked at two personality traits, extraversion and neuroticism. They found no association between those traits and incidence of cancer or survival after cancer. But the problem is that the researchers didn’t measure conscientiousness, the personality trait factor that has been most robustly associated with all kinds of health behaviors and health outcomes (including early mortality).

Of course, conscientiousness isn’t really about upbeat attitude or a fighting spirit. It’s more about diligently taking care of yourself in many small ways over a lifetime. In that respect Sloan’s central point about “fighting spirit” isn’t disputed by the conscientiousness findings. (Researchers working in the substantial optimism and health literature may or may not feel differently.) Moreover, the moral and philosophical implications — whether we should praise or blame sick people for their attitudes — go well beyond the empirical science (though they certainly can and should be informed by it). But a reader could easily get confused that Sloan is making a broader point that personality doesn’t matter in health outcomes — and that just ain’t so.

I’m not sure Sloan intended to take such a broad swipe against personality traits, given that his own research has examined links between hostility and cardiac outcomes. Then again, browsing his publications leaves me confused. His op-ed says that being “compassionate or inconsiderate” has nothing to do with heart disease; but this abstract from one of his empirical studies concludes that “[trait] hostility may be associated with risk for cardiovascular disease through its effects on interpersonal interactions.” I haven’t read his papers — I just Google Scholared him this morning — so I’ll give him the benefit of the doubt that there’s some distinction I’m missing out on.

Self-selection into online or face-to-face studies

A new paper by Edward Witt, Brent Donnellan, and Matthew Orlando looks at self-selection biases in subject pools:

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version…

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d=-.26) but statistically significant. Regards more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What’s more, men were far more likely to volunteer later in the semester, even after controlling for average personality difference between the sexes. For example, 18 per cent of week one participants were male compared with 52 per cent in the final, 13th week.

Self-selection in subject pools is not a new topic — I’ve heard plenty of people talk about an early-participant conscientiousness effect (though I don’t know if that’s been documented or if it’s just lab-lore). But the analyses of personality differences in who takes online versus in-person studies are new, as far as I know — and they definitely add a new wrinkle.

My lab’s experience has been that we get a lot more students responding to postings for online studies than face-to-face, but it seems like we sometimes get better data from the face-to-face studies. Personality measures don’t seem to be much different in quality (in terms of reliabilities, factor structures, etc.), but with experiments where we need subjects’ focused attention for some task, the data are a lot less noisy when they come from the lab. That could be part of the selection effect (altruistic students might be “better” subjects to help the researchers), though I bet a lot of it has to do with old-fashioned experimental control of the testing environment.

What could be done? When I was an undergrad taking intro to psych, each student was given a list of studies to participate in. All you knew was the codenames of the studies and some contact information, and it was your responsibility to arrange with the experimenter to take the experiment. It was a pain on all sides, but it was a good way to avoid these kinds of self-selection biases.

Of course, some people would argue that the use of undergraduate subject pools itself is a bigger problem. But given that they aren’t going away, this is definitely something to pay attention to.

McAdams on Bush: a psychobiography

Personality psychologist Dan McAdams has a new book out called George W. Bush and the Redemptive Dream. Dan was my undergraduate advisor, and I saw him give a provocative talk about this work at last summer’s ARP conference. I just told my wife to add the book to my Christmas list.

Most of McAdams’s research centers on personal narratives — the stories that people create and tell about themselves, and what role these stories play in identity and personality. But in the talk — and I gather in the book as well — Dan drew on a variety of theories and frameworks to understand some of Bush’s most consequential actions before and during his time in office. Here’s a brief description from an announcement I got about the book:

This short, streamlined psychological biography uses some of the best scientific concepts in personality and social psychology to shed light on Bush’s life, with a focus on understanding his fateful decision, as President, to launch a military invasion of Iraq.  The analysis draws heavily from contemporary research on Big Five traits, psychological goals and strivings, and narrative identity, as well as social identity theory, evolutionary psychology, research on motivated social cognition, research on authoritarianism and related concepts in political psychology, and Jon Haidt’s brilliant synthesis of moral intuitions.

Once upon a time, psychobiography was a pretty well-respected enterprise in personality psychology. I think it’s fallen out of favor in part because of the field’s emphasis on the Big Five traits and other discrete, fractionated variables. That emphasis has had benefits, focusing the field on constructs and theories that we can rigorously quantify and formalize.

But early personality psychologists like Gordon Allport and Henry Murray emphasized that any comprehensive study of personality must be able to account for the person as an integrated whole and a unique individual. The field has lost track of that to a substantial degree. But unlike earlier psychobiographers, who had very little and/or bad science to draw upon, McAdams has almost a century’s worth of theories and empirical research to bring to bear. That doesn’t mean the task is easy now. But I’m definitely looking forward to reading how Dan took it on.

Is there anything special about the Five-Factor Model?

I recently put up a clip-job list of all the ideas I’ve been too busy or lazy to flesh out into real posts in the last month. One of the items was about a recent Psych Inquiry commentary I wrote in response to a piece by Jack Block. Tal actually read the commentary (thanks, Tal!) and commented:

…What I couldn’t really get a sense of from your paper is whether you actually believe there’s anything special about the FFM as distinct from any number of other models, or if you view it as just a matter of convenience that the FFM happens to be the most widely adopted model. I suspect Block would have said that even if you think the FFM is all in the eyes of the beholder, there’s still no good reason to think that it’s the right structure, and that with only slightly different assumptions and a slightly different historical trajectory, we could all have been working with a six or seven-factor model. So I guess my question would be: should one read the title of your paper as saying that the FFM is the model that describes the structure of social perceptions, or are you making a more general point about all psychometric models based on semantically-mediated observations?

That’s a great question.

As I think I make clear in the paper, I think it’s highly unlikely that the FFM is isomorphic with some underlying, extra-perceptual reality of bodies or behavior. In other words, I don’t expect we’ll find five brain systems whose functioning maps one-to-one onto the five factors. I could be wrong, but I have seen exactly zero evidence that makes me think that’s the case.

But since I argue in the paper that the FFM is a model of the social concerns of ordinary social perceivers, I think it’s fair to ask whether it’s isomorphic with something else. Like maybe there are five basic, universal social concerns that all humans share, or something like that. And my answer is… no, I don’t think so.

For one thing, I don’t think the cross-cultural evidence is strong enough to support that conclusion. (Being in the same department as Gerard Saucier has helped me see that.) McCrae and Costa have done a very good job of showing that the FFM can be exported to other cultures — if we give people the FFM as a meaning system, they’ll use it in roughly the way we expect. But emic studies have been a lot more varied.

I also am not convinced that factor analysis — a method that derives independent factors from between-person covariance structures — is the “true” way to model person perception and social meaning. Useful? As a way of deriving a descriptive/taxonomic model, absolutely. Orthogonal factor analysis has some very useful properties, like mapping a multidimensional space very efficiently. And there’s a consistent something behind that useful model, in the sense that something is causing that five-factor structure to replicate (conditional on the item selection procedures, samples from certain cultures, statistical assumptions, etc.).

But there’s no reason to think that that means the five-factor structure has a simple, one-to-one relationship to whatever reality it’s grounded in — whether the reality of target persons’ behavior or of perceivers’ concerns. Why would social concerns be orthogonal (and by implication, causally unrelated to one another)? Why, if these are major themes in human social concerns, don’t we have good words for them at the five-factor level of abstraction? (“Agreeableness”? Blech. Worst factor label ever.) Why do they emerge in the between-person covariance structure but not in experimental methods that probe social representation at the individual level (à la Dabady, Bell, & Kihlstrom, 1999)?

As to Tal’s last question (“are you making a more general point about all psychometric models based on semantically-mediated observations?”): I think I say this in the paper, but I don’t think there is, or ever will be, any structural model of personality that isn’t pivotally dependent on human perception and judgment. (Ouch, double negative. Put more straightforwardly: all models of personality depend on human interpretations of personality.) I have a footnote where I comment that the Q sort can be seen as a model of what Jack Block wants to know about persons. I’ll even extend that to models that use biological constructs as their units rather than linguistic ones, but maybe I’ll save that argument for another day…