Survey on perceptions of hypocrisy

Brian Clark, a grad student I work with at UO, is conducting a survey. It’s not paid, but if it sounds interesting to you, or if you’d like to help out, please consider taking it.

Survey – Perceptions of Hypocrisy

B.A.M. Clark – University of Oregon

Recruiting volunteers to respond to an online questionnaire lasting approximately 20 minutes. It consists of making judgments about several hypothetical scenarios and providing some demographic information. You must be at least 18 years old. The responses you provide are anonymous. Your help is unpaid, voluntary, and very much appreciated.

Website for the survey: https://oregon.qualtrics.com/SE/?SID=SV_3PGVixoWN9iI0F6


The brain scans, they do nothing

Breaking news: New brain scan reveals nothing at all.

‘This is an amazing discovery,’ said leading neuroscientist Baroness Susan Greenfield. ‘The pictures tell us nothing about how the brain works, provide us with no insights into the nature of human consciousness, and all with such lovely colours.’ …

The development, which has been widely reported around the world, is also significant because it allows journalists to publish big fancy pictures of the brain that look really impressive while having little or no explanatory value.

I’ve previously mentioned the well-documented bias to think that brain pictures automatically make research seem more sciencey, even when the pictures are irrelevant to the conclusions. Satire makes that point a lot better, though.

Obama on personality change and peer report

Personality psychologists, take note: the president has taken a position on two important issues in our field. Personality does change during adulthood as a function of important social roles. And informant reports from close others are an excellent way to measure personality traits.

Watch: the relevant part is at 10:18.

Personality traits are unrelated to health (if you only measure traits that are unrelated to health)

In the NY Times, Richard Sloan writes:

It’s true that in some respects we do have control over our health. By exercising, eating nutritious foods and not smoking, we reduce our risk of heart disease and cancer. But the belief that a fighting spirit helps us to recover from injury or illness goes beyond healthful behavior. It reflects the persistent view that personality or a way of thinking can raise or reduce the likelihood of illness.

But there’s no evidence to back up the idea that an upbeat attitude can prevent any illness or help someone recover from one more readily. On the contrary, a recently completed study of nearly 60,000 people in Finland and Sweden who were followed for almost 30 years found no significant association between personality traits and the likelihood of developing or surviving cancer. Cancer doesn’t care if we’re good or bad, virtuous or vicious, compassionate or inconsiderate. Neither does heart disease or AIDS or any other illness or injury.

Sloan, a researcher in behavioral medicine, is trying to make a point about “a fighting spirit,” but in the process he makes a larger point about personality traits being unassociated with health. And when he overreaches, he is clearly and demonstrably wrong.

That study of 60,000 people (which the Times helpfully links to) used the Eysenck Personality Inventory and thus only looked at two personality traits, extraversion and neuroticism. They found no association between those traits and incidence of cancer or survival after cancer. But the problem is that the researchers didn’t measure conscientiousness, the personality trait factor that has been most robustly associated with all kinds of health behaviors and health outcomes (including early mortality).

Of course, conscientiousness isn’t really about an upbeat attitude or a fighting spirit. It’s more about diligently taking care of yourself in many small ways over a lifetime. In that respect, Sloan’s central point about “fighting spirit” isn’t disputed by the conscientiousness findings. (Researchers working in the substantial literature on optimism and health may or may not feel differently.) Moreover, the moral and philosophical implications — whether we should praise or blame sick people for their attitudes — go well beyond the empirical science (though they certainly can and should be informed by it). But a reader could easily come away thinking that Sloan is making a broader point that personality doesn’t matter for health outcomes — and that just ain’t so.

I’m not sure Sloan intended to take such a broad swipe at personality traits, given that his own research has examined links between hostility and cardiac outcomes. Then again, browsing his publications leaves me confused. His op-ed says that being “compassionate or inconsiderate” has nothing to do with heart disease; but the abstract of one of his empirical studies concludes that “[trait] hostility may be associated with risk for cardiovascular disease through its effects on interpersonal interactions.” I haven’t read his papers — I just Google Scholared him this morning — so I’ll give him the benefit of the doubt that there’s some distinction I’m missing.

The ongoing legacy of a case of scientific misconduct

Almost a decade ago, a scientific misconduct scandal shocked social psychologists. A prominent researcher on a career fast track was discovered to have fabricated data and committed other forms of misconduct in four articles in prominent journals (JPSP, PSPB, and Psychological Science). The studies were funded by NIH and published while she was at Harvard. A joint investigation by NIH and Harvard resulted in the researcher, Karen Ruggiero, admitting to misconduct, retracting the articles, and leaving academia.

The other day I read a post by Ben Goldacre about a new blog called Retraction Watch that follows scientific retractions. Goldacre mentions a study that followed up on citations of a retracted article from the late ’80s. That immediately reminded me of the Ruggiero incident, which was a big deal when I was a grad student. And it made me wonder: are people still citing Karen Ruggiero’s retracted papers?

Fortunately, it’s relatively easy to do a quick check via Google Scholar — just look up the (now-retracted) articles, click the “Cited by” link, and count the number of hits. The investigation report came out in December 2001, and the last of the retractions was published in March 2002. We should probably allow for some publication lag, so let’s forgive anything with a publication year of 2002 or earlier. How many citations are there from 2003 onward?
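The counting step itself is trivial once you have the hits in hand. Below is a minimal sketch in Python, assuming you have exported the “Cited by” results for one article into a CSV by hand (Google Scholar has no official API, so the export is manual; the filename and column names here are hypothetical):

```python
import csv

# A minimal sketch of the tallying step, assuming the "Cited by" hits for
# one retracted article have been exported by hand into a CSV with "title"
# and "year" columns. The filename and column names are hypothetical.
def count_post_retraction_citations(path, cutoff_year=2003):
    seen_titles = set()
    count = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            title = row["title"].strip().lower()
            if title in seen_titles:  # drop apparent duplicate listings
                continue
            seen_titles.add(title)
            if int(row["year"]) >= cutoff_year:  # forgive publication lag
                count += 1
    return count

print(count_post_retraction_citations("ruggiero_marx_1999_citedby.csv"))
```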

Here’s what I found:

  • Ruggiero, K.M., & Marx, D.M. (1999). Less pain and more to gain: Why high-status group members blame their failure on discrimination. Journal of Personality and Social Psychology, 77, 774-784. Cited 7 times since 2003. (Google Scholar gives 9 hits, but 2 appear to be duplicates.)
  • Ruggiero, K.M., Steele, J., Hwang, A., & Marx, D.M. (2000). Why did I get a ‘D’? The effects of social comparisons on women’s attributions to discrimination. Personality and Social Psychology Bulletin, 26, 1271-1283. Cited 2 times since 2003.
  • Ruggiero, K.M., & Major, B.N. (1998). Group status and attributions to discrimination: Are low- or high-status group members more likely to blame their failure on discrimination? Personality and Social Psychology Bulletin, 24, 821-838. Cited 9 times since 2003.
  • Ruggiero, K.M., Mitchell, J.P., Krieger, N., Marx, D.M., & Lorenzo, M.L. (2000). Now you see it, now you don’t: Explicit versus implicit measures of the personal/group discrimination discrepancy. Psychological Science, 22, 57-67. Cited 3 times since 2003.

[Let me pause here to note that the investigation concluded Ruggiero acted alone. I have listed complete citations, but please keep in mind that her co-authors were not responsible for the misconduct.]

Are these numbers a lot or a little? You can judge for yourself, but it’s at least worth noting that all are greater than zero. Some of the citations are as recent as 2010. I did not read the citing articles, but based on the titles, none appeared to be discussions of scientific misconduct; all seemed to be related to the substance of the retracted papers.

How could that happen? Since most people find articles to cite through electronic databases, I thought I’d take a look at how these articles are listed. When I looked up these articles in PsycINFO, all four listings clearly state that the articles were retracted. But that isn’t true of other databases.

When I looked up the JPSP article in Google Scholar, clicking on the title took me to a ScienceDirect link (ScienceDirect is a product of Elsevier, although Elsevier does not publish JPSP). The ScienceDirect listing contained the title, abstract, etc. but did not say anything about the article having been retracted.

Clicking on both of the PSPB articles and the Psychological Science article in Google Scholar led me to entries in the Sage Journals Online database (Sage publishes those journals). None of those links mentioned the retractions. In fact, in addition to the usual information on each article (title, abstract, etc.), Sage goes a step further and lists other articles that cite them!

Google Scholar seems to give different links depending on whether you are on an institutional network that has access to certain databases, so your results may vary depending on where you are (and perhaps what search terms you use). However, it’s also worth noting that Google Scholar itself did not flag the articles as having been retracted. Sometimes my searches separately brought up the retraction notices on the same page (but always lower), and sometimes they didn’t.

Perhaps worst of all, when I retrieved the electronic full text of all four articles, I got them in their original form. None was marked to indicate that it had subsequently been retracted.

Is this a problem? I think it is. Papers do get corrected or retracted from time to time (not always for sinister reasons like scientific misconduct), and it is important that researchers don’t keep citing them. I don’t know if this is an anomaly, but it does make you wonder if some databases are not being properly updated with retractions — and what effect that is having on science.

Self-selection into online or face-to-face studies

A new paper by Edward Witt, Brent Donnellan, and Matthew Orlando looks at self-selection biases in subject pools:

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study, or a face-to-face version…

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d = -.26) but statistically significant. Regarding more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What’s more, men were far more likely to volunteer later in the semester, even after controlling for average personality differences between the sexes. For example, 18 per cent of week-one participants were male, compared with 52 per cent in the final, 13th week.
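For a concrete sense of what an effect of that size means, here is a minimal sketch of how Cohen’s d is computed from two groups. The means, spreads, and group sizes below are made up (chosen only to roughly mimic the reported 30/70 split and an effect of about the reported magnitude); the real numbers are in Witt et al.’s paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores on a 1-5 extraversion scale; the means, SDs, and the
# rough 30/70 split are made up to mimic an effect near the reported size.
face_to_face = rng.normal(loc=3.4, scale=0.8, size=150)
online = rng.normal(loc=3.2, scale=0.8, size=350)

def cohens_d(a, b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"d = {cohens_d(online, face_to_face):.2f}")  # roughly -0.25 here
```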

Self-selection in subject pools is not a new topic — I’ve heard plenty of people talk about an early-participant conscientiousness effect (though I don’t know if that’s been documented or if it’s just lab-lore). But the analyses of personality differences in who takes online versus in-person studies are new, as far as I know — and they definitely add a new wrinkle.

My lab’s experience has been that we get a lot more students responding to postings for online studies than face-to-face, but it seems like we sometimes get better data from the face-to-face studies. Personality measures don’t seem to be much different in quality (in terms of reliabilities, factor structures, etc.), but with experiments where we need subjects’ focused attention for some task, the data are a lot less noisy when they come from the lab. That could be part of the selection effect (altruistic students might be “better” subjects to help the researchers), though I bet a lot of it has to do with old-fashioned experimental control of the testing environment.
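By “reliabilities” I mean something as simple as internal consistency. For the curious, here is a minimal sketch of computing Cronbach’s alpha from a subjects-by-items response matrix; the data below are simulated, not ours, and in practice you would compute alpha separately for the online and lab samples and compare.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Simulated demo: eight noisy items driven by one common latent trait.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                     # 200 subjects
items = trait + rng.normal(scale=1.0, size=(200, 8))  # 8 items
print(f"alpha = {cronbach_alpha(items):.2f}")
```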

What could be done? When I was an undergrad taking intro psych, each student was given a list of studies to participate in. All you knew were the codenames of the studies and some contact information, and it was your responsibility to contact the experimenter and arrange a time to participate. It was a pain on all sides, but it was a good way to avoid these kinds of self-selection biases.

Of course, some people would argue that the use of undergraduate subject pools itself is a bigger problem. But given that they aren’t going away, this is definitely something to pay attention to.

McAdams on Bush: a psychobiography

Personality psychologist Dan McAdams has a new book out called George W. Bush and the Redemptive Dream. Dan was my undergraduate advisor, and I saw him give a provocative talk about this work at last summer’s ARP conference. I just told my wife to add the book to my Christmas list.

Most of McAdams’s research centers on personal narratives — the stories that people create and tell about themselves, and what role these stories play in identity and personality. But in the talk — and I gather in the book as well — Dan drew on a variety of theories and frameworks to understand some of Bush’s most consequential actions before and during his time in office. Here’s a brief description from an announcement I got about the book:

This short, streamlined psychological biography uses some of the best scientific concepts in personality and social psychology to shed light on Bush’s life, with a focus on understanding his fateful decision, as President, to launch a military invasion of Iraq.  The analysis draws heavily from contemporary research on Big Five traits, psychological goals and strivings, and narrative identity, as well as social identity theory, evolutionary psychology, research on motivated social cognition, research on authoritarianism and related concepts in political psychology, and Jon Haidt’s brilliant synthesis of moral intuitions.

Once upon a time, psychobiography was a pretty well-respected enterprise in personality psychology. I think it’s fallen out of favor in part because of the field’s emphasis on the Big Five traits and other discrete, fractionated variables. That emphasis has had benefits, focusing the field on constructs and theories that we can rigorously quantify and formalize.

But early personality psychologists like Gordon Allport and Henry Murray emphasized that any comprehensive study of personality must be able to account for the person as an integrated whole and a unique individual. The field has lost track of that to a substantial degree. But unlike earlier psychobiographers, who had very little and/or bad science to draw upon, McAdams has almost a century’s worth of theories and empirical research to bring to bear. That doesn’t mean the task is easy now. But I’m definitely looking forward to reading how Dan took it on.

Search terms people have used to get to my blog today, and my guess who was doing the searching

name 5 psychological disorders: Student, furtively typing on an iPhone in the middle of an Intro to Psych midterm

time to rejection science: Pessimistic graduate student

brains cognition pictures: David Brooks

giving up on the academic job search psychology: Very pessimistic graduate student

does toy story 3 make people cry: Someone picking out a rental for a fourth date

sad animation clips: Graduate student who has never heard of the Handbook of Emotion Elicitation and Assessment

personalities of economists: Optimistic graduate student

academic job interview clothes: Lucky graduate student

pashler learning styles: Masters student in education about to have mind blown


The measurable value of a humanities education

Cedar Riener discusses the importance of the humanities and arts in higher education. His post is a response to a recent Stanley Fish column on a crisis in the humanities. I’m glad Cedar wrote this post, because when I read Fish’s piece, I got through the part where he dismisses all of the usual arguments for the humanities, then reread it twice and couldn’t find him presenting any good arguments in their favor.

Cedar reviews recent evidence showing benefits of bilingualism and study abroad (making the case for departments of French, Russian, etc.). But importantly, he also discusses the difficulty of measuring outcomes:

Finally, I think a take-home message we should all get from the science of why there is value in the humanities (and the liberal arts in general) is that we should be humble in our drive to tie education to specific and direct goals.  This approach is short-sighted, not just because bilingualism improves creativity and prevents cognitive aging, but because most of the effects of any sort of education are very very hard to measure.  We psychologists can assail education research for not providing clear answers on anything, but at some point we have to conclude that the kind of clear answers we want just don’t exist.

Outcome-oriented policies are only as good as somebody’s ability to list, define, and measure outcomes. A lot of the criticism of standardized testing centers on this issue. As a scientist who does a fair amount of psychometrics in my line of work, I’m pretty optimistic about our ability to construct assessments if we have a good and comprehensive definition of what we want to measure. But the having-a-good-and-comprehensive-definition part is hellaciously hard when it comes to things like the effects of education. If universities keep shifting to “accountability” policies before we can solve this problem, we are in for a rough time.

Want to make people cry? Try sad kids, sad animals, or sad animation

Among the difficulties of doing experimental research on emotions is getting people to have them in the lab, where you can study them up close. There are quite a few ways researchers try to elicit emotions — in fact, half of a recent book is dedicated to the topic.

One of the most common approaches is to show subjects film clips. In principle, film clips ought to have a lot of advantages for an experimenter. Unlike asking people to recall personal memories, film clips are standardized – everybody gets the same treatment, so there are no differences in the content of the emotion-eliciting stimulus. And film clips can be a lot more engrossing and evocative than other standardizable stimuli like pictures or music.

That’s the ideal. In practice, though, it can be very hard to find film clips that will elicit a similar reaction from lots of different people. One person’s tearjerker is another person’s boring chick-flick. In fact, when I was part of a team a few years back that was developing a set of new film clips to elicit sadness in the lab, the two female grad students who were trying to find the clips kept getting pilot data showing that the men were unmoved by anything. It turned out that the grads were picking clips that they personally found sad — which was all Beaches-style stuff about women’s relationships with women. We eventually had to ban anything with Susan Sarandon. The stuff that worked best with everybody, men and women alike, turned out to be clips of sad kids and sad animals. (Futurama fans will know what I’m talking about. Two words: Jurassic Bark.)

Perhaps that shouldn’t have been too much of a surprise. At the time, the state of the art in sadness elicitation was a clip from The Champ in which a seven-year-old Ricky Schroder watches his father die in front of him. That one still works well, and the other clips that ended up working had similar themes.

Now, according to a recent article in Time, it seems like we can add animated films to the list of guaranteed tear-elicitors. Apparently there was an epidemic of adults weeping at screenings of Toy Story 3. I haven’t seen that one, but I did see Up, and you’d have to be a psychopath not to at least well up a little bit during the flashback sequence. A filmmaker has an interesting theory on why that may be:

Lee Unkrich, who, having directed Toy Story 3, co-directed and edited Toy Story 2 and edited the original, is something of an expert; he has a few theories on why the latest film set people off. The most interesting is that animated movies can be more affecting than movies with real people in them. “Live action movies are someone else’s story,” he says. “With animation, audiences can’t think that. Their guards are down.” Because the characters are clearly not alive, he suggests counterintuitively, people identify with them more readily.

It’s an interesting explanation, and it becomes especially interesting when you try to extend it to animation of adult human characters (like Up) or live-action movies about kids. Why is a live Susan Sarandon perceived as “somebody else” by a substantial part of the audience (especially people of a different age group and gender), while audiences have no problem immersing themselves in an animated Carl and Ellie Fredricksen or a live, seven-year-old Ricky Schroder? What triggers that barrier for some people and drops it for others? The answers would reach far past methodological questions about how to elicit emotions in the lab, and get at basic questions of empathy and identity.