From the department of “things that could be said about almost anything”

If you want to change the behavior, you have to change the incentives. Moralistic huffing and puffing won’t cut it.

That sentence jumped out at me as being true of just about every domain of public policy.

(In this case it’s from Dean Dad’s blog post about public higher ed outsourcing its growth to private higher ed. My own institution has essentially done this internally. Our finances are more like a public institution’s with regard to in-state students, and like a private institution’s with regard to out-of-state students. Since our state’s contribution to higher ed is dismal and dropping, the higher-ups have decided to balance the budget through growth — but that growth comes almost entirely from admitting more out-of-state students.)

Prepping for SEM

I’m teaching the first section of a structural equation modeling class tomorrow morning. This is the third time I’m teaching the course, and I find that the more times I teach it, the less traditional SEM I actually cover. I’m dedicating quite a bit of the first week to discussing principles of causal inference, spending the second week re-introducing regression as a modeling framework (rather than a toolbox statistical test), and returning to causal inference later when we talk about path analysis and mediation (including assigning a formidable critique by John Bullock et al., forthcoming in JPSP).

The reason I’m moving in that direction is that I’ve found that a lot of students want to rush into questionable uses of SEM without understanding what they’re getting into. I’m probably guilty of having done that, and I’ll probably do it again someday, but I’d like to think I’m learning to be more cautious about the kinds of inferences I’m willing to make. To people who don’t know better, SEM often seems like magical fairy dust that you can sprinkle on cross-sectional observational data to turn it into something causally conclusive. I’ve probably been pretty far on the permissive end of the spectrum that Andrew Gelman talks about, in part because I think experimental social psychology sometimes overemphasizes internal validity to the exclusion of external validity (and I’m not talking about the special situations that Mook gets over-cited for). But I want to instill an appropriate level of caution.

BTW, I just came across this quote from Donald Campbell and William Shadish: “When it comes to causal inference from quasi-experiments, design rules, not statistics.” I’d considered writing “IT’S THE DESIGN, STUPID” on the board tomorrow morning, but they probably said it nicer.

On base rates and the “accuracy” of computerized Facebook gaydar

I never know what to make of reports stating the “accuracy” of some test or detection algorithm. Take this example, from a New York Times article by Steve Lohr titled “How Privacy Vanishes Online”:

In a class project at the Massachusetts Institute of Technology that received some attention last year, Carter Jernigan and Behram Mistree analyzed more than 4,000 Facebook profiles of students, including links to friends who said they were gay. The pair was able to predict, with 78 percent accuracy, whether a profile belonged to a gay male.

I have no idea what “78 percent accuracy” means in this context. The most obvious answer would seem to be that of all 4,000 profiles analyzed, 78% were correctly classified as gay versus not gay. But if that’s the case, I have an algorithm that beats the pants off of theirs. Are you ready for it?

Say that everybody is not gay.

Figure that around 5 to 10 percent of the population is gay. If these 4,000 students are representative of that, then saying not gay every time will yield an “accuracy” of 90-95%.

But wait — maybe by “accuracy” they mean what percentage of gay people are correctly identified as such. In that case, I have an algorithm that will be 100% accurate by that standard. Ready?

Say that everybody is gay.

You can see how silly this gets. To understand how good the test is, you need two numbers: sensitivity (the percentage of gay profiles correctly identified as gay) and specificity (the percentage of non-gay profiles correctly identified as not gay). My algorithms each score 100% on one and 0% on the other, which means they’re both crap. (A good test needs to be high on both.) I am hoping that the MIT class’s algorithm was a little better, and that the useful numbers just didn’t get translated. But this news report tells us nothing we need to know to evaluate it.
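To make the arithmetic concrete, here’s a minimal sketch of why a single “accuracy” number is misleading when base rates are skewed. The numbers are illustrative assumptions, not the MIT project’s actual data: 4,000 profiles with a 7.5% base rate (the middle of the 5-to-10-percent range above), and two constant classifiers that label every profile the same way.

```python
# Illustrative sketch: accuracy vs. sensitivity and specificity
# under a skewed base rate. All numbers are assumptions for the example.

def evaluate(predict_gay, n_gay, n_not_gay):
    """Metrics for a classifier that gives every profile the same label.

    predict_gay=True  -> the "say everybody is gay" algorithm
    predict_gay=False -> the "say everybody is not gay" algorithm
    Returns (accuracy, sensitivity, specificity).
    """
    tp = n_gay if predict_gay else 0       # gay profiles labeled gay
    tn = 0 if predict_gay else n_not_gay   # non-gay profiles labeled not gay
    accuracy = (tp + tn) / (n_gay + n_not_gay)
    sensitivity = tp / n_gay               # % of gay profiles caught
    specificity = tn / n_not_gay           # % of non-gay profiles cleared
    return accuracy, sensitivity, specificity

n_gay, n_not_gay = 300, 3700  # assumed 7.5% base rate in 4,000 profiles

print(evaluate(False, n_gay, n_not_gay))  # "not gay" every time: 92.5% "accurate"
print(evaluate(True, n_gay, n_not_gay))   # "gay" every time: 100% sensitivity
```

The first classifier beats the reported 78% on overall accuracy while having zero sensitivity; the second is perfectly “accurate” on gay profiles while having zero specificity. That’s why both numbers are needed.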

Do people know how much power and status they have?

Do you know how much power and status you have in the important social situations in your life? Cameron Anderson and I have a chapter coming out in a few months looking at that question. The chapter is titled “Accurate When It Counts: Perceiving Power and Status in Social Groups.” (It draws in part on an earlier empirical paper we did together.) The part before the colon probably gives away a little bit of the answer. We present a case that most people, much of the time, are pretty good at perceiving their own and others’ power and status. (Better than they are at perceiving likability or personality traits.)

You can read the chapter if you want to see where the main point is coming from. I just want to briefly comment on a preliminary issue we had to work through along the way…

One of the fun things about writing this paper was working out what it means to be accurate in perceiving power and status. Accuracy has a long and challenging history in social perception research. How do you quantify how well somebody knows somebody else’s (or their own) likability, extraversion, morality, or — in our case — power or status?

We started by creating working definitions of power and status. What became clear along the way is that the accuracy question gets answered differently for power than for status because of the different definitions. For power, we adopted Susan Fiske’s definition that power is asymmetric outcome control (in a nutshell, Person A has power over Person B if A has control over B’s valued outcomes). For status, we defined it as respect and influence in the eyes of others.

Drawing on those definitions, here’s what we say about how to define accuracy in perceiving power:

The outcome-control framework is useful for studying perceptions. Outcome control is a structural property of relationships that does not depend on any person’s construal of a situation. Thus, one person may have power over another person even if one or both people do not realize it at a given time. (For example, a late-night TV host and the female intern he dates might both think about their relationship in purely romantic terms, but the fact that the host makes decisions about the intern’s salary and career advancement means that he has power over her). Because the outcome-control framework separates psychological processes such as the perception of power from power per se, it is conceptually coherent to ask questions about the accuracy of perceptions.

And here’s how accuracy is different for status:

Like power, status is a feature of a relationship (Fiske & Berdahl, 2007). Like power, status may vary from one situation to another. And like with power, it is possible for a single individual to misperceive her own status or the status of another person. However, because status is about respect and prestige in the eyes of others, at its core it involves collective perceptions – that is, status is a component of reputation. Thus status is socially constructed in a different and perhaps more fundamental way than power. Whereas it might make sense to say that an individual has power but nobody knows it, it would not make sense to say the same about status. This gives status a complicated but necessary relation to interpersonal perceptions, which will become important when we consider what it means to be accurate in perceiving status.

On a side note: egads, am I becoming a social constructivist?

Reference:

Srivastava, S. & Anderson, C. (in press). Accurate when it counts: Perceiving power and status in social groups. In J. L. Smith, W. Ickes, J. Hall, S. D. Hodges, & W. Gardner (Eds.), Managing interpersonal sensitivity: Knowing when—and when not—to understand others.

Does that include midterms?

Okay, so just now when I saw this

… I immediately thought of this:

Tetlock, P. E. (1981). Pre- to postelection shifts in presidential rhetoric: Impression management or cognitive adjustment? Journal of Personality and Social Psychology, 41, 207-212.

Used content analysis to assess the conceptual or integrative complexity of pre- and post-election policy statements of 20th-century American presidents. Two hypotheses were tested. According to the impression management hypothesis, presidents present issues in deliberately simplistic ways during election campaigns but in more complex ways upon assuming office when they face the necessity of justifying sometimes unpopular decisions to skeptical constituencies. According to the cognitive adjustment hypothesis, presidents gradually become more complex in their thinking during their tenure in office as they become increasingly familiar with high-level policy issues. Results support only the impression management position. The complexity of presidential policy statements increased sharply immediately after inauguration but did not increase with length of time in office. Complexity of policy statements also significantly declined in reelection years.

I haven’t coded the transcript for integrative complexity yet. But when the reporter writes, “Mr. Obama has been seeking to narrow the complex arguments over health care policy,” it sounds a heck of a lot like what Tetlock was talking about.

Perceiver effects in interpersonal perception

Hot off the presses is a paper I wrote with Steve Guglielmo and Jenni Beer on perceiver effects in the Social Relations Model. Here’s the abstract:

In interpersonal perception, “perceiver effects” are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed.

A couple of quick comments to add:

  • This is an example of using the Big Five / Five-Factor Model not as a model of personality per se, but as a model of social perception. I very briefly mention this potential use of the Big Five in my guide to measuring the Big Five, and I’m currently working on a manuscript expanding on this idea. (BTW, I’m certainly not the first person to think of the Big Five in this way. I’m trying to carry this idea forward a bit, but it’s one of those cases where I oscillate between thinking what I’m saying about it is radically new and thinking ho-hum-we-already-thought-of-that.)
  • While we were working on this manuscript, I became aware that a group led by Dustin Wood was looking at very similar issues (but with some interesting differences in approach and areas of non-overlap). They’ve got a paper in press at JPSP.

If you want to read more you can download the PDF:

Srivastava, S., Guglielmo, S., & Beer, J. S. (2010). Perceiving others’ personalities: Examining the dimensionality, assumed similarity to the self, and stability of perceiver effects. Journal of Personality and Social Psychology, 98, 520-534.

Take the DSM-5 disorder quiz!

Below are the names of some psychological disorders. For each one, choose one of the following:

A. This is under formal consideration to be included as a new disorder in the DSM-5.

B. Somebody out there has suggested that this should be a disorder, but it is not part of the current proposal.

C. I made it up.

Answers will be posted in the comments section.

1. Factitious dietary disorder – producing, feigning, or exaggerating dietary restrictions to gain attention or manipulate others

2. Skin picking disorder – recurrent skin picking resulting in skin lesions

3. Olfactory reference syndrome – preoccupation with the belief that one emits a foul or offensive body odor, which is not perceived by others

4. Solastalgia – psychological or existential stress caused by environmental changes like global warming

5. Hypereudaimonia – recurrent happiness and success that interferes with interpersonal functioning

6. Premenstrual dysphoric disorder – disabling irritability before and during menstruation

7. Internet addiction disorder – compulsive overuse of computers that interferes with daily life

8. Sudden wealth syndrome – anxiety or panic following the sudden acquisition of large amounts of wealth

9. Kleine-Levin syndrome – recurrent episodes of sleeping 11+ hours a day accompanied by feelings of unreality or confusion

10. Quotation syndrome – following brain injury, speech becomes limited to the recitation of quotes from movies, books, TV, etc.

11. Infracaninophilia – compulsively supporting individuals or teams perceived as likely to lose competitions

12. Acquired situational narcissism – narcissism that results from being a celebrity