Kids: why bother?

Tara Parker-Pope at the NYT Well blog writes:

One of the more surprising trends in marriage during the past 20 years is the fact that most couples no longer view children as essential to a happy relationship.

A few years ago, the Pew Research Center released a survey called “What Makes Marriage Work?” Not surprisingly, fidelity ranked at the top of the nine-item list — 93 percent of respondents said faithfulness was essential to a good marriage.

But what about children? As an ingredient to a happy marriage, kids were far from essential, ranking eighth behind good sex, sharing chores, adequate income and a nice house, among other things. Only 41 percent of respondents said children were important to a happy marriage, down from 65 percent in 1990. The only thing less important to a happy marriage than children, the survey found, was whether a couple agreed on politics.

Parker-Pope suggests that people rank children lower because marriages are becoming more adult-centered. Maybe, maybe not. Another interpretation is that maybe people are just wising up.

My colleagues and I have documented that for most (though not all) couples, relationship satisfaction goes down after children enter the picture. And Sara Gorchoff and others have shown that marital satisfaction goes up when the kids leave. (Obligatory note: there are still unresolved questions about the causality behind these trends.)

Parker-Pope’s explanation might make contemporary couples sound more selfish (“we want to be happy, and kids will ruin it!”). But I can see it the opposite way. Maybe contemporary couples (who, after all, are still procreating) realize that there are other reasons to have kids besides enhancing the quality of their marital relationship.

On knowing that you’re often wrong but not knowing when

Felix Salmon (via Andrew Gelman):

Many if not most of my opinions are wrong (although of course I have no idea which they are), and … many of the most interesting and useful things I write come out of my being wrong rather than being right. This is not, as Wilkinson noted to Cowen, an easy intellectual stance to hold: he calls it “a weird violation of the actual computational constraints of the human mind”. But I think it’s undoubtedly worth working on.

This makes me feel better about the surprise and then ensuing guilt I experienced the first time one of my published results replicated.

Is there anything special about the Five-Factor Model?

I recently put up a clip-job list of all the ideas I’ve been too busy or lazy to flesh out into real posts in the last month. One of the items was about a recent Psych Inquiry commentary I wrote in response to a piece by Jack Block. Tal actually read the commentary (thanks, Tal!) and commented:

…What I couldn’t really get a sense of from your paper is whether you actually believe there’s anything special about the FFM as distinct from any number of other models, or if you view it as just a matter of convenience that the FFM happens to be the most widely adopted model. I suspect Block would have said that even if you think the FFM is all in the eyes of the beholder, there’s still no good reason to think that it’s the right structure, and that with only slightly different assumptions and a slightly different historical trajectory, we could all have been working with a six or seven-factor model. So I guess my question would be: should one read the title of your paper as saying that the FFM is the model that describes the structure of social perceptions, or are you making a more general point about all psychometric models based on semantically-mediated observations?

That’s a great question.

As I think I make clear in the paper, I think it’s highly unlikely that the FFM is isomorphic with some underlying, extra-perceptual reality of bodies or behavior. In other words, I don’t expect we’ll find five brain systems whose functioning maps one-to-one onto the five factors. I could be wrong, but I have seen exactly zero evidence that makes me think that’s the case.

But since I argue in the paper that the FFM is a model of the social concerns of ordinary social perceivers, I think it’s fair to ask whether it’s isomorphic with something else. Like maybe there are five basic, universal social concerns that all humans share, or something like that. And my answer is… no, I don’t think so.

For one thing, I don’t think the cross-cultural evidence is strong enough to support that conclusion. (Being in the same department as Gerard Saucier has helped me see that.) McCrae and Costa have done a very good job of showing that the FFM can be exported to other cultures — if we give people the FFM as a meaning system, they’ll use it in roughly the way we expect. But emic studies have been a lot more varied.

I also am not convinced that factor analysis — a method that derives independent factors from between-person covariance structures — is the “true” way to model person perception and social meaning. Useful? As a way of deriving a descriptive/taxonomic model, absolutely. Orthogonal factor analysis has some very useful properties, like mapping a multidimensional space very efficiently. And there’s a consistent something behind that useful model, in the sense that something is causing that five-factor structure to replicate (conditional on the item selection procedures, samples from certain cultures, statistical assumptions, etc.).

But there’s no reason to think that means the five-factor structure has a simple, one-to-one relationship to whatever reality it’s grounded in — whether the reality of target persons’ behavior or of perceivers’ concerns. Why would social concerns be orthogonal (and by implication, causally unrelated to one another)? Why, if these are major themes in human social concerns, don’t we have good words for them at the five-factor level of abstraction? (“Agreeableness”? Blech. Worst factor label ever.) Why do they emerge in the between-person covariance structure but not in experimental methods that probe social representation at the individual level (à la Dabady, Bell, & Kihlstrom, 1999)?

As to Tal’s last question (“are you making a more general point about all psychometric models based on semantically-mediated observations?”): I think I say this in the paper, but I don’t think there is, or ever will be, any structural model of personality that isn’t pivotally dependent on human perception and judgment. (Ouch, double negative. Put more straightforwardly: all models of personality depend on human interpretations of personality.) I have a footnote where I comment that the Q sort can be seen as a model of what Jack Block wants to know about persons. I’ll even extend that to models that use biological constructs as their units rather than linguistic ones, but maybe I’ll save that argument for another day…

Apparently I’m on a blogging break

I just noticed that I haven’t posted in over a month. Don’t fear, loyal readers (am I being presumptuous with that plural? hi Mom!). I haven’t abandoned the blog, apparently I’ve just been too busy or preoccupied to flesh out any coherent thoughts.

So instead, here are some things that, over the last month, I’ve thought about posting but haven’t summoned up the wherewithal to turn into anything long enough to be interesting:

  • Should psychology graduate students routinely learn R in addition to, or perhaps instead of, other statistics software? (I used to think SPSS or SAS was capable enough for the modal grad student and R was too much of a pain in the ass, but I’m starting to come around. Plus R is cheaper, which is generally good for graduate students.)
  • What should we do about gee-whiz science journalism covering social neuroscience that essentially reduces to, “Wow, can you believe that X happens in the brain?” (Still working on that one. Maybe it’s too deeply ingrained to do anything.)
  • Reasons why you should read my new commentary in Psychological Inquiry. (Though really, if it takes a blog post to explain why an article is worth reading, maybe the article isn’t worth reading. I suggest you read it and tell me.)
  • A call for proposals for what controversial, dangerous, or weird research I should conduct now that I just got tenure.
  • Is your university as sketchy as my university? (Okay, my university probably isn’t really all that sketchy. And based on the previous item, you know I’m not just saying that to cover my butt.)
  • My complicated reactions to the very thought-provoking Bullock et al. “mediation is hard” paper in JPSP.

Our spring term is almost over, so maybe I’ll get to one of these sometime soon.

Birtherism, cognitive dissonance, and the persistence of belief

As the Arizona legislature debates a bill to demand proof of citizenship from presidential candidates, Slate has a new piece about why the “birther” movement won’t go away (which I came across thanks to Eric Knowles). Of particular interest:

The irony of all the birth-certificate proposals—similar bills have been introduced in six states—is that they contain the seeds of the birther movement’s destruction. The moment Obama calls their bluff and hands his birth certificate to the Arizona secretary of state, it’s over.

In theory. That’s the beauty of the birther myth, or any conspiracy theory: No amount of evidence can ever completely dispel the questions. When Obama produced his Hawaii birth certificate and the state of Hawaii verified it, it was a fake. When reporters uncovered announcements of Obama’s birth in 1961 copies of the Honolulu Advertiser and the Honolulu Star-Bulletin, they had been planted. If the Arizona secretary of state verified Obama’s birth certificate, that would be due to the government mind-control chip implanted in his molar.

To put all this another way: Birtherism is here to stay. And not because more people are going crazy, but because crazy has been redefined…

The article puts a bit of a partisan spin on the underlying psychological explanation (oh those crazy conservatives!). But this is just another manifestation of a basic psychological process documented in the 1950s by Leon Festinger (which I’ve previously discussed in relation to the persistence of the discredited vaccine-autism link).

When people have a deeply held belief that they have publicly committed to, disconfirmatory evidence doesn’t necessarily weaken the belief. Instead, Festinger predicted — based on cognitive dissonance theory — that under the right circumstances, disconfirmatory evidence can make beliefs grow stronger and metastasize. Festinger first documented this phenomenon among doomsday cultists, and it plays out again and again among modern conspiracy theories from the left, right, and in between.

Where this all intersects with politics, as the Slate piece points out, is in how politicians can fan and exploit conspiracy theories. Festinger laid out five conditions for disconfirmatory evidence to intensify belief. The fifth is, “After the disconfirming evidence comes to light, the believer has social support from other believers.” Politicians speaking in code can sound to believers like sympathetic supporters, while still leaving themselves plausible deniability in more rational circles. I don’t know how many have read Festinger, but they almost certainly know what they’re doing.

Update: Brendan Nyhan sent me a link to a forthcoming paper of his (with Jason Reifler) titled When Corrections Fail: The Persistence of Political Misperceptions. In a series of experiments, they show that correcting misperceptions can backfire when the correction runs contrary to somebody’s political beliefs. For example, conservatives became more certain that Iraq had WMDs prior to the war when they read news articles that corrected that misperception. Likewise for liberals and the misperception that Bush banned all stem cell research.

Their paper doesn’t test all aspects of Festinger’s model, though it does draw on work in the cog-dissonance tradition. Festinger thought that the social context of beliefs was important: you have to publicly commit to your beliefs, and after receiving disconfirmatory evidence you have to have social support from fellow believers. Nyhan and Reifler’s experiment dealt with subjects’ pre-existing beliefs, so it’s entirely possible that those elements were part of the prior experience that subjects brought into the lab. It would be interesting to run an experiment to see if you could modulate the backfire effect by amplifying or dampening those factors experimentally.

New resource for interpersonal perception researchers

Via Dave Kenny, I just found out about a new set of resources for researchers interested in personality and social relationships — and especially for users of the Social Relations Model.

Persoc is a research network founded by a group of mostly German researchers, although they seem to be interested in bringing people together from all over. From their website:

In September 2007 a group of young researchers who repeatedly met at conferences realized that they were all fascinated by the complex interplay of personality and social relationships. While we studied the effects of personality on very different social processes (e.g., zero acquaintance judgments, group formation, friendship development, mate choice, relationship maintenance), we shared a strong focus on observing real-life phenomena and implementing advanced methods to analyze our data. Since the official start of Persoc in late 2008, several meetings and workshops have deepened both, our interconnectedness as well as our understanding and interest in personality and social relationships. Persoc is funded by the German Research Foundation (DFG).

Among other things, they have created an R package called TripleR for analyzing round-robin data using the SRM componential approach. TripleR is intended as an alternative to the venerable SOREMO software created by Kenny. The Persoc website also includes a page discussing theoretical concepts in interpersonal perception, an overview of a number of useful research designs, and other information.
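For anyone curious what that looks like in practice, here is a rough sketch of a round-robin variance decomposition with TripleR. The data frame and column names below are hypothetical, and the formula syntax should be double-checked against the package documentation before use:

```r
# install.packages("TripleR")   # if not already installed
library(TripleR)

# Hypothetical long-format round-robin data: one row per perceiver-target pair
# within a group, with columns perceiver.id, target.id, group.id, and liking.

# Decompose liking ratings into perceiver, target, and relationship variance
# (the SRM components). This is a sketch, not copy-paste code.
fit <- RR(liking ~ perceiver.id * target.id | group.id, data = my_roundrobin)
fit   # prints variance components and reliabilities
```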

Modeling the Jedi Theory of Emotions

Today I gave my structural equation modeling class the following homework:

In Star Wars I: The Phantom Menace, Yoda presented the Jedi Theory of Emotions:  “Fear is the path to the dark side. Fear leads to anger. Anger leads to hate. Hate leads to suffering.”

1. Specify the Jedi Theory of Emotions as a path model with 4 variables (FEAR, ANGER, HATE, and SUFFERING). Draw a complete path diagram, using lowercase Roman letters (a, b, c, etc.) for the causal parameters.

2. Were there any holes or ambiguities in the Jedi Theory (as stated by Yoda) that required you to make theoretical assumptions or guesses? What were they?

3. Using the tracing rule, fill in the model-implied correlation matrix (assuming that all variables are standardized):

            FEAR   ANGER   HATE   SUFFERING
FEAR         1
ANGER                1
HATE                         1
SUFFERING                              1

4. Generate a plausible equivalent model. (An equivalent model is a model that specifies a different causal structure but implies the same correlation matrix.)

5. Suppose you run a study and collect data on these four variables. Your data gives you the following correlation matrix.

            FEAR   ANGER   HATE   SUFFERING
FEAR         1
ANGER        .5      1
HATE         .3      .6      1
SUFFERING    .4      .3      .5        1

Is the Jedi Theory a good fit to the data? In what way(s), if any, would you revise the model?

Some comments…

For #1, everybody always comes up with a recursive, full mediation model — e.g., fear only causes hate via anger as an intervening cause, and there are no loops or third-variable associations between fear and hate, etc. It’s an opportunity to bring up the ambiguity of theories expressed in natural language: just because Yoda didn’t say “and anger can also cause fear sometimes too,” does that mean he’s ruling that out?
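For the record, under that standard full-mediation reading (standardized variables, no other paths or disturbance correlations), the tracing rule gives a very simple implied pattern: r(FEAR, ANGER) = a, r(ANGER, HATE) = b, r(HATE, SUFFERING) = c, r(FEAR, HATE) = ab, r(ANGER, SUFFERING) = bc, and r(FEAR, SUFFERING) = abc.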

Relatedly, observational data will only give you unbiased causal estimates — of the effect of fear on anger, for example — if you assume that Yoda gave a complete and correct specification of the true causal structure (or if you fill in the gaps yourself and include enough constraints to identify the model). How much do you trust Yoda’s model? Questions 4 and 5 are supposed to help students to think about ways in which the model could and could not be falsified.
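As an aside, here is a minimal sketch of what checking the full-mediation model against the question-5 data could look like in R. The lavaan package isn’t part of the assignment; I’m using it purely for illustration, and the sample size of 200 is made up since the question only supplies a correlation matrix:

```r
library(lavaan)

# Correlation matrix from question 5
jedi_cor <- matrix(c(1.0, 0.5, 0.3, 0.4,
                     0.5, 1.0, 0.6, 0.3,
                     0.3, 0.6, 1.0, 0.5,
                     0.4, 0.3, 0.5, 1.0),
                   nrow = 4,
                   dimnames = list(c("fear", "anger", "hate", "suffering"),
                                   c("fear", "anger", "hate", "suffering")))

# The recursive, full-mediation reading of the Jedi Theory:
# fear -> anger -> hate -> suffering
jedi_model <- '
  anger     ~ a * fear
  hate      ~ b * anger
  suffering ~ c * hate
'

# sample.nobs = 200 is a hypothetical N; lavaan needs one to compute fit statistics
fit <- sem(jedi_model, sample.cov = jedi_cor, sample.nobs = 200)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```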

In a comment on an earlier post, I repeated an observation I once heard someone make, that psychologists tend to model all relationships as zero unless given reason to think otherwise, whereas econometricians tend to model all relationships as free parameters unless given reason to think otherwise. I’m not sure why that is the case (maybe a legacy of NHST in experimental psychology, where you’re supposed to start by hypothesizing a zero relationship and then look for reasons to reject that hypothesis). At any rate, if you think like an econometrician and come from the no true zeroes school of thought, you’ll need something more than just observational data on 4 variables in order to test this model. That makes the Jedi Theory a tough nut to crack. Experimental manipulation gets ethically more dubious as you proceed down the proposed causal chain. And I’m not sure how easy it would be to come up with good instruments for all of these variables.

I also briefly worried that I might be sucking the enjoyment out of the movie. But then I remembered that the quote is from The Phantom Menace, so that’s already been done.

From the department of “things that could be said about almost anything”

If you want to change the behavior, you have to change the incentives. Moralistic huffing and puffing won’t cut it.

That sentence jumped out at me as being true of just about every domain of public policy.

(In this case it’s from Dean Dad’s blog post about public higher ed outsourcing growth to private higher ed. My own institution has essentially done this internally. Our finances work more like a public institution’s with regard to in-state students, and like a private institution’s with regard to out-of-state students. Since our state’s contribution to higher ed is dismal and dropping, the higher-ups have decided to balance the budget through growth — but that growth comes almost entirely from admitting more out-of-state students.)

Prepping for SEM

I’m teaching the first section of a structural equation modeling class tomorrow morning. This is the 3rd time I’m teaching the course, and I find that the more times I teach it, the less traditional SEM I actually cover. I’m dedicating quite a bit of the first week to discussing principles of causal inference, spending the second week re-introducing regression as a modeling framework (rather than a toolbox statistical test), and returning to causal inference later when we talk about path analysis and mediation (including assigning a formidable critique by John Bullock et al. coming out soon in JPSP).

The reason I’m moving in that direction is that I’ve found that a lot of students want to rush into questionable uses of SEM without understanding what they’re getting into. I’m probably guilty of having done that, and I’ll probably do it again someday, but I’d like to think I’m learning to be more cautious about the kinds of inferences I’m willing to make. To people who don’t know better, SEM often seems like magical fairy dust that you can sprinkle on cross-sectional observational data to turn it into something causally conclusive. I’ve probably been pretty far on the permissive end of the spectrum that Andrew Gelman talks about, in part because I think experimental social psychology sometimes overemphasizes internal validity to the exclusion of external validity (and I’m not talking about the special situations that Mook gets over-cited for). But I want to instill an appropriate level of caution.

BTW, I just came across this quote from Donald Campbell and William Shadish: “When it comes to causal inference from quasi-experiments, design rules, not statistics.” I’d considered writing “IT’S THE DESIGN, STUPID” on the board tomorrow morning, but they probably said it nicer.

On base rates and the “accuracy” of computerized Facebook gaydar

I never know what to make of reports stating the “accuracy” of some test or detection algorithm. Take this example, from a New York Times article by Steve Lohr titled How Privacy Vanishes Online:

In a class project at the Massachusetts Institute of Technology that received some attention last year, Carter Jernigan and Behram Mistree analyzed more than 4,000 Facebook profiles of students, including links to friends who said they were gay. The pair was able to predict, with 78 percent accuracy, whether a profile belonged to a gay male.

I have no idea what “78 percent accuracy” means in this context. The most obvious answer would seem to be that of all 4,000 profiles analyzed, 78% were correctly classified as gay versus not gay. But if that’s the case, I have an algorithm that beats the pants off of theirs. Are you ready for it?

Say that everybody is not gay.

Figure that around 5 to 10 percent of the population is gay. If these 4,000 students are representative of that, then saying not gay every time will yield an “accuracy” of 90-95%.

But wait — maybe by “accuracy” they mean what percentage of gay people are correctly identified as such. In that case, I have an algorithm that will be 100% accurate by that standard. Ready?

Say that everybody is gay.

You can see how silly this gets. To understand how good the test is, you need two numbers: sensitivity and specificity. My algorithms each turn out to be 100% on one and 0% on the other. Which means that they’re both crap. (A good test needs to be high on both.) I am hoping that the MIT class’s algorithm was a little better, and the useful numbers just didn’t get translated. But this news report tells us nothing that we need to know to evaluate it.
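To make the arithmetic concrete, here’s a quick sketch in R using a made-up 5 percent base rate (none of these numbers come from the actual MIT project):

```r
n         <- 4000
base_rate <- 0.05          # hypothetical proportion of profiles belonging to gay men
n_pos     <- n * base_rate # 200 actual positives
n_neg     <- n - n_pos     # 3800 actual negatives

# Algorithm 1: "say that everybody is not gay"
acc1  <- n_neg / n         # accuracy    = 0.95
sens1 <- 0                 # sensitivity = catches none of the positives
spec1 <- 1                 # specificity = never mislabels a negative

# Algorithm 2: "say that everybody is gay"
acc2  <- n_pos / n         # accuracy    = 0.05
sens2 <- 1                 # sensitivity = catches every positive
spec2 <- 0                 # specificity = mislabels every negative

rbind(everyone_not_gay = c(accuracy = acc1, sensitivity = sens1, specificity = spec1),
      everyone_gay     = c(accuracy = acc2, sensitivity = sens2, specificity = spec2))
```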