I just noticed that I haven’t posted in over a month. Don’t fear, loyal readers (am I being presumptuous with that plural? hi Mom!). I haven’t abandoned the blog, apparently I’ve just been too busy or preoccupied to flesh out any coherent thoughts.
So instead, here are some things that, over the last month, I’ve thought about posting but haven’t summoned up the wherewithal to turn into anything long enough to be interesting:
- Should psychology graduate students routinely learn R in addition to, or perhaps instead of, other statistics software? (I used to think SPSS or SAS was capable enough for the modal grad student and R was too much of a pain in the ass, but I’m starting to come around. Plus R is cheaper, which is generally good for graduate students.)
- What should we do about gee-whiz science journalism covering social neuroscience that essentially reduces to, “Wow, can you believe that X happens in the brain?” (Still working on that one. Maybe it’s too deeply ingrained to do anything.)
- Reasons why you should read my new commentary in Psychological Inquiry. (Though really, if it takes a blog post to explain why an article is worth reading, maybe the article isn’t worth reading. I suggest you read it and tell me.)
- A call for proposals for what controversial, dangerous, or weird research I should conduct now that I just got tenure.
- Is your university as sketchy as my university? (Okay, my university probably isn’t really all that sketchy. And based on the previous item, you know I’m not just saying that to cover my butt.)
- My complicated reactions to the very thought-provoking Bullock et al. “mediation is hard” paper in JPSP.
Our spring term is almost over, so maybe I’ll get to one of these sometime soon.
7 thoughts on “Apparently I’m on a blogging break”
I would love to hear your thoughts about Bullock et al.'s mediation paper. It spurred some interesting conversation in our departmental seminar.
(Just to encourage you to write a blog post… I am a graduate student who reads your blog.)
Welcome back, and congrats on getting tenure!
I read your commentary and liked it a lot. I sympathize with the idea that any structure you get out of a measure (whether it has five or seventeen factors) tells you as much if not more about how people perceive personality as it does about the underlying mechanisms that give rise to personality. That said, what I couldn’t really get a sense of from your paper is whether you actually believe there’s anything special about the FFM as distinct from any number of other models, or if you view it as just a matter of convenience that the FFM happens to be the most widely adopted model. I suspect Block would have said that even if you think the FFM is all in the eyes of the beholder, there’s still no good reason to think that it’s the right structure, and that with only slightly different assumptions and a slightly different historical trajectory, we could all have been working with a six or seven-factor model. So I guess my question would be: should one read the title of your paper as saying that the FFM is the model that describes the structure of social perceptions, or are you making a more general point about all psychometric models based on semantically-mediated observations?
Oh, and I think it’s a great idea to make students learn R. I only took it up because another grad student in the lab insisted it was better than sliced bread (he was right); the learning curve was steep, but very much worth it. I don’t know if it should be mandatory, since many stats classes in psych departments don’t really have much of a hands-on component. But for classes that do regularly use SPSS or SAS, I don’t see a major downside. Yes, it’s a pain in the ass, but so are many other things we force ourselves (or are forced) to learn because they’re valuable.
I’ll be curious to hear your detailed thoughts on the Bullock paper if you get the chance…
Yes, psychology students should learn R.
I am a psychology PhD student, and since I started learning R, I understand far more about what it is I'm doing with my methods (mostly thanks to R's obscure error messages).
I also feel that R can support whatever analysis you care to do. For a simple example, parallel analysis, which is difficult in SPSS, is simple in R, and this technique often reveals structures that you might not have thought of and that certainly won't show up in SPSS.
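To illustrate, here is a minimal sketch of what that looks like in R. It assumes the psych package is installed; its `fa.parallel` function and bundled `bfi` personality dataset are used purely for illustration:

```r
# Parallel analysis: compare the eigenvalues of the observed correlation
# matrix to eigenvalues obtained from random data of the same dimensions.
# Assumes the 'psych' package is installed: install.packages("psych")
library(psych)

data(bfi)              # 25 personality items bundled with the psych package
items <- bfi[, 1:25]   # keep the items, drop the demographic columns

# Draws a scree plot with parallel analysis for both factors and
# components, and prints the suggested number to retain.
fa.parallel(items, fa = "both")
```

The same analysis in SPSS typically requires pasting in a third-party macro, which is part of why it so rarely gets run there.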
The modelling syntax is also a joy to work with, collapsing all the different ad hoc procedures into one thing of beauty (I love R).
I would also argue that anything that forces psych students to actually think about what they’re doing would increase their understanding and hopefully interest in stats.
Thanks for the encouragement, everyone. I’ll see if I can pull together my thoughts about the mediation paper and write a proper post. Like I said, my reactions were complicated. But I’m hoping that lots of people in psychology read that paper and think seriously about its implications.
Re R, I absolutely agree with everybody about the benefits. For me, the question of whether we should make it the bread-and-butter stats software has always been about balancing those benefits against the not-insignificant costs of a steeper learning curve. If, over and above learning statistics, you're going to spend more time learning how to actually program in an environment that everybody agrees is harder to pick up than SPSS or SAS, that's going to take grad students' time away from other activities. As someone who teaches a full range of grad students — not just the more quantitatively-oriented folks who have self-taught R and are now its advocates inside of psychology — I don't see the tradeoff as a slam dunk, which is why I've been slow to come around.
“What should we do about gee-whiz science journalism covering social neuroscience that essentially reduces to, ‘Wow, can you believe that X happens in the brain?’”
Do an fMRI study comparing neural activation in response to reading about a psychological phenomenon described in psychological terms, vs. the same phenomenon described in neural terms (with fMRI pics).
This also comes under the heading of “controversial, dangerous, or weird research I should conduct now that I just got tenure.”
Great idea! Here’s how I imagine it would go: If the glowing blobs are bigger in the neuro condition, we can say that it’s because a neuroscience framing makes people think harder. If the blobs are smaller, we can say it’s because the brain processes neuroscience evidence more efficiently. Either way, the neuro side wins.