Hiring Robert Berdahl would be a show of strength for the OUS Chancellor and Board

My university’s president, Richard Lariviere, was fired last week. I sent this letter to Chancellor George Pernsteiner and the members of the State Board of Higher Education on Friday, December 2, 2011. Links have been added for the blog post.

Dear Chancellor and Members of the Board:

I am writing to you to urge you to hire Robert Berdahl as interim President of the University of Oregon. I agree with the UO Senate Executive Committee’s recommendation that Berdahl and Berdahl alone is suited for this position. I will not restate their reasoning here (all of which I concur with), but I want to add something.

Earlier this week, Dr. Berdahl wrote an op-ed in the Register-Guard criticizing you for firing Richard Lariviere. In conversations, some of my colleagues have suggested that the op-ed would make it difficult for you to credibly hire Dr. Berdahl. I believe the opposite is true: hiring Berdahl would be a show of credibility and strength on your part. Here is why:

About Dr. Lariviere’s termination, you have stated that it “has nothing to do with policy positions or conflicting visions for the future of the University of Oregon.” Rather, “This was an issue of lack of communication and eroded trust.” (OUS Press Release of Nov 28, 2011). Right now, rightly or wrongly, many people in the University of Oregon community, around the state, and beyond doubt those words. They believe that you could not tolerate dissent, that you acted because your authority was threatened, and that you were afraid of change.

This, now, is your opportunity to back up your words with actions and show your critics, the state, and the world that you mean what you say. Dr. Berdahl has a long and distinguished history of establishing trust and communication with people he disagrees with and working effectively with state governance bodies. And he shares much of Dr. Lariviere’s broad vision. By selecting Dr. Berdahl, you would show that your idea of teamwork does not mean lock-step submission, that this is not about ego, and that you are willing to have a change agent on your team who will work with you for the good of higher education in all of Oregon. Thus, not only would Dr. Berdahl be an outstanding president, his selection would also go a long way toward restoring confidence in your governance and repairing badly damaged communication.

For the good of the university, its students, and the state that it serves, I urge you to select Dr. Berdahl.

Sincerely,
Sanjay Srivastava

In other news, people who say “I got my bachelor’s degree in medicine” are not getting jobs as doctors

The email below has been making the rounds. The APA should post it on their website, but I have not found it there. Since it says “FYI and distribution” I am taking the liberty myself.

As noted below, the data are only based on people who stopped at a bachelor’s degree (no grad school). The vast majority of undergrad psychology majors are just called “psychology.” Since people in the survey self-reported their major, I would speculate that a lot of the people claiming to have majored in “clinical psychology,” “social psychology,” etc. were just making it up to sound impressive.

From: Chairs of Councils of Directors of Training Councils [mailto:CCTC@LISTS.APA.ORG] On Behalf Of Belar, Cynthia
Sent: Saturday, November 12, 2011 11:32 AM
To: CCTC@LISTS.APA.ORG
Subject: [CCTC] NPR report

FYI and distribution.

Unfortunately, a recent report on National Public Radio [SS: and now CBS] may be misleading regarding the employment status of undergraduate psychology majors, and confusing about the employment status of clinical psychologists.

On Nov 9, 2011 NPR reported graduates with majors in clinical psychology had the highest unemployment rate — nearly 20%. Although technically correct, these data are based on terminal bachelor’s degrees, not graduate degrees, so they have no relevance to the employment status of clinical psychologists for whom the doctoral degree is required. Nor does this report represent the employment status of undergraduate majors in psychology in general, as clinical psychology majors are only a miniscule subset (<1%) of the psychology majors reported in those data.

Since APA has received many inquiries from those interpreting the NPR report as reflecting poorly on the employment status of clinical psychologists and recipients of bachelor’s degrees in psychology in general, we have prepared the following information for clarification.

* The data NPR cited are from a table recently published by the Wall Street Journal entitled From College Major to Career.  They are self-report data from the American Community Survey (ACS) by the Census Bureau.

* There are eight undergraduate degrees in psychology reported: clinical psychology, cognitive science and biopsychology, counseling psychology, educational psychology, industrial and organizational psychology, miscellaneous psychology, psychology and social psychology.

* The category of “psychology” was the 5th most popular among all majors reported, with an unemployment rate for psychology of 6.1% that is not much different from biology (5.6%), computer science (5.6%), economics (6.3%) and geography (6.1%).

* The vast majority of undergraduate institutions that provide degrees in psychology either provide a BA or BS in psychology – not a degree in an area of specialization such as clinical (perhaps explaining why the popularity of clinical psychology as a major is ranked 168, while psychology as a major is ranked as 5)

* Data from the previous year’s Census Bureau survey are available on the Georgetown University Center on Education and the Workforce website; see http://cew.georgetown.edu/collegepayoff/.  These data also illustrate how unrepresentative the data on clinical psychology are of undergraduate psychology education in general.  As noted on page 170, clinical psychology represents less than one percent (0.76%) of the approximately 1.5 million psychology majors reported.  The authors also note: “Sample size was too small to be statistically valid.” Of interest was that the unemployment rate for clinical psychology bachelor’s degrees in that year was 5%.

* With respect to employment of individuals holding doctoral degrees in clinical psychology,  the data on 2009 degree recipients reveal that 3.8% were unemployed seeking employment: http://www.apa.org/workforce/publications/09-doc-empl/table-2.pdf

Although the NPR report and its focus on clinical psychology has masked important information on the large number of undergraduate majors in psychology, it has brought to light the need for more public understanding of the undergraduate major in psychology.  According to the National Center on Educational Statistics, roughly 90,000 students graduate each year with a bachelor’s degree in psychology. The Wall Street Journal data and those from the Georgetown Center for Education and the Workforce suggest that employment rates for psychology majors are similar to many other disciplines.  Moreover, the graduates are employed across multiple sectors as would be consistent with the goals of the undergraduate major in psychology.

APA has specific policies guiding the undergraduate major in psychology, including Guidelines for the Undergraduate Psychology Major and Principles for Quality Undergraduate Education in Psychology.  We strongly encourage consumers of undergraduate education to use these guides in making choices among majors on their campuses.  We also wish to highlight that a bachelor’s degree in clinical psychology is a miniscule subset of psychology majors, and that a doctoral degree is required for one to become a clinical psychologist.

We wish to acknowledge Jeff Strohl (Georgetown Center for Education and the Workforce) and Joseph Light (Wall Street Journal) for their helpfulness in ensuring we had accurate data.

Cynthia D. Belar, PhD, ABPP | Executive Director

Education Directorate
American Psychological Association

Mark Zuckerberg on psychology and social media

In response to Florida Governor Rick Scott attacking Florida universities for graduating too many psychology majors (among other disciplines), a group of department chairs put out a report explaining and defending the discipline. Toward the end they list some famous psychology majors, and among them is Mark Zuckerberg.

Here’s Zuckerberg in the Deseret News:

“All of these problems at the end of the day are human problems,” he said. “I think that that’s one of the core insights that we try to apply to developing Facebook. What [people are] really interested in is what’s going on with the people they care about. It’s all about giving people the tools and controls that they need to be comfortable sharing the information that they want. If you do that, you create a very valuable service. It’s as much psychology and sociology as it is technology.”

And it’s not just talk — he’s hiring psychology PhDs (including a University of Oregon graduate).

See also here (psych major stuff starts around 1:00; gets especially interesting around 2:50).


Hard copy? Really?

Is there some legitimate, non-Luddite reason why some psychology departments continue to insist on hard copy for letters of recommendation? Electronic signatures are legal, folks, and most of your peers have gotten with the program.

Seriously, is there something I’m missing?

Does psilocybin cause changes in personality? Maybe, but not so fast

This morning I came across a news article about a new study claiming that psilocybin (the active ingredient in hallucinogenic mushrooms) causes lasting changes in personality, specifically the Big Five factor of openness to experience.

It was hard to make out methodological details from the press report, so I looked up the journal article (gated). The study, by Katherine MacLean, Matthew Johnson, and Roland Griffiths, was published in the Journal of Psychopharmacology. When I read the abstract I got excited. Double blind! Experimentally manipulated! Damn, I thought, this looks a lot better than I thought it was going to be.

The results section was a little bit of a letdown.

Here’s the short version: Everybody came in for 2 to 5 sessions. In session 1 some people got psilocybin and some got a placebo (the placebo was methylphenidate, a.k.a., Ritalin; they also counted as “placebos” some people who got a very low dose of psilocybin in their first session). What the authors report is a significant increase in NEO Openness from pretest to after the last session. That analysis is based on the entire sample of N=52 (everybody got an active dose of psilocybin at least once before the study was over). In a separate analysis they report no significant change from pretest to after session 1 for the n=32 people who got the placebo first. So they are basing a causal inference on the difference between significant and not significant. D’oh!

To make it (even) worse, the “control” analysis had fewer subjects, hence less power, than the “treatment” analysis. So it’s possible that openness increased as much in the placebo contrast as in the psilocybin contrast, or even more. (My hunch is that’s not what happened, but it’s not ruled out. They didn’t report the means.)

None of this means there is definitely no effect of psilocybin on Openness; it just means that the published paper doesn’t report an analysis that would answer that question. I hope the authors, or somebody else, come back with a better analysis. (A simple one would be a 2×2 ANOVA comparing pretest versus post-session-1 for the placebo-first versus psilocybin-first subjects. A slightly more involved analysis might involve a multilevel model that could take advantage of the fact that some subjects had multiple post-psilocybin measurements.)
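For concreteness, here is a minimal sketch of that simpler analysis in Python, with made-up placeholder numbers (the group sizes follow the post, but none of the values come from the paper). The group × time interaction in a 2×2 mixed ANOVA reduces to comparing pretest-to-post-session-1 change scores between the placebo-first and psilocybin-first groups, so a t-test on change scores gets at the same question.

```python
# Minimal sketch of the simpler analysis suggested above: a 2x2
# (group x time) comparison. The group-by-time interaction is
# equivalent to comparing pretest-to-post-session-1 change scores
# between the psilocybin-first and placebo-first groups.
# All numbers below are made-up placeholders, not values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical NEO Openness T-scores (52 subjects total, 32 placebo-first, per the post)
pre_psi = rng.normal(64, 10, 20)            # psilocybin-first, pretest
post_psi = pre_psi + rng.normal(3, 5, 20)   # assumed increase after session 1
pre_pla = rng.normal(64, 10, 32)            # placebo-first, pretest
post_pla = pre_pla + rng.normal(0, 5, 32)   # assumed no change after session 1

# The one test that answers the causal question: did Openness change
# more after psilocybin than after placebo?
t, p = stats.ttest_ind(post_psi - pre_psi, post_pla - pre_pla)
print(f"group x time interaction: t = {t:.2f}, p = {p:.3f}")
```

A multilevel model would extend the same idea to the subjects who had multiple post-psilocybin measurements, but the change-score contrast captures the key comparison.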

Aside from the statistics, I had a few observations.

One thing you’d worry about with this kind of study – where the main DV is self-reported – is demand or expectancy effects on the part of subjects. I know it was double-blind, but they might have a good idea about whether they got psilocybin. My guess is that they have some pretty strong expectations about how shrooms are supposed to affect them. And these are people who volunteered to get dosed with psilocybin, so they probably had pretty positive expectations. I wouldn’t call the self-report issue a dealbreaker, but in a followup I’d love to see some corroborating data (like peer reports, ecological momentary assessments, or a structured behavioral observation of some kind).

On the other hand, the authors didn’t find changes in other personality traits. If the subjects had a broad expectation that psilocybin would make them better people, you would expect to see changes across the board. But if their expectations were focused specifically on Openness-related traits, that null result is less reassuring.

If you accept the validity of the measures, it’s also noteworthy that subjects did not get higher in Neuroticism — which is not consistent with what the government tells you will happen if you take shrooms.

One of the most striking numbers in the paper is the baseline sample mean on NEO Openness — about 64. That is a T-score (normed [such as it is] to have a mean = 50, SD = 10). So that means that in comparison to the NEO norming sample, the average person in this sample was about 1.4 SDs above the mean — which is above the 90th percentile — in Openness. I find that to be a fascinating peek into who volunteers for a psilocybin study. (It does raise questions about generalizability though.)
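(For anyone who doesn’t think in T-scores, here is the conversion as a quick sketch using scipy; the 64 is the approximate baseline mean reported in the paper, and the rest is just the standard T-score scaling.)

```python
# Converting the baseline Openness T-score (about 64, per the paper)
# into a z-score and percentile relative to the NEO norming sample
# (T-scores are scaled to mean 50, SD 10).
from scipy.stats import norm

z = (64 - 50) / 10              # 1.4 SDs above the norming-sample mean
percentile = 100 * norm.cdf(z)  # roughly the 92nd percentile
print(f"z = {z:.1f}, percentile = {percentile:.0f}")
```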

Finally, because psilocybin was manipulated within subjects, the long-term (one year-ish) followup analysis did not have a control group. Everybody had been dosed. They predicted Openness at one year out based on the kinds of trip people reported (people who had a “complete mystical experience” also had the sustained increase in openness). For a much stronger inference, of course, you’d want to manipulate psilocybin between subjects.

Do not use what I am about to teach you

I am gearing up to teach Structural Equation Modeling this fall term. (We are on quarters, so we start late — our first day of classes is next Monday.)

Here’s the syllabus. (pdf)

I’ve taught this course a bunch of times now, and each time I teach it I add more and more material on causal inference. In part it’s a reaction to my own ongoing education and evolving thinking about causation, and in part it’s from seeing a lot of empirical work that makes what I think are poorly supported causal inferences. (Not just articles that use SEM either.)

Last time I taught SEM, I wondered if I was heaping on so many warnings and caveats that the message started to veer into, “Don’t use SEM.” I hope that is not the case. SEM is a powerful tool when used well. I actually want the discussion of causal inference to help my students think critically about all kinds of designs and analyses. Even people who only run randomized experiments could benefit from a little more depth than the sophomore-year slogan that seems to be all some researchers (AHEM, Reviewer B) have been taught about causation.

A dubious textbook marketing proposal

I got an email the other day:

*****

Dear Professor Srivastava,

My name is [NAME] and I am a consultant working with the [PUBLISHING COMPANY THAT YOU HAVE ALMOST CERTAINLY HEARD OF] team on the new textbook, [TEXTBOOK], by [AUTHOR]. I am emailing to see if you would be interested in class testing a chapter from this new textbook.  In exchange for your class test, [PUBLISHER] will give you a one year membership to the APS as a stipend for your help. This is a $194 value.

If you teach the [COURSE THAT I DON’T ACTUALLY TEACH] course, please read on.

[PUBLISHER] is looking for instructors to class test either of the following chapters:

    Chapter 3: [SOMETHING ABOUT THE BRAIN]
    Chapter 8: [SOMETHING ABOUT THE MIND]

You can integrate the chapter you select into your course as you see fit – we will ask you and your students to fill out a very brief online survey after the class test.

[AUTHOR] is [IMPRESSIVE-SOUNDING LIST OF AWARDS AND CREDENTIALS]

If you would like to be considered for this class test, please click the following link and sign up for the project: [LINK]

This is a terrific way for you to learn about an exciting new textbook for the [COURSE THAT I DON’T ACTUALLY TEACH] course and see if it is a good fit for you and your students.  

I look forward to hearing from you.

[NAME]

Consultant for [PUBLISHER]

*****

This sounds ethically problematic to me, for at least two reasons:

1. It is a conflict of interest. My students are paying tuition money to my employer, and my employer is paying a salary to me, to provide a high-quality education. If I choose course materials based on outside financial compensation rather than what I think is the best for their education, that is a conflict of interest.

2. My students would be forced to participate in a marketing study without their consent. In response to my query, the consultant said the students would not be paid. But compensation or no, I can see no practical way to incorporate these materials into the course and still allow students to fully opt out. Even if students choose not to fill out the survey, it is still shaping the content of their course.

I suppose I could make the test readings optional, spend no classroom time on them, base no assignments or test questions on them, and fully disclose the arrangement to my students. But my experience of college students and non-required reading assignments tells me that exactly nobody would do the reading or fill out the survey, unless they thought it would curry favor with me (so maybe the disclosure is a bad idea). I don’t imagine that is what the consultant has in mind.

It is possible that I have misconstrued an important part of this invitation. So I have offered the emailer the opportunity to write a response, and if he does I will post it. I’ve also decided to redact the identifying details. I realize that lowers the probability of getting a response, but my purpose is to make it known that this kind of thing goes on — not to embarrass the specific parties involved.

The usability of statistics; or, what happens when you think that (p=.05) != (p=.06)

The difference between significant and not significant is not itself significant.

That is the title of a 2006 paper by statisticians Andrew Gelman and Hal Stern. It is also the theme of a new review article in Nature Neuroscience by Sander Nieuwenhuis, Birte U Forstmann, and Eric-Jan Wagenmakers (via Gelman’s blog). The review examined several hundred papers in behavioral, systems, and cognitive neuroscience. Of all the papers that tried to compare two effects, about half of them made this error instead of properly testing for an interaction.
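To make the point concrete, here is a toy illustration in Python with invented numbers (none of these values come from the review or from any real study): one effect clears p < .05 and the other does not, yet a direct test of their difference — the comparison Gelman and Stern recommend — is nowhere near significant.

```python
# Toy illustration of the Gelman & Stern point, with invented numbers.
import math
from scipy.stats import norm

b1, se1 = 0.25, 0.10   # effect A: z = 2.5, p ~ .01  -> "significant"
b2, se2 = 0.10, 0.10   # effect B: z = 1.0, p ~ .32  -> "not significant"

# The right question is whether A and B differ from each other.
diff = b1 - b2
se_diff = math.sqrt(se1**2 + se2**2)
z = diff / se_diff
p = 2 * (1 - norm.cdf(abs(z)))
print(f"difference: z = {z:.2f}, p = {p:.2f}")   # p is about .29
```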

I don’t know how often the error makes it through to published papers in social and personality psychology, but I see it pretty regularly as a reviewer. I call it when I see it; sometimes other reviewers call it out too, sometimes they don’t.

I can also remember making this error as a grad student – and my advisor correcting me on it. But the funny thing is, it’s not something I was taught. I’m quite sure that nowhere along the way did any of my teachers say you can compare two effects by seeing if one is significant and the other isn’t. I just started doing it on my own. (And now I sometimes channel my old advisor and correct my own students on the same error, and I’m sure nobody’s teaching it to them either.)

If I wasn’t taught to make this error, where was I getting it from? When we talk about whether researchers have biases, usually we think of hot-button issues like political bias. But I think this reflects a more straightforward kind of bias — old habits of thinking that we carry with us into our professional work. To someone without scientific training, it seems like you should be able to ask “Does X cause Y, yes or no?” and expect a straightforward answer. Scientific training teaches us a couple of things. First, the question is too simple: it’s not a yes or no question; the answer is always going to come with some uncertainty; etc. Second, the logic behind the tool that most of us use – null hypothesis significance testing (NHST) – does not even approximate the form of the question. (Roughly: “In a world where X has zero effect on Y, would we see a result this far from the truth less than 5% of the time?”)

So I think what happens is that when we are taught the abstract logic of what we are doing, it doesn’t really pervade our thinking until it’s been ground into us through repetition. For a period of time – maybe in some cases forever – we carry out the mechanics of what we have been taught to do (run an ANOVA) but we map it onto our old habits of thinking (“Does X cause Y, yes or no?”). And then we elaborate and extrapolate from them in ways that are entirely sensible by their own internal logic (“One ANOVA was significant and the other wasn’t, so X causes Y more than it causes Z, right?”).

One of the arguments you sometimes hear against NHST is that it doesn’t reflect the way researchers think. It’s a sort of usability argument: NHST is the butterfly ballot of statistical methods. In principle, I don’t think that argument carries the day on its own (if we need to use methods and models that don’t track our intuitions, we should). But it should be part of the discussion. And importantly, the Nieuwenhuis et al. review shows us how using unintuitive methods can have real consequences.

The adaptive and flexible workplace personality: What should I talk about?

I was invited to give a talk at an in-service day for my university’s library staff. They are asking people from around the university to contribute, and since our library staff does so much for the rest of us, I thought it would be nice to help out. (Seriously, who doesn’t think librarians are awesome?)

The theme of the in-service day is “Exercising Your Adaptability and Flexibility” (which I think is geared toward helping people think about changes in technology and other kinds of workplace changes). The working title for my talk is “The Adaptive and Flexible Workplace Personality.” They gave me pretty wide latitude to come up with something that fits that theme, and obviously I want to keep it grounded in research. I have a few ideas, but I thought I’d see if I can use the blog to generate some more.

Personality and social psychologists, what would you talk about? What do you think would be important and useful to include? I have one hour, and the staff will be a mix of professional librarians, IT folks, other library staff, etc. I’d like to keep it lively, and maybe focus on 2 or 3 take-home points that people would find useful or thought-provoking.

More ANPRM coverage in the blogosphere

A quick post, as I’m on vacation (sort of)… Institutional Review Blog has some absolutely terrific coverage of the proposed IRB rule changes, aka the ANPRM. The blogger, Zachary Schrag, is a historian who has made IRBs a focus of his research. In particular his Quick Guide to the ANPRM is a must-read for any social scientist considering writing public comments. All the coverage and commentary at his blog is conveniently tagged ANPRM so you can find it easily.

Also, thanks to Tal Yarkoni for a shout-out over at [citation needed].