Admittedly, I seem to be missing the point

In her blog at Discover, marine biologist Sheril Kirshenbaum writes about her experiences being judged based on her gender, especially in combination with her age and appearance, rather than her professional qualifications. It’s a great read. Sadly, I’ve heard too many similar tales from female colleagues.

But there’s one thing missing from her story… Who is the household-name scientist who propositioned her? Kirshenbaum doesn’t say, but inquiring (and gossipy) minds want to know. She does write: “I remind[ed] him I have a popular science blog and warn never to call back.”

Am I a terrible, awful person because a small part of me wants him to call back?

You are not your brain

I just read a very interesting Salon interview with Alva Noe. Noe is a philosopher who has a new book out, titled Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness.

In the interview, Noe argues that many attempts by neuroscientists to explain consciousness are misguided. He stipulates that understanding the brain is necessary for understanding consciousness. But understanding the brain is not sufficient. Thus, he takes exception to statements like the following from Francis Crick:

You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.

At the outset of the interview, I wondered if Noe was going in the direction of fuzzy, anti-scientific holism. But that was not the case at all. When Noe says that the brain is necessary but not sufficient for consciousness, he is arguing for a rigorous scientific approach to studying the mind, but one that takes a fundamentally different view of what consciousness is. To Noe, consciousness is irreducibly about the relationship between the brain and the outside world. That “irreducibly” is key. It’s not enough for neuroscientists to say, “Well, yeah, I’ve got stimuli in my fMRI designs.” In accounting for conscious experience, you have to go deeper.

The core of Noe’s argument reminds me a lot of the early conflict in psychology between structuralists and functionalists. The structuralists believed that if you want to understand some aspect of mind, you needed to break it down into its lower-level constituent pieces. The functionalists believed that to understand an aspect of the mind, you needed to understand how it relates to the organism and its environment. So, for example, a structuralist might study emotions by trying to identify components of emotion: stimulus, appraisal, physiological response, expressive behavior, etc. And in modern times, many structuralists try to understand emotions by understanding the interactions of brain networks. By contrast, a functionalist might study emotions by asking what does and does not trigger them, how emotions relate to the individual’s goals and beliefs, and how an emotion can change an organism’s relationship with its environment.

The structuralism-functionalism debate was a contentious one in the early days of psychology. If you think the obvious answer is “you need to do both,” you’re right, but only trivially so. It’s easy to pay lip service; but in practice, it’s a challenge to do research and formulate theories in a way that doesn’t hew to one or the other approach. Many neuroscientists would repudiate overt expressions of greedy reductionism, but they approach conscious experience like structuralists. This approach leads to hidden assumptions that affect how they set their agenda and formulate their theories. And occasionally the hidden assumptions are not so well hidden, like in the Crick quote above, or when misguided neuroscientists assume a direct, invariant relationship between physiological activity and mental experience. (To wit: “they exhibited high levels of activity in the part of the brain called the amygdala, indicating anxiety.” Seriously?)

So although it’s easy to say “you need to do both,” it’s a hell of a lot harder to actually do both in a smart way. The interview mostly focuses on how Noe thinks that neuroscientists are doing things wrong. I’m curious to see whether his book has good ideas about how to do it right.

Jared from Subway banished for extreme deviance

A new rule under consideration by the FTC (see also here) would require that ads with customer testimonials show typical results, not just best-case outcomes.

Of course, following best practices in data visualization would mean showing the central tendency and the variability (in all directions). I’m not holding my breath for density plots on the nightly news, though. A single, typical exemplar would still be an improvement over a single, cherry-picked extreme.
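(For the visually inclined, here’s roughly what I mean, sketched in Python. Every number is made up; the point is just to show a typical value against the whole distribution, next to the cherry-picked best case a testimonial would feature.)

```python
# Toy sketch: the full distribution of hypothetical customer "results,"
# with the typical (median) outcome and the cherry-picked best case marked.
# All numbers are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weight_loss = rng.gamma(shape=2.0, scale=3.0, size=500)  # hypothetical pounds lost

fig, ax = plt.subplots()
ax.hist(weight_loss, bins=30, density=True, alpha=0.6, label="all customers")
ax.axvline(np.median(weight_loss), color="k", linestyle="--",
           label=f"typical (median): {np.median(weight_loss):.1f} lbs")
ax.axvline(weight_loss.max(), color="r",
           label=f"the testimonial: {weight_loss.max():.1f} lbs")
ax.set_xlabel("pounds lost")
ax.set_ylabel("density")
ax.legend()
plt.show()
```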

However… Maybe I’m too jaded, but I wonder about unintended consequences. For example, will there be a flood of crappy research after this rule? If companies are required to depict “typical” results, they may churn out poorly designed studies to get the numbers they want, hoping to lend more credibility to bogus products. And if these studies are marketed as “scientific” and then easily (and publicly) disputed, that could feed into kneejerk cynicism in the public about science more broadly.

Consider that FDA clinical trials are one of the most highly regulated forms of research around, with numerous checks and balances designed to ensure integrity.  The system mostly works, but there are still serious concerns about conflicts of interest. How well is the FTC going to ensure the quality of research on consumer products, herbal supplements, diet plans, and the like? Will there be independent investigators, peer review, mandatory publication of negative results, etc.?

An epidemic of narcissism-ism?

Is there an epidemic of narcissism? Maybe so, maybe not — but it’s certainly becoming fashionable to call people narcissists.

At Slate, Emily Yoffe writes, “This is the cultural moment of the narcissist.” She’s certainly doing her part — the article names plenty of putative narcissists. Called out by Yoffe or her sources: Harvard MBAs, Arnold Schwarzenegger, John Edwards, journalists who twitter, Rod Blagojevich, the Octomom, Leona Helmsley, Bill Clinton, Ingmar Bergman, Frank Lloyd Wright, Stanley Kubrick, and Salvador Dali. (Plus we get a bonus diagnosis: Bernie Madoff is a psychopath.)

Yoffe’s article draws on Jean Twenge’s theory that a cultural shift is causing an increase in narcissism among younger generations. The article doesn’t mention that Twenge’s data and interpretations are disputed, which has led to a lively and at times contentious debate. But rather than discuss that controversy head-on (maybe some other time), I want to address a different though related issue:

Why is it becoming fashionable to label other people as narcissists?

One answer, of course, would be that if Twenge is right, then there are more narcissists around to be noticed. But I don’t think that could be the whole picture. The generational theory wouldn’t explain most of the examples named in the article, who are too old to qualify as “Generation Me.”

Another possibility, I think, comes in a way from flipping Twenge’s argument on its head. Twenge argues that (among other influences) social media like YouTube, Facebook, etc. help make people narcissistic by giving them an outlet and an audience to cultivate their self-aggrandizing impulses. But I think it’s important to also consider the ways that new technology makes people accountable. If I boast on Facebook about how cool I was in high school, the firsthand witnesses will call me out right there on my wall. If I claim a raft of prestigious achievements, anybody can use Google to quickly check the facts (and forward them to their friends). In short: the Internet may allow narcissists to reach a wider audience for their boasts, but it has also led to some spectacular takedowns. The takedowns can get more publicity than the original material, in the process putting narcissism on the map.

Oh, and as an aside, this passage from Yoffe’s article irritates me to no small degree:

Personality disorders … differ from the major mental illnesses, such as schizophrenia and manic-depression, which are believed to have a biological origin. Personality disorders are seen as a failure of character development.

False dichotomy FAIL.

Sure I’ll reduce my paid hours. How about the hours that my class meets?

Or maybe you could just ease the tenure requirements a bit. Yeah, that’ll fly.

UO to ask faculty to take voluntary pay cuts

Nick Kristof gets a B- in social psych, and an incomplete in media studies

In today’s NYT, Nicholas Kristof writes about the implications of people choosing their own media sources. His argument: traditional newspapers present people with a wide spectrum of objective reporting. But when people choose their own news sources, they’ll gravitate toward voices that agree with their own ideology.

Along the way, Kristof sort of references research on confirmation bias and group polarization, though he doesn’t call them that, and weirdly he credits Harvard law professor Cass Sunstein with discovering group polarization.

But my main thought is this… Neither confirmation bias nor group polarization are new phenomena. Is it really true that people used to read and think about a broad spectrum of news and opinion? Or are we mis-remembering a supposedly golden era of objective reporting? Back when most big towns had multiple newspapers, you could pick the one that fit your ideology. You could subscribe to The Nation or National Review. You could buy books by Gore Vidal or William F. Buckley.

Plus, confirmation bias isn’t just about what information you choose to consume — it’s also about what you pay attention to, how you interpret it, and what you remember. Did everybody watch Murrow and Cronkite in the same way? Or did a liberal and a conservative watching the same newscast have a qualitatively different experience of it, by virtue of what they brought to the table?

No doubt things have changed a whole heck of a lot in the media, and they’re going to change a lot more. But I’m skeptical whenever I hear somebody argue that society is in decline because of some technological or cultural change. It’s a common narrative, but one that might be more poorly supported than we think.

Research for America

Stimulus money is seeping into the research world. Federal funding agencies are offering one-time-only funding opportunities for researchers through programs like the NIH challenge grants, which are intended to inject money into the economy while correcting some of the recent decline in federal research investment.

Another idea that’s being floated is to use some of the stimulus money to fund post-bac research positions. Over at the NY Times, Sam Wang and Sandra Aamodt have proposed creating a Research for America program that would create paid 2-year scientific research jobs for recent college graduates.

Some might use it to kickstart a career in science. But even those who go on to other careers would carry with them an understanding and firsthand experience of how science works. Considering the current low level of scientific literacy in America, that couldn’t be a bad thing.

Wang & Aamodt’s piece, along with many of the comments in the thread, talks about the pros and cons of traditional investigator grants versus the RfA program. In psychology, I think the benefits would overlap a fair amount. Labor makes up a large part of the expense of conducting behavioral research. Our measurements are acquired not just through equipment, but also through human beings who do things like FACS coding and other expert judgments. The 2-year, full-time commitment would be a boon to researchers who use labor- and training-intensive methods, many of whom currently depend on student research assistants who work a few hours a week for a few months and then move on.

Finally, a use for the heuristics and biases literature

How do you make a video game opponent realistically stupid?

A lot of attention in the artificial intelligence literature has gone into making computers as smart as possible. This has any number of pretty obvious applications: sorting through large datasets, improving decision-making, dishing out humility, destroying the human race.

But for game designers, a different problem has emerged: how to make a game opponent believably bad:

… People want to play against an opponent that is well matched to their skills, and so there are generally levels of AI in the game that the player can choose from. The simplest way to introduce stupidity into AI is to reduce the amount of computation that it’s allowed to perform. Chess AI generally performs billions of calculations when deciding what move to make. The more calculations that are made (and the more time taken), then (generally) the better the computer will play. If you reduce the amount of calculations performed, the computer will be a worse player. The problem with this approach is that it decreases the realism of the AI player. When you reduce the amount of computation, the AI will begin to make incredibly stupid mistakes — mistakes that are so stupid, no human would ever make them. The artificial nature of the game will then become apparent, which destroys the illusion of playing against a real opponent.

The approach being taken by game makers is to continue to make AI engines that are optimally rational — but then to introduce a probabilistic amount of realistic stupidity. For example, in poker, weak players are more likely to fold in the face of a large raise, even when the odds are in their favor. Game designers can incorporate this by creating “easy” opponents who are more likely (but not guaranteed) to fold when the human player raises.
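To make that concrete, here’s a toy sketch of what a tunable, human-like mistake might look like. This isn’t from any actual poker engine; the function names, the “timidity” knob, and the numbers are all invented for the illustration.

```python
# Sketch: the AI computes the rational choice, then with some probability
# commits a plausible human error (folding to a big raise despite good odds).
import random

def rational_action(pot_odds: float, win_prob: float) -> str:
    """Optimal baseline: call when the hand's equity beats the pot odds."""
    return "call" if win_prob > pot_odds else "fold"

def easy_opponent_action(pot_odds: float, win_prob: float,
                         raise_size: float, pot: float,
                         timidity: float = 0.5) -> str:
    action = rational_action(pot_odds, win_prob)
    # The bigger the raise relative to the pot, the more likely the "easy"
    # opponent folds a hand it should call with: a situational mistake,
    # not a random blunder.
    scare_factor = raise_size / (pot + raise_size)
    if action == "call" and random.random() < timidity * scare_factor:
        return "fold"
    return action

# With these (made-up) numbers, the easy opponent folds a profitable call
# about a third of the time when facing a 2x-pot raise.
print(easy_opponent_action(pot_odds=0.25, win_prob=0.40,
                           raise_size=200, pot=100))
```

The key design point is that the error is conditioned on the situation, the way a human’s would be, rather than being random noise sprinkled over the decision.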

So far, it appears that the game designers are using a pretty domain-specific approach — like modifying their poker AI based on the human errors that are common in poker. I wonder if additional traction could be gained from the broader psychology literature on heuristics. Heuristics are decision-making shortcuts that allow humans to make pretty good and highly efficient decisions across a wide range of important circumstances. But heuristics can also lead to biases that make us fall short of an optimal, rational expert, which is what most AI is programmed to be. Would game designers benefit from building their AI engines around prospect theory? Could you model the emotional states, and subsequently the appraisal tendencies, of computer opponents? Maybe someone is working on that already.
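To give a flavor of what that might look like, here’s a sketch of a prospect-theory layer sitting between an engine’s expected-value math and the action it actually takes. The value and probability-weighting functions follow Tversky and Kahneman (1992), with their commonly cited parameter estimates; using a single weighting function for gains and losses is a simplification, and the poker-style numbers are my own invention.

```python
# Prospect-theory layer for a "human-like" opponent: diminishing sensitivity,
# loss aversion, and distorted probability weighting. Parameters (0.88, 2.25,
# 0.61) are the commonly cited Tversky & Kahneman (1992) estimates.

def pt_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value: losses loom larger than equivalent gains."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pt_weight(p: float, gamma: float = 0.61) -> float:
    """Probability weighting: overweight small p, underweight large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_utility(outcomes):
    """outcomes: list of (probability, payoff) pairs for one action."""
    return sum(pt_weight(p) * pt_value(x) for p, x in outcomes)

# A call that is +$60 in expected value can still look bad to a loss-averse
# agent, so this opponent folds where a purely rational engine would call.
call = [(0.40, 300.0), (0.60, -100.0)]   # EV = +60
print(prospect_utility(call))            # comes out negative; folding (0) wins
```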

Newsflash: TV news sucks. Film at 11.

A study of medical news reporting in Australian media has reached the following conclusions:

  • In general, news outlets don’t do a great job of reporting medical research.
  • “Broadsheet” newspapers (vs. tabloids; or what we in America call “newspapers, you know, but not the crappy kind”) do relatively better than other media formats, with 58% of stories being considered satisfactory.
  • Online news sites lag behind print media but are catching up.
  • TV news does the worst job.

Oh, that explains it

A new study by Timothy Salthouse adds to the body of work suggesting that raw cognitive performance begins to decline in early adulthood.

News reports are presenting the basic age pattern as a new finding. It’s not, or at least it’s not new in the way it’s being portrayed. The idea that fluid intelligence peaks in the 20s and then declines has been around for a while. I remember learning it as an undergrad. I teach it in my Intro classes.

So why is a new study being published? Because the research, reported in Neurobiology of Aging, tries to tease apart some thorny methodological problems in estimating how mental abilities change with age.

If you simply compare different people of different ages (a cross-sectional design), you don’t know if the differences are because of what happens to people as they get older, or instead because of cohort effects (i.e., generational differences). In other words, maybe members of more recent generations do better at these tasks by virtue of better schooling, better early nutrition, or something like that. In that case, apparent differences between old people and young people might have nothing to do with the process of getting older per se.

To avoid cohort effects, you could follow the same people over time (a longitudinal design). However, if you do that you have to worry about something else — practice effects. The broad underlying ability may be declining, but people might be getting “test-smart” if you give them the same (or similar) tests again and again, which would mask any true underlying decline.
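To see the two biases side by side, here’s a toy simulation. Every number in it (the rate of true decline, the size of the cohort and practice effects, the test year) is made up for illustration; none of it comes from Salthouse’s data.

```python
# Toy simulation of how cohort effects bias a cross-sectional estimate and
# practice effects bias a longitudinal one. All effect sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
ages = rng.integers(20, 81, size=5000)
birth_year = 2009 - ages                      # arbitrary "test year"

def true_score(age):
    """Assumed truth: flat until 25, then declining 0.5 points per year."""
    return 100 - 0.5 * np.maximum(age - 25, 0)

cohort_boost = 0.1 * (birth_year - 1930)      # later cohorts score a bit higher
score_t1 = true_score(ages) + cohort_boost + rng.normal(0, 5, ages.size)

# Cross-sectional slope mixes real aging with the cohort effect, so it
# comes out steeper than the true decline.
cs_slope = np.polyfit(ages, score_t1, 1)[0]

# Longitudinal follow-up 5 years later: a practice bump partially masks the
# real decline, so the estimated slope comes out too shallow.
practice_gain = 2.0
score_t2 = (true_score(ages + 5) + cohort_boost + practice_gain
            + rng.normal(0, 5, ages.size))
long_slope = np.mean(score_t2 - score_t1) / 5

print("true decline        ~ -0.50 per year (after age 25)")
print(f"cross-sectional fit:  {cs_slope:.2f} per year")
print(f"longitudinal fit:     {long_slope:.2f} per year")
```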

As a result of different findings obtained with different methods, there was a majority view among researchers that fluid performance starts to decline in early adulthood, but also a significant minority view that the decline happens later.

What Salthouse did was to look at cross-sectional and longitudinal data side by side in a way that allowed him to estimate the age trajectory after accounting for both kinds of bias. In principle, this should yield more precise estimates of the shape of the trend than previous studies could. Based on the combined data, Salthouse concluded that the early-adulthood peak was more consistent with the evidence.

It’s understandable, but unfortunate, that the media coverage isn’t going into this level of nuance. Science is incremental, and this study is a significant contribution (though by no means the last word). But news stories often have a set narrative – the lone scientist having a “eureka!” moment with a shattering breakthrough that “proves” his theory. Science doesn’t work that way, but that’s the way it’s usually covered.