Nick Kristof gets a B- in social psych, and an incomplete in media studies

In today’s NYT, Nicholas Kristof writes about the implications of people choosing their own media sources. His argument: traditional newspapers present people with a wide spectrum of objective reporting. But when people choose their own news sources, they’ll gravitate toward voices that agree with their own ideology.

Along the way, Kristof sort of references research on confirmation bias and group polarization, though he doesn’t call them that, and weirdly he credits Harvard law professor Cass Sunstein with discovering group polarization.

But my main thought is this… Neither confirmation bias nor group polarization is a new phenomenon. Is it really true that people used to read and think about a broad spectrum of news and opinion? Or are we misremembering a supposedly golden era of objective reporting? Back when most big towns had multiple newspapers, you could pick the one that fit your ideology. You could subscribe to The Nation or National Review. You could buy books by Gore Vidal or William F. Buckley.

Plus, confirmation bias isn’t just about what information you choose to consume — it’s also about what you pay attention to, how you interpret it, and what you remember. Did everybody watch Murrow and Cronkite in the same way? Or did a liberal and a conservative watching the same newscast have a qualitatively different experience of it, by virtue of what they brought to the table?

No doubt things have changed a whole heck of a lot in the media, and they’re going to change a lot more. But I’m skeptical whenever I hear somebody argue that society is in decline because of some technological or cultural change. It’s a common narrative, but one that might be more poorly supported than we think.

Finally, a use for the heuristics and biases literature

How do you make a video game opponent realistically stupid?

A lot of attention in the artificial intelligence literature has gone into making computers as smart as possible. This has any number of pretty obvious applications: sorting through large datasets, improving decision-making, dishing out humility, destroying the human race.

But for game designers, a different problem has emerged: how to make a game opponent believably bad:

… People want to play against an opponent that is well matched to their skills, and so there are generally levels of AI in the game that the player can choose from. The simplest way to introduce stupidity into AI is to reduce the amount of computation that it’s allowed to perform. Chess AI generally performs billions of calculations when deciding what move to make. The more calculations that are made (and the more time taken), then (generally) the better the computer will play. If you reduce the amount of calculations performed, the computer will be a worse player. The problem with this approach is that it decreases the realism of the AI player. When you reduce the amount of computation, the AI will begin to make incredibly stupid mistakes — mistakes that are so stupid, no human would ever make them. The artificial nature of the game will then become apparent, which destroys the illusion of playing against a real opponent.
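To make that concrete, here’s a minimal sketch of the dial-down-the-computation approach. Chess is too heavy for a toy example, so this uses a simple subtraction game (my choice for illustration, not the article’s): players alternately take 1 to 3 stones from a pile, and whoever takes the last stone wins. With enough search depth the engine plays perfectly; cap the depth and it starts making moves no human would.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(pile, depth):
    """Depth-limited negamax. Scores are from the perspective of the
    player to move: +1 a forced win, -1 a forced loss, 0 unknown."""
    if pile == 0:
        return -1, None  # the opponent just took the last stone: we lost
    if depth == 0:
        return 0, None   # search horizon reached: every move looks the same
    best_score, best_move = -2, None
    for take in (1, 2, 3):
        if take <= pile:
            score, _ = negamax(pile - take, depth - 1)
            score = -score  # the opponent's loss is our gain
            if score > best_score:
                best_score, best_move = score, take
    return best_score, best_move

# With 7 stones, the winning move is to take 3 (leaving a multiple of 4).
print(negamax(7, depth=10))  # (1, 3): sees the forced win
print(negamax(7, depth=2))   # (0, 1): too shallow to tell the moves apart
```

The shallow engine’s mistake is exactly the kind the quote warns about: it isn’t a plausibly human miscalculation, it’s an arbitrary choice made because the engine literally cannot distinguish the options.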

The approach being taken by game makers is to continue to make AI engines that are optimally rational — but then to introduce a probabilistic amount of realistic stupidity. For example, in poker, weak players are more likely to fold in the face of a large raise, even when the odds are in their favor. Game designers can incorporate this into creating “easy” opponents who are more likely (but not guaranteed) to fold when the human player raises.
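Here’s a minimal sketch of what that wrapper might look like. The fold_bias parameter and the raise-size trigger are my guesses at how a designer might parameterize it, not details from any actual engine:

```python
import random

def rational_action(equity, pot_odds):
    """The full-strength engine: call whenever win probability beats the pot odds."""
    return "call" if equity > pot_odds else "fold"

def easy_opponent(equity, pot_odds, raise_size, pot, fold_bias=0.35):
    """Wrap the rational engine in a layer of human-like error:
    facing a large raise, sometimes fold even when calling is correct."""
    action = rational_action(equity, pot_odds)
    if action == "call" and raise_size > pot / 2:
        if random.random() < fold_bias:
            return "fold"  # the realistic mistake: scared off a +EV call
    return action

# A spot where calling is clearly correct (40% equity vs. 25% pot odds),
# but the raise is large relative to the pot:
decisions = [easy_opponent(equity=0.40, pot_odds=0.25, raise_size=200, pot=300)
             for _ in range(1000)]
print(decisions.count("fold") / 1000)  # roughly 0.35
```

The important design property is that the error is probabilistic: the opponent is still fundamentally competent, it just occasionally deviates in the same direction real weak players do.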

So far, it appears that the game designers are using a pretty domain-specific approach — like modifying their poker AI based on the human errors that are common in poker. I wonder if additional traction could be gained from the broader psychology literature on heuristics. Heuristics are decision-making shortcuts that allow humans to make pretty good and highly efficient decisions across a wide range of important circumstances. But heuristics can also produce systematic biases that leave us short of the optimal, rational actor that most AI is programmed to be. Would game designers benefit from building their AI engines around prospect theory? Could you model the emotional states, and subsequently the appraisal tendencies, of computer opponents? Maybe someone is working on that already.
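As a sketch of what the prospect theory route might look like: instead of scoring actions by expected value, score them with the Tversky and Kahneman (1992) value and probability-weighting functions, using their published median parameter estimates. The poker numbers below are made up for illustration, and applying the weights outcome-by-outcome like this matches the full cumulative theory only for simple one-gain, one-loss gambles like this one.

```python
# Tversky & Kahneman (1992) median parameter estimates
ALPHA = 0.88       # diminishing sensitivity for gains
BETA = 0.88        # diminishing sensitivity for losses
LAMBDA = 2.25      # loss aversion: losses loom ~2.25x larger than gains
GAMMA_GAIN = 0.61  # probability-weighting curvature for gains
GAMMA_LOSS = 0.69  # probability-weighting curvature for losses

def value(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p, gamma):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(gamble):
    """Subjective value of a gamble given as [(probability, payoff), ...]."""
    return sum(weight(p, GAMMA_GAIN if x >= 0 else GAMMA_LOSS) * value(x)
               for p, x in gamble)

# Made-up spot: calling a raise costs 100 more and wins a 300 pot 40%
# of the time. Expected value is +60, so the rational engine calls.
call = [(0.4, 300), (0.6, -100)]
print(sum(p * x for p, x in call))  # +60.0: rational engine calls
print(prospect_value(call))         # about -11: the weighted loss looms larger
```

An opponent built this way folds that hand for a recognizably human reason: the possible loss is felt more strongly than the somewhat larger possible gain. And the LAMBDA parameter even suggests a natural difficulty knob: dial the loss aversion down toward 1 and the opponent plays closer to the rational engine.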