I had the honor to deliver the closing address to the Society for the Improvement of Psychological Science on July 9, 2019 in Rotterdam. The following are my prepared remarks. (These remarks are also archived on PsyArXiv.)
Some years ago, not long after people in psychology began talking in earnest about a replication crisis and what to do about it, I was talking with a colleague who has been around the field for longer than I have. He said to me, “Oh, this is all just cyclical. Psychology goes through a bout of self-flagellation every decade or two. It’ll be over soon and nothing will be different.”
I can’t say I blame him. Psychology has had other periods of reform that have fizzled out. One of the more recent ones was the statistical reform effort of the late 20th century – and you should read Fiona Fidler’s history of it, because it is completely fascinating. Luminaries like Jacob Cohen, Paul Meehl, Robert Rosenthal, and others were members and advisors of a blue-ribbon APA task force to change the practice of statistics in psychology. This resulted in the APA style manual adding a requirement to report effect sizes – one which is occasionally even followed, though the accompanying call to interpret effect sizes has gotten much less traction – and a few other modest improvements. But it was nothing like the sea change that many of them believed was needed.
But flash forward to the current decade. When people ask themselves, “Will this time be different?” it is fair to say there is a widespread feeling that indeed it could be. There is no single reason. Instead, as Bobbie Spellman and others have written, it is a confluence of contributing factors.
One of them is technology. The flow of scientific information is no longer limited by what we can print on sheets of pulped-up tree guts bound together into heavy volumes and sent by truck, ship, and airplane around the world. Networked computing and storage means that we can share data, materials, code, preprints, and more, at a quantity and speed that was barely imagined even a couple of decades ago when I was starting graduate school. Technology has given scientists far better ways to understand and verify the work we are building on, collaborate, and share what we have discovered.
A second difference is that more people now view the problem not just as an analytic one – the domain of logicians and statisticians – but also, complementarily, as a human one. So, for example, the statistical understanding of p-values as a function of a model and data has been married to a social-scientific understanding: p-values are also a function of the incentives and institutions that the people calculating them are working under. We see meta-scientists collecting data and developing new theories of how scientific knowledge is produced. More people see the goal of studying scientific practice not just as diagnosis – identify a problem, write a paper about it, and go to your grave knowing you were right about something – but also as designing effective interventions and embracing the challenges of scaling them up to implementation.
A third difference, and perhaps the most profound, is where the ideas and energy are coming from. A popular debate on Twitter is what to call this moment in our field’s history. Is it a crisis? A renaissance? A revolution? One term that gets used a lot is “open science movement.” Once, when this came up on Twitter, I asked a friend who’s a sociologist what he thought. He stared at me for a good three seconds, like I’d just grown a second head, and said: “OF COURSE it’s a social movement.” (It turns out that people debating “are we a social movement?” is classic social movement behavior.) I think that idea has an underappreciated depth to it. Because maybe the biggest difference is that what we are seeing now is truly a grassroots social movement.
What does it mean to take seriously the idea that open science is a social movement? Unlike blue-ribbon task forces, social movements do not usually have a single agenda or a formal charge. They certainly aren’t made up of elites handpicked by august institutions. Instead, movements are coalitions – of individuals, groups, communities, and organizations that have aligned, but often not identical, values, priorities, and ideas.
We see that pluralism in the open science movement. To take just one example, many in psychology see a close connection between openness and rigor. We trace problems with replicability and cumulative scientific progress back, in part, to problems with transparency. When we cannot see details of the methods used to produce important findings, when we cannot see what the data actually look like, when we cannot verify when in the research process key decisions were made, then we cannot properly evaluate claims and evidence. But another very different argument for openness is about access and justice: expanding who gets to see and benefit from the work of scientists, join the discourse around it, and do scientific work. Issues of access would be important no matter how replicable and internally rigorous our science was. Of course, many – and I count myself among them – embrace both of these as animating concerns, even if we came to them from different starting points. That’s one of the powerful things that can happen when movements bring together people with different ideas and different experiences. But as the movement grows and matures, the differences will increase too. Different concerns and priorities will not always be so easily aligned. We need to be ready for that.
SIPS is not the open science movement – the movement is much bigger than we are. Nobody has to be a part of SIPS to do open science or be part of the movement. We should never make the mistake of believing that a SIPS membership defines open science, as my predecessor Katie Corker told us so eloquently last year. But we have the potential to be a powerful force for good within the movement. When SIPS had its first meeting just three years ago, it felt like a small, ragtag band of outsiders who had just discovered they weren’t alone. Now look at us. We have grown in size so fast that our conference organizers could barely keep up. 525 people flew from around the world to get together and do service. Signing up for service! (Don’t tell your department chair.) People are doing it because they believe in our mission and want to do something about it.
This brings me to what I see as the biggest challenge that lies ahead for SIPS. As we have grown and will continue to grow, we need to be asking: What do we do about differences? Both the differences that already exist in our organization, and the differences that could be represented here but aren’t yet. Differences in backgrounds and identities, differences in culture and geography, differences in subfields and interests and approaches. To which my answer is: Differences can be our strength. But that won’t happen automatically. It will take deliberation, intent, and work to make them an asset.
What does that mean? Within the open science movement, many have been working on improvements. But there is a natural tendency for people to craft narrow solutions that just work for themselves, and for people and situations they know. SIPS is at its best when it breaks through that, when it brings together people with different knowledge and concerns to work together. When a discussion about getting larger and more diverse samples includes people who come from different kinds of institutions who have access to different resources, different organizational and technical skills, but see common cause, we get the Psychological Science Accelerator. When people who work with secondary data are in the room talking about preregistration, then instead of another template for a simple two-by-two, we get an AMPPS paper about preregistration for existing data. When mixed-methods researchers feel welcomed one year, they come back the next year with friends and organize a whole session on open qualitative research.
Moving forward, for SIPS to continue to be a force for good, we have to take the same expectations we have of our science and apply them to our movement, our organization, and ourselves. We have to listen to criticism from both within and outside of the society and ask what we can learn from it. Each one of us has to take diversity and inclusion as our own responsibility and ask ourselves, how can I make this not some nice add-on, but integral to the way I am trying to improve psychology? We have to view self-correction and improvement – including improvement in how diverse and inclusive we are – as an ongoing task, not a project we will finish and move on from.
I say this not just as some nice paean to diversity, but as an existential task for SIPS and the open science movement. This is core to our values. If we remake psychological science into something that works smashingly well for the people in this room, but not for anyone else, we will have failed at our mission. The history of collective human endeavors, including social movements – the ways they can reproduce sexism and racism and other forms of inequality, and succumb to power and prestige and faction – gives us every reason to be on our guard. But the energy, passion, and ideals I’ve seen expressed these last few days by the people in this room give me cause for hope. We are, at the end of the day, a service organization. Hundreds of people turned up in Rotterdam to try to make psychology better not just for themselves, but for the world.
So when people ask, “Will this time be different?” my answer is this: Don’t ever feel certain that the answer is yes, and maybe this time it will be.