Jim Manzi, a senior fellow at the Manhattan Institute, recently wrote an article that contains perhaps the most incisive critique of randomized controlled trials (RCTs) I've seen so far. RCTs already have a long history in the social sciences in the U.S. (I was embarrassingly unaware of this, even though I have spent a lot of time blogging about their use in the field of development.) Criminologists have tried to use them to figure out what might reduce recidivism, with little luck:
...since the early 1980s, criminologists increasingly turned to randomized experiments. One of the most widely publicized of these tried to determine the best way for police officers to handle domestic violence. In 1981 and 1982, Lawrence Sherman, a respected criminology professor at the University of Cambridge, randomly assigned one of three responses to Minneapolis cops responding to misdemeanor domestic-violence incidents: they were required to arrest the assailant, to provide advice to both parties, or to send the assailant away for eight hours. The experiment showed a statistically significant lower rate of repeat calls for domestic violence for the mandatory-arrest group. The media and many politicians seized upon what seemed like a triumph for scientific knowledge, and mandatory arrest for domestic violence rapidly became a widespread practice in many large jurisdictions in the United States.
But sophisticated experimentalists understood that because of the issue’s high causal density, there would be hidden conditionals to the simple rule that “mandatory-arrest policies will reduce domestic violence.” The only way to unearth these conditionals was to conduct replications of the original experiment under a variety of conditions. Indeed, Sherman’s own analysis of the Minneapolis study called for such replications. So researchers replicated the experiment six times in cities across the country. In three of those studies, the test groups exposed to the mandatory-arrest policy again experienced a lower rate of rearrest than the control groups did. But in the other three, the test groups had a higher rearrest rate.
This reminds me quite a bit of the controversies surrounding recent RCTs of microfinance. As I pointed out in an earlier post, one of the highly problematic issues with RCTs of microfinance is extrapolation: a finding that microcredit does little to raise incomes in an urban slum in India may tell us nothing about a rural setting in Kenya. We are immediately forced back to theory and guesswork.
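To make the replication problem concrete, here is a minimal simulation sketch in Python. Everything in it is hypothetical: the rearrest rates, sample sizes, site names, and the two-proportion z-test are illustrations of the mechanics, not Sherman's actual data or analysis. It runs a Minneapolis-style three-arm trial at two sites whose underlying effects point in opposite directions.

```python
# Sketch of a three-arm randomized trial run at two sites.
# All rates and sample sizes are made up for illustration.
import math
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(true_rate, n):
    """Simulate rearrest outcomes for n incidents assigned to one arm."""
    return rng.binomial(1, true_rate, n).mean()

def two_prop_z(p1, p2, n1, n2):
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return math.erfc(abs(p1 - p2) / se / math.sqrt(2))

# Hypothetical "true" rearrest rates per arm. At site_A mandatory
# arrest deters; at site_B it backfires -- the hidden conditionals
# that only replication can reveal.
sites = {
    "site_A": {"arrest": 0.10, "advise": 0.18, "send_away": 0.17},
    "site_B": {"arrest": 0.22, "advise": 0.16, "send_away": 0.18},
}
n = 330  # incidents per arm (hypothetical)

for name, rates in sites.items():
    observed = {arm: simulate_arm(p, n) for arm, p in rates.items()}
    p_val = two_prop_z(observed["arrest"], observed["advise"], n, n)
    print(name, {arm: round(r, 3) for arm, r in observed.items()},
          f"arrest vs. advise p={p_val:.3f}")
```

With the same design and the same test, one site will typically show a statistically significantly lower rearrest rate under mandatory arrest while the other shows the opposite. Nothing inside either single trial reveals this, which is exactly why the replications mattered.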
There is one area where RCTs have proven a rousing success -- the business world. As Manzi explains, Capital One was established with an explicit strategy of running randomized tests on every aspect of its operations, from the color of its solicitation envelopes to its collection policies. But the approach worked for Capital One only because it could replicate these experiments cheaply and repeatedly, which gave it insight into how sensitive particular findings were to circumstances.
Where does that leave development organizations? Should we simply replicate RCTs of microfinance in every slum of the world, à la Capital One? This is obviously neither practical nor affordable. In the face of our own ignorance, Manzi concludes that we should stay humble and stick with trial-and-error processes of social evolution.
This makes sense, up to a point. RCTs will never replace mechanisms like a well-functioning market or democratic competition. But they can certainly focus attention on interventions that aren't living up to their potential. Returning to the case of microfinance, the recently published RCTs focused on microcredit rather than on the savings side of the equation. Even if the general observation that "access to savings is important" cannot be translated into a precise policy for every developing country, it is still useful in directing policymakers' attention to the next trial they should run in the trial-and-error process of government.
(H/t: Andrew Sullivan)