After last week’s review of Mark Rosenzweig’s review of Poor Economics, I got asked, via email and comments, what I thought about Martin Ravallion’s review in the same issue of the Journal of Economic Literature. Last week’s post was motivated by an issue I had been thinking about for a while, i.e. the issue of small gains in absolute terms that are large in percentage terms. Once I read Rosenzweig’s piece, which discussed this issue much more eloquently than I had been (in my head), it all clicked together with the debate on “small” vs. “big” economics and became last week’s post.
Of course, it is strange to cover only one of two papers on exactly the same topic in the same issue of a journal. So, this week, I follow up with my reading of Martin Ravallion’s review of the same book. (Full disclosure: what seems like a lifetime ago, when I was a wide-eyed student of Erik Thorbecke and told him that I was interested in studying issues surrounding poverty, he wrote to Martin and recommended me as a summer intern. Since then, Martin has been my supervisor, then my research manager, and is currently my research director.)
It’s fair to say that Ravallion’s piece comes across as much more skeptical of the agenda put forth by Banerjee and Duflo’s book. Some of this should by now be familiar to many readers: Ravallion sees the authors as putting RCTs on a pedestal, and the questions best suited to this “gold standard” are small policy reform questions rather than big ones. Ravallion, along with others, has been on the record for a good while now questioning the premise that experimental evidence is always better, and asking whether experiments should be preferred to collecting more data, and that’s where he starts.
To a reader not familiar with some of these arguments, the objections Ravallion raises can seem like a long laundry list. However, one likely does not need 15 different reasons to be convinced that RCTs should not be seen as a panacea for the “dismal science,” so below I summarize the key ones.
The most serious concern with RCTs is with the estimation of the average treatment effect. To obtain this estimate, researchers use the randomized offer as an instrument for treatment. However, many people decline these generous offers (to take pills, to get immunized, to take up health insurance, etc.). Ravallion argues, citing Heckman, Urzua, and Vytlacil (2006), that when this happens the error term in the impact regression no longer has a mean of zero conditional on treatment status, violating the main assumption needed for this instrumental variables (IV) strategy to hold.
It’s very hard to dismiss this argument. It is true that, in many cases, what we’re interested in is the “intention to treat” estimate, i.e. the difference between those offered the treatment and those who were not, regardless of whether they ended up being “treated.” But, often, we are also interested in whether something works. And, in those cases, we do need the average treatment effect. This is particularly the case when take-up rates are low, as for things such as microfinance or insurance.
However, while this is the case, we hardly see any discussion of this issue in papers that use this IV approach. It certainly shouldn’t be surprising that subjects make rational decisions about whether to accept or reject an offer to participate, and that unobservable characteristics behind that choice are prognostic of the outcomes: after all, in many studies we see differences in observable characteristics between those who take up the offer and those who don’t.
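To make the concern concrete, here is a minimal simulated sketch (pure Python, entirely hypothetical numbers) of what selective take-up does. Individual gains are drawn uniformly on [0, 2], so the true average treatment effect is 1.0, but only people with above-average gains accept the offer. The intention-to-treat (ITT) estimate and the IV (Wald) estimate, i.e. ITT scaled up by the take-up rate, are then both informative about the compliers rather than the full population:

```python
import random

random.seed(0)
N = 100_000

# Latent individual treatment gains: uniform on [0, 2], so the true ATE is 1.0.
gains = [random.uniform(0.0, 2.0) for _ in range(N)]

# Randomized offer (the instrument): half the sample is offered the program.
offered = [random.random() < 0.5 for _ in range(N)]

# Selective take-up: only those with above-average gains accept the offer,
# so compliance depends on an unobservable that predicts the outcome.
treated = [o and g > 1.0 for o, g in zip(offered, gains)]

# Outcome: baseline noise, plus the individual gain if actually treated.
y = [random.gauss(0.0, 1.0) + (g if t else 0.0)
     for g, t in zip(gains, treated)]

def mean(xs):
    return sum(xs) / len(xs)

y_offer = [yi for yi, o in zip(y, offered) if o]
y_ctrl = [yi for yi, o in zip(y, offered) if not o]

itt = mean(y_offer) - mean(y_ctrl)                       # intention-to-treat
takeup = mean([t for t, o in zip(treated, offered) if o])  # first stage
wald = itt / takeup                                       # IV (Wald) estimate

print(f"ITT ~ {itt:.2f}, take-up ~ {takeup:.2f}, IV ~ {wald:.2f}, true ATE = 1.00")
```

With these made-up numbers, the IV estimate lands near 1.5, the average gain among the self-selected compliers, not near the population ATE of 1.0. That is the Heckman–Urzua–Vytlacil point in miniature: the IV strategy recovers an effect for those who choose to comply, which can be quite different from the average effect the policy question asks about.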
Given this, Ravallion argues that there is another way: collect lots of data. The surveys we do these days, and the data collection tools we use, are much more sophisticated than before and often tailored to the problem at hand. Given the cost of experimentation on the one hand, and the (unseen) relative benefits of an experiment over a careful quasi- or non-experimental study on the other, Ravallion argues that it is not a priori clear that we should always go for the RCT.
Perhaps most compelling is the argument that experiments are most useful when combined with non-experimental methods to estimate structural behavioral parameters. The wish that more economics (and econometrics) be brought into the world of RCTs is also where Rosenzweig and Ravallion converge. Often, the value of an RCT is not finding out whether intervention X worked. The important question, almost always, is: “What did we learn from this experiment? Does it tell me something that I can use to make a big(ger) difference in the lives of many people?”
It is true that once we have one experiment, we can have many more to “span all the relevant dimensions of variation in impact by scale and context.” Certainly, this is what many advocate and is what many of us are doing. However, Ravallion thinks this is too ambitious and proposes alternatives. One is to take the structural parameters estimated by marrying the best of both worlds and use them to simulate alternative designs. He is aware that RCT purists will view such an approach as not exactly kosher, but thinks it would be progress over what we have today. Currently, the balance is definitely tilting the other way, but it’s hard to imagine that the pendulum will not swing back at some point in the near future…
The laundry list of criticisms, however, is much too long and risks turning the reader off. The author’s personal encounters with J-PAL associates and a graduate student are unnecessary. Certainly, we can accept that there are people who said those things, but should we jump to the conclusion that the majority of researchers involved in this type of work feel the same? Also, many of the criticisms leveled at RCTs can also be made of research using other methods: external validity, compliance, interference and spillovers, and even ethical considerations. I feel the review would have been more effective in influencing the decisions of graduate students and junior researchers had it kept its focus on the more serious concerns, rather than listing everything under the sun that one can object to about RCTs.
In the end, it’s not hard to read Ravallion’s critique and come away with the view that he is an “anti-randomista,” but that would almost certainly be wrong. Hidden between the lines is his statement that “randomized assignment can help identification has not been at issue.” In fact, he is involved in experimental studies himself. He just wants to get at the bigger questions in development, such as the existence and implications of poverty traps, or the effects of poor-area development programs. And he definitely doesn’t want the method to determine the questions and, by consequence, our research. Who can disagree with that?