
Have RCTs taken over development economics?

By David McKenzie

Last week the “State of Economics, State of the World” conference was held at the World Bank. I had the pleasure of discussing (along with Martin Ravallion) Esther Duflo’s talk on “The Influence of Randomized Controlled Trials on Development Economics Research and on Development Policy”. The conference website should have links to the papers and a video replay up soon, if not already.

The first part of Esther’s talk traced out the growth in RCTs in development economics. She pointed out that in 2000 the top-5 journals published 21 articles in development, of which 0 were RCTs, while in 2015 there were 32, of which 10 were RCTs – so pretty much all the growth in development papers in top journals comes from RCTs. She also showed that the more recently BREAD members had received their PhD, the more likely they were to have done at least one RCT.
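To make the “pretty much all the growth” claim concrete, here is a quick back-of-the-envelope check using only the figures quoted above (an illustrative sketch, not part of Esther’s analysis):

```python
# Top-5 journal counts quoted above: development articles and how many were RCTs.
counts_2000 = {"total": 21, "rct": 0}
counts_2015 = {"total": 32, "rct": 10}

# Non-RCT development papers in each year.
non_rct_2000 = counts_2000["total"] - counts_2000["rct"]  # 21
non_rct_2015 = counts_2015["total"] - counts_2015["rct"]  # 22

# How the 11 extra papers split between RCTs and non-RCTs.
growth_rct = counts_2015["rct"] - counts_2000["rct"]
growth_non_rct = non_rct_2015 - non_rct_2000

print(f"Non-RCT papers grew by {growth_non_rct}, RCTs by {growth_rct}")
# → Non-RCT papers grew by 1, RCTs by 10
```

So of the 11 additional development papers in 2015, 10 were RCTs, which is the sense in which essentially all the growth comes from RCTs.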
In my discussion I expanded on these facts to put them in context, and argue against what I see as a couple of strawman arguments: 1) that top journals only publish RCTs, and that RCTs have taken over development research; and 2) that young researchers have a “randomize or bust” attitude and refuse to do anything but RCTs. I thought I’d summarize what I said on both here.

Have RCTs taken over development research?
I also looked at development articles published in the next tier of good general interest journals (AEJ Applied, EJ, and ReStat), at the three leading development economics journals (JDE, EDCC, and WBER), and at World Development. This resulted in the graph below, which shows the proportion of development articles in each group that were RCTs in 2015, as well as the total number of development papers each group published that year.

We see that RCTs make up a much higher proportion of the development papers published in general interest journals than in development journals. Even so, they remain a minority of development papers in these general interest journals. Moreover, since most development papers are published in field journals, RCTs are a small percentage of all development research: of the 454 development papers published in these 14 journals in 2015, only 44 were RCTs (and this count includes a couple of lab-in-the-field experiments). As a result, policymakers looking for non-RCT evidence have no shortage of research to choose from.
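The overall share is easy to compute from the counts above (again just an illustrative sketch; the per-journal-group numbers in the graph are not reproduced here):

```python
# Counts quoted above: development papers across the 14 journals in 2015.
total_dev_papers = 454
rct_papers = 44  # includes a couple of lab-in-the-field experiments

rct_share = rct_papers / total_dev_papers
print(f"RCT share of development papers: {rct_share:.1%}")
# → RCT share of development papers: 9.7%
```

Under 10 percent of development papers in these journals were RCTs, which is the basis for calling RCTs a small share of all development research.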

Randomize or bust?
Another claim is that the “best and brightest talent of a generation of development economists” has been devoted to producing rigorous impact evaluations of topics that are easy to randomize, and that young researchers take a “randomize or bust” attitude, turning down many interesting research questions if they can’t randomize.
To explore this, I examined the publication records of the 65 BREAD affiliates (the group of more junior members), restricting attention to the 53 researchers who had graduated in 2011 or earlier (to give them time to have published). The median researcher had published 9 papers, and the median share of papers that were RCTs was 13 percent. Focusing on the subset who have published at least one RCT, the mean (median) percent of their published papers that are RCTs is 35 percent (30 percent), and the 10th–90th percentile range is 11 to 60 percent. So young researchers who publish RCTs also write and publish papers that are not RCTs. Indeed, this is also true of Esther and her co-authors on this paper (Abhijit Banerjee and Michael Kremer) – although known as leaders of the “randomista” movement, the top-cited papers of all three researchers are not RCTs.
The rest of Esther’s talk is also well worth watching or reading – she discusses how RCTs have influenced the practice of development research (noting that they have raised the standards of non-experimental work and led to innovations in measurement, among other things) and development policy (using the case of USAID’s Development Innovation Ventures as an example).


Submitted by Martin Ravallion, Georgetown University on

The academic journals are not where the methodological substitution in favor of RCTs is most evident, or most worrying. Rather, it is in development policy evaluations, where we have seen a marked switch in favor of RCTs within institutions such as the World Bank. The Bank’s own Independent Evaluation Group reported in 2012 that over 80% of the impact evaluations starting in 2007-10 used randomization, compared with 57% in 2005-06 and only 19% in prior years. On why this is worrying, see my comments on the paper by Duflo, Banerjee, and Kremer at the same conference.

Submitted by Dennis Bours on

The 2012 source for the claim that over 80% of 2007-2010 impact evaluations used randomization is "World Bank Group Impact Evaluations - Relevance and Effectiveness". It was based on a DIME selection of IEs, with a focus on IEs mostly initiated by DIME.

If you ask DIME to develop the database, you will get a result that is heavily skewed towards randomization... And the report is very clear about it! If you keep on reading, the report says the following (and read the footnotes):
“The higher rate of randomization among recent World Bank IEs is due to the explicit focus of DIME on prospective evaluations (either on the basis of random assignment or random phase-in). Similarly, the SIEF selection guidelines also prioritize randomized IEs.(1) However, because the IE database used for analysis was compiled principally by DIME, the high prevalence of experimental IEs reported in recent years is likely biased upward. (2)”

(1) Footnote: For instance, under the SIEF selection criteria on technical quality, IEs using randomization as an identification strategy can attain the maximum possible score of 10, compared with a maximum possible score of 7 that quasi-experimental IEs can obtain.

(2) Footnote: The increase in randomization among ongoing IEs may partly reflect a bias in the IE database compiled by DIME, as most ongoing IEs included are prospective. Second, the count of ongoing IEs in the database is most complete for IEs initiated by DIME or for those programs that collaborate closely with DIME. Initiatives like DIME and SIEF also favor experimental methods and prospective IEs, as discussed in the report. Also, because most ongoing IEs are still in the discussion, design, or baseline data collection stage, the information on IE design is based on what is planned. It could be that when it comes to execution, it is not possible to fully implement such a design, in which case non-experimental methods of evaluation are used.

Always good to have the full picture.

Submitted by Ida Nadia Djenontin on

Please Professor Ravallion, where can we find your comments on the paper by Duflo?

Submitted by Carlos Oya on

Prof. Ravallion, your paper/reply is not posted on the conference website. Any other weblinks? Very keen to read it, thanks.

Submitted by Asif Dowla on

I am guessing many new PhDs and students on the job market are doing significantly more RCTs. I have not crunched the data, but it would be easy to find out by looking at development job market candidates from the last 5 years or so. So even though David's data does not show a significant increase in RCT-based publications now, the share will only go up in the future. RCTs, I am afraid, could cause a "Dutch Disease".

I am not against RCTs. I find them extremely useful and am amazed by the creativity of the researchers. One positive aspect of RCTs that Esther didn't mention is their ability to debunk what Noah Smith calls "Econ 101-ism". For example, most economists would be against giving out anything for free; the typical Econ 101 suggestion would be to charge at least a user fee. However, Dupas's study on bed nets shows that the Econ 101 answer is wrong: giving out bed nets for free did not lead to misuse or dampen future purchases.


Researchers these days use more and more advanced econometrics with sophisticated mathematics in order to measure as objectively as possible. However, they tend to forget that development is not only a mathematical science; it is, more than anything, a social science as well. The qualitative aspect plays a pivotal role in assessing impact on people's lives and livelihoods. There may be commonalities across the globe on certain parameters, but the key influential factors may be particular to a given economy. Hence, to get a clear picture, researchers must also apply qualitative evaluation techniques.
