Calling all skeptics


Have you seen an impact evaluation result that gives you pause? Well, now there’s an institutional way to check the results of already published evaluations. 3ie recently announced a program for replication. They are going to focus on internal validity: replicating the results with the existing data and, in some cases, using different data from the same population to check the results.

It works like this. They have committed to funding replication studies of evaluations that are innovative, influential, and/or counterintuitive (you can check out the program description here). They are currently assembling a list of candidate studies for replication. If you have an idea for a study to replicate, you can email them at [email protected] by May 31. The list of potential studies will be reviewed by 3ie and their replication advisory board (composed of folks with a diverse set of backgrounds) to come up with a list of 10-20 candidate studies. These studies will then be open for folks to pick up through applications to 3ie, in a process to be announced later (we will post a link when it becomes available).

Another interesting thing about this announcement is that it looks like they did a review of existing replications. Their list is here, and they count a paltry 15 replications, a bunch of which focus on the same paper. So clearly there is work to do. What do you think of this initiative? Any ideas on the process and the things they should focus on?


Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office

Join the Conversation

May 22, 2012

This is a welcome initiative. But as it is starting, we need to think about whether we need to increase our knowledge (and the certainty about the accuracy of that knowledge) on the extensive or intensive margin (more knowledge vs. more accurate knowledge). When it comes to impact evaluations (especially controlled impact evaluations), we have enough tools and opportunities (workshops, refereeing) to gauge the technical and operational soundness of an impact evaluation (internal validity). In this case, replications to determine the external validity rather than the internal validity of an impact evaluation would be the way to go to increase our confidence in the accuracy of the knowledge that was generated.

May 24, 2012

Dear Blogger Sir. Thank you for calling all skeptics. I am, I confess, among them. I am indeed one of those party spoilers who wonder what the material connection is between, on the one hand, the worldwide development of new institutions, departments, units, academic posts, and research programs, largely in rich countries, to refine evaluation methods for (and, very optimistically, the design of) development aid in poor countries, and, on the other, the actual dialectic (i.e., the messy facts) of economic and social development in those poor countries. Very little, I fear. I can't see how increasing knowledge of how to improve the delivery of aid has much to do with the imperatives of actual development, which are absolutely the business of the governments and people of the countries concerned, who are in turn unlikely to spend time waiting for the guidance of lengthy, replicated, yet unreliable randomized controlled trials run by foreign academic institutions in order to make urgent and semi-urgent decisions about how to spend their resources. To doubting Thomases and other simple-minded people like me, this research effort seems a costly distraction. Could I be wrong?

May 28, 2012

The initiative is welcome, but the process is not. Why not just have one process, whereby researchers submit their replication ideas, a subset of which are chosen, and the same researchers are then required to develop more detailed proposals prior to final acceptance? The current process means that good replication ideas are likely to get contracted out to those who didn't come up with them. To critics, this looks a little like a way of ensuring that even controversial ideas (first round) can be contained by giving them to relatively uncritical researchers (second round). Isn't that a little at odds with the basic principles of academic research, and in fact with the claimed objectives of the initiative? Indeed, if I had a good, publishable replication idea, why exactly would I want to submit it to this initiative?

Ben Wood
May 30, 2012

Thank you for your comments. 3ie believes that the marginal benefit of providing robust evidence to developing world policy makers outweighs the marginal cost of commissioning internal replications. While the tools for replication exist, they are rarely used, owing to the publication incentive structure. By targeting innovative, influential, and controversial impact evaluations, we plan to help fill the gap between theoretical support for results validation and actual replication studies. Additionally, our replication researchers will not be limited to controlled impact evaluations, as 3ie supports quasi-experimental as well as RCT methods. We acknowledge the additional need for external validation and hope future researchers take up that task.

Ben Wood
May 30, 2012

Doubters play an important role in development. So your comments and concerns are duly noted. 3ie is committed to engaging with our developing world stakeholders. We believe that validating some of the most policy-relevant impact evaluations will increase the robustness of this research. Our replication researchers will evaluate on-the-ground programs and aren’t limited to randomized controlled trials.

Increasing the robustness of evidence that supports effective development should help policy makers make more informed decisions. Any assistance or suggestions you might provide to help us with that goal would be greatly appreciated ([email protected]).

Ben Wood
May 30, 2012

Thank you for your comments. We chose this process to open the crowdsourcing to individuals throughout the development field. Thus, we are able to receive ideas from researchers, policy makers, and academics without limiting ourselves to people with the knowledge and time to conduct the replication themselves.

Your concerns over the potential disconnect between the first and second rounds are valid. But there is nothing to prevent a researcher from proposing a publication for replication and then submitting a proposal to conduct that same replication. Alternatively, 3ie would be happy to help disseminate replication research that individuals choose to undertake independently. Our goal is to increase the amount of robust evidence available for policy makers to make better informed decisions.