
Learning from (partial) failure


Today, if you wander on over to the Millennium Challenge Corporation website, you'll find a brief on the results from five impact evaluations of agricultural training projects. This is exciting for a number of reasons.

First, the projects were not an unmitigated success. Of the five evaluations, one did not yield results robust enough to merit discussion in the brief. Of the remaining four, two find increases in farm incomes across the sample, while one finds impacts for part of the sample. However, none of the evaluations finds an overall increase in household income, or in consumption (for the one that looked at this).

Given that poverty reduction is one of MCC's main goals, one might be inclined to take a dimmer view of the MCC after reading this, but I would argue just the opposite. It took significant institutional courage to put this brief, the evaluations (soon to follow), and the peer review reports out on the web for everyone to take a good hard look at (full disclosure: I was one of the peer reviewers).

Not only is this gutsy, but it's also a serious public good. As the brief notes, before the release of this body of evidence, there were only three impact evaluations of farmer training programs out there. The brief (like the individual evaluations) now adds to this body by giving us a set of preliminary lessons the MCC takes away from this experience for the design of these programs. General lessons emerge on focusing training more intensively on a smaller number of farmers, rethinking the use of starter kits, and the value of longer periods of mentoring for farmers.

The critical mass of this evidence seems to be helping the MCC take these lessons to their programs -- not just upcoming ones, but also ones already under way, where they are thinking about mid-course corrections. Indeed, I can imagine that if they had only done one evaluation, it would have been easy to dismiss it as "well, this kind of agricultural training just doesn't work well in country X" and not change anything.

It is also interesting to see the shift towards learning that is going on. As I argued in an earlier post, impact evaluation is at its best when the aim is learning rather than judgment (as did Berk in a subsequent post). The MCC brief makes explicit the argument that impact evaluations need to be designed for learning, not just accountability (which had been the primary goal when this set of evaluations started). Indeed, the MCC's response, among other things, is to convene sector experts and country partners to define a learning agenda. This is exciting, and it is all the more promising when you take into account that the MCC has 40% of its major activities under impact evaluation – some 100-plus different evaluations.

The brief and the attendant evaluations also hold a couple of important lessons for how we do impact evaluation. First of all, they raise issues about randomized roll-out. This is often an eminently politically feasible way to get randomization when capacity constraints bind but the project team doesn't want to leave anyone uncovered. However, as the brief points out, it can sharply limit the likelihood that you will observe results further down the results chain (such as household income or consumption), since the comparison group is treated before longer-run outcomes have time to materialize. Indeed, in my own experience, randomized roll-out has caused a few of my remaining hairs to be pulled out by their roots: once when the roll-out proceeded much faster than we expected, and another time when the larger program shifted emphasis after the first wave and almost didn't implement the second wave of the intervention we were evaluating.

Second, the brief brings home again the point that the evaluator and the project team need to work closely together at every stage, and that the incentives (on both sides) need to be aligned to make sure this happens – particularly when the evaluator is from outside the funding institution or the project team. Finally, the brief raises the point about thinking hard about complementary interventions. It cites the example of irrigation – and indeed, this is one that I have found particularly hard to pin down in terms of how long it will take before it is done (albeit for good operational reasons most of the time). In the MCC case, some of the impacts of the other interventions depended (in theory) on irrigation being ready at the right time – and it wasn't.

At the end of the day, this is a significant effort, particularly when you consider the publication bias (not to mention potential institutional incentives) that keeps a lot of "no impact" results from seeing the light of public discourse. I tip my hat to the MCC for not only putting the evaluations out there, but also giving us a bunch of useful lessons in the process.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
