Occupy UN: Why Sachs is not the only one to blame for the MVP mess.



One of these men is receiving the bulk of the criticism in the development blogosphere. But, what about the people bankrolling him?

By now, you have surely heard that the Millennium Villages Project has received another $72 million to conduct a second stage (see the discussion by the Poverty Matters Blog here). Sachs and a colleague responded to the sustained pressure from the blogosphere, mainly from Michael Clemens of the CGD and Gabriel Demombynes of the World Bank, with a piece so absurd that Development Impact, which had stayed silent on this issue for a long time, couldn't take it anymore. David McKenzie's thorough undressing of the response is here. Between blog posts by Blattman, Clemens/Demombynes, McKenzie, and the exchanges these blogs spurred on, I believe that there is REALLY nothing left to be said on (i) whether the MVP is being evaluated properly (No); (ii) whether it should be evaluated (perhaps/probably); (iii) whether an evaluation would answer the really interesting questions (No).

So, why am I writing? Because while Jeff Sachs deserves most of the criticism he is receiving, I have not seen any discussion of why the UN and George Soros' Open Society Foundations agreed to dish out more money and stand on a podium with Sachs and give him full support. David finished his post with this paragraph yesterday: "Finally, one must also question what donors like the Open Society Foundation and the UN relied on in terms of evidence when deciding to fund this second phase of the MVP project. Either donors are happy to fund such a program based on factors other than empirical evidence, or arguments like those above are misleading decision-making."

Looking into this just a tad bit further, I came across an op-ed written by Ban Ki-moon and a blog post by George Soros from a couple of weeks ago. They provide a tiny window into what just might be going on. The Secretary-General of the UN talks about growing up poor during the Korean War, recalls his visit to the MVP village in Malawi, and tells of his excitement at seeing modern technologies helping poor people.

Soros, on the other hand, admits that the first $50 million he gave was a humanitarian gesture, even if nothing came of the MVP. Now, however, he is convinced. How? He, too, visited some MVP sites in Kenya and Malawi; if not, he is parroting anecdotes provided by someone else. He may have been a doubter in 2005, but now he is a convert:

Five years ago I believed that the Millennium Village Project deserved a shot. After closely monitoring its progress I now believe it should be scaled up. We have seen how the Millennium Villages can transform people’s lives, and with this next phase I believe it is well on its way to transforming entire countries and regions.

So, the impact evaluation nerds are losing the battle to "I've seen it with my own eyes: it is working" and "we've been monitoring progress: it's all good." This is why I am writing.

The tack that the critics have taken so far is that the MVP needs to be evaluated. That has not worked (yet). I propose another tack: we should start talking to the people backing the MVP at the UN; Michael Clemens should have his people start calling George Soros' people. We should tell them that they should not be giving away precious resources, which could be used to implement interventions with far better evidence of effectiveness, to people who have not even started to provide evidence of impact for their projects. In other words, Occupy the United Nations or the Open Society Foundations.

If we don't succeed in at least getting a more coherent explanation from the donors than "I've seen it working," then we're not doing our jobs.

P.S. For full disclosure, Gabriel Demombynes is a friend and colleague from the World Bank. I have met Michael Clemens on several occasions and consider him a colleague. I've never met Messrs. Moon, Sachs, or Soros.



Berk Özler

Lead Economist, Development Research Group, World Bank

Gabriel Demombynes
October 20, 2011

Thanks, Berk, for adding a new perspective to what I agree has become a largely stale discussion.

Just two things:
1) Our study has been published in the peer-reviewed Journal of Development Effectiveness:

2) The point of our study is NOT to criticize the Millennium Villages Project. We've said this many times, including in a recent post on the Guardian's blog:

The main point of the study is not even to convince the MVP to undertake a rigorous evaluation. Rather, our aim was to look at the MVP as a case study which demonstrates the value of rigorous evaluation, in order to generate debate and (we hope) make it more likely that impact evaluation is seen as a worthwhile pursuit in future projects. The lively blog discussion and the fact that the paper is now on the syllabus in many development courses are more than we expected to achieve with the study.

By the way, I was very pleased to learn that Paul Pronyk, the MVP's Director of Monitoring and Evaluation, included our paper on the syllabus for his Global Health Practice course. In one class session, two groups of students debate the MVP evaluation:


October 27, 2011

Thanks Berk - in that case, could you perhaps share a couple of links to the detailed discussions of how the IE protocols would be set up?

While I do not work on integrated rural development, I do often work on complex interventions, and it might stimulate my thinking to see thoughtful reviews of, for example, what the trickiest intervening variables are in measuring outcomes from a local ownership process, or how to set up hypotheses about what capacity building for resilience will achieve so that it can be measured.

I haven't read everything in this debate, but most of what I have read takes as its starting point the claims made by Sachs, which is relatively uninteresting as a learning proposition.

Berk Özler
October 21, 2011

Hi Amanda,

These are really good comments and go to the heart of why I felt compelled to write yesterday.

1. Many projects do go on to get scaled up without solid evidence, often with good reason. Cycles of producing evidence are not always in sync with cycles of governance. Governments and policymakers have important decisions to make constantly, and the world does not stop so they can wait for researchers to produce credible evidence. But, of course, the MVP is different -- precisely because it is an experiment that is trying to prove or disprove something.

2. You say: "If Sachs or MVP donors are making the argument 'why are you picking on us when this is pretty much the way of the world,' I'm sympathetic." Me too. This is why I posted this piece. I would like Sachs or the donors to come out and say exactly that. Or provide another explanation. I have friends who give out shoes to children in developing countries; I know people who give bicycles to girls. I don't go around pestering them about how it might be better if they were just giving cash, because I know that if that were the only alternative they would likely not give anything (or not as much), which would be worse. But, if they start trying to justify why they are doing this with weak economic arguments or bad evidence, then I would debate them down to the wire. This is, I think, what is happening here. I just would like the MVP proponents to plainly and clearly explain why they are going forward with this, and what the aim is, instead of giving explanations that have proven too easy for others to discredit.

3. I agree that the power of the field visit is large -- for everyone, including myself. It takes self-restraint and familiarity with quantitative data to put things in perspective. My post was not meant to belittle or ridicule the experiences of Messrs Soros and Moon in the field. But, these brief visits alone without attention to the counterfactual can be misleading and costly, which brings us back to why this blog, and other efforts like it exist in the first place.

Thanks again for the excellent comments.


Berk Özler
October 21, 2011

And, completely agreed that other institutions, such as USAID, DfID, Gates, MCC, UN, WB and governments should get the same type of scrutiny. I know many of them are devising internal checks to make sure that these mechanisms are in place, but external scrutiny does help when these fail...

Amanda Beatty
October 21, 2011

Related to Gabriel's point, should projects that were never piloted, or were piloted but "proven" effective only with mediocre/questionable evidence, be scaled up? Of course most reasonable people who support any type of M&E role in project management, even Sachs, would likely say no. But I would venture to guess that this happens for 99% of projects, *even* at the World Bank.

If Sachs or MVP donors are making the argument "why are you picking on us when this is pretty much the way of the world," I'm sympathetic. Governments and donors around the world spend billions yearly on projects for which there is no or little credible, especially IE, evidence. (I am basing this solely on my own experience working for donors, and I imagine you would agree based on your WB experience, even if you as a researcher have zero role in how loan or grant money is spent. Indeed, a lot of data goes into WB project preparation but it rarely rises to the level anyone is criticizing the MVP for not producing.)

But like Gabriel says, his point wasn't just about the MVP. The hope is that there is more evaluation before scale-up, and more accounting for evidence in funding decisions (USAID's DIV is an attempt at changing this). And I would argue that if you are going to profess that your project is *the* answer to eradicating poverty worldwide, then it deserves a decent amount of scrutiny. So I think the MVP attention is deserved (and I am also a biased Clemens fan), but other institutions like USAID, DfID, Gates, MCC, UN, WB and governments should get it too. I know this blog, DIME, and the many other IE and M&E efforts happening across the WB are working to change that, but "I know it's working because I have seen it with my own eyes" is a compelling argument I have also heard from your (and my former) colleagues and clients.

Andrew Beath
October 27, 2011

Hi Amanda,

Greetings! As someone who was once pilloried by his friends for being the ultimate fan of the co-founder of our master's program, I hate to say it, but I am very frustrated and perplexed by the arguments being put forth by Sachs in defense of the indefensible. As someone who has spent the past 5 years implementing a large-scale RCT of a community development program not unlike the MVP, but amidst all the challenges of modern-day Afghanistan (www.nsp-ie.org), I find many of the arguments particularly strange.

- Andrew

October 22, 2011

I have followed the discussion surrounding the MVPs because I am a private social investor and wanted to make sure I was getting bang for the buck. Here is my take.
I appreciate your effort in providing an independent and critical view of the project. Such perspectives are essential to improving development interventions, and to providing information for better philanthropic investments.
However, while criticism and independent evaluation are good when constructive, simply stating that donors should stop funding such a project because you believe the monitoring is insufficient seems unrealistic and unconstructive.
I am not a scientist; I am just looking into it as an investor, and for me, I have received sufficient and accurate information on the project's results. I know there might be tons of M&E tools, and I have seen the WB's work on impact evaluation, but as an investor, I would rather have a scientific but reasonable M&E system than a super-accurate, high-cost one that would provide me with the same/similar outcome. I have seen (and invested in) projects that concentrate too much attention on what is not the core (and, fair enough, some of the WB's fit into this category), spending fortunes on high-cost processes that neither lead to better outcomes nor provide more insight for investment decisions.
I think a critical view and external evaluation of the MVPs are healthy, as long as they are constructive. But you did not offer a solution to the problems raised. If you were Sachs or Soros, what would your next steps be to make the MVP work? (If your recommendation is simply to stop investing in the MVPs, that would be unrealistic.)

Berk Özler
October 22, 2011

Thanks for the comments. My understanding of the debate is that people who have looked very carefully into the M&E of MVP and published their findings are not convinced that what they have seen is scientific and reasonable. Furthermore, the marginal evaluation costs of collecting some data in defensible comparison communities would presumably be a tiny fraction of the funds invested in MVP. And, thirdly, for large projects like this from which there could have been a huge public good component of knowledge, the acceptable ratio of cost of evaluation vs. cost of intervention is presumably higher.

Of course, if investors are happy enough with what they have seen to reinvest in a project, the incentives for the implementers to do anything differently are small. I guess that's why the dissenters have been trying to put public pressure on the MVP to do better, and to debate them into explaining why what they're doing is sufficient.

On a recommendation, I am not sure that I am in a position to make one, but let me venture two thoughts. First, I am not sure why it is not realistic to stop or pause investing in MVP villages. Second, the hope is that funders in organizations like the one Amanda mentioned above, or private social investors like yourself, will demand that similar funding proposals in the future devote more time to thinking about the important questions, how to assess them, and the circumstances under which a scale-up would be warranted.

Thanks again for the comments.


Gabriel Demombynes
October 22, 2011

Berk answered the question from the private social investor well. I would just add one point: in the absence of a compelling evaluation of the MVP, the experience with similar programs may offer some basis for thinking about its likely effects. As we explain in the paper, with many references, "integrated rural development" programs were all the rage at the World Bank and in the broader international development community in the 1970s. That approach was rejected by the mid-1980s, based on widespread evidence that it had failed to deliver on its objectives. It is clear that the MVP model is at least *broadly* similar to the IRD approach. The MVP's advocates argue that the MVP model is substantially different. In the paper, we summarize this position and our interpretation of what the IRD experience means for the MVP:

"The MVP (Sanchez et al. 2007) describe what they say are differences between the MVP and IRD: that MVP targets are quantitative and time-bound; that IRD projects were ‘based on insufficient experience with local agricultural systems’; that ‘5- to 10-year commitment of the MVP is longer than the 2–3 year duration of IRD projects’; the ‘decentralization and devolution of authority to local government’; the fact that the pool of available development aid is much larger now; and the fact that there have been advances in agriculture, health, and information technology."

"These differences might potentially offer some basis for optimism that the MVP may have better prospects than past model village programmes, and it is important to recognise that past experiences may not be a reliable guide to the expected impact of the MVP. Nonetheless, the troubled history of model village programmes invites a degree of scepticism that future village-level package interventions can spark sustained development. This makes it particularly important that the impact of projects of this type be subject to rigorous evaluation."

Erin Trowbridge
October 24, 2011

For a response from Jeffrey Sachs, Paul Pronyk and Prabhjot Singh of the Millennium Villages Project, please see: http://2mp.tw/6k

Berk Özler
October 25, 2011

Thanks for the comments. These issues have been debated, I would say ad nauseam, in newspapers, other blogs, debates at a conference, and in a forthcoming paper, so my sense is that the marginal returns from rehashing them are small...

A Development Practitioner
October 25, 2011

In this debate, there is surprisingly little estimation of a) what it would have cost to have an effective comparison group for the MVP, and b) whether that would have been feasible. I am surprised that most of the commentary relates to the "is the MVP good/better than zero" question, rather than "is it better than its opportunity cost" or "what aspects of it are most/least successful." The dialogue is stuck at a very low level, emphasizing what should or shouldn't have been done, not why we should care or what we can learn from similarly complex efforts, with similarly difficult concerns (e.g. MVP investment crowding out government roles, sustainability of complex intervention effects, tracking unexpected outcomes, etc.).

Put simply: I'm sure a rigorous impact evaluation, had it been done (feasibility/cost aside), would reveal that several of the claims of its success are overstated, but that it has had some positive effect. So what?

To the critics, in particular: why should I care if you're right? It's quite well established, among serious development practitioners, that impact evaluation > no impact evaluation.

So, for both sets of actors, critics and the MVP team alike: how about demonstrating your mettle by putting together a sample IE protocol for a project of this complexity (rather than referring to other, inevitably simpler, projects that had IEs)? Get into the details of what was done, and what questions you'd seek to answer beyond "can we say that this had any effect." Pull out some of the factors, expose implicit assumptions, and move toward a blogosphere consensus on some of the trickier bits of evaluating complex projects that are intended to spill over and to have their interventions redefined by their target populations. Set us on a path toward a better understanding of HOW to do IE for such actions in the future, rather than WHETHER to do it.

Amanda Beatty
October 25, 2011

Berk, thanks much for a thoughtful and speedy reply. I agree with all of your points with just a few comments. On the pilot issue, my experience is that it is pretty common to have a pilot just to test run the project, rather than to determine whether it works or not. And my guess is that was really the point of the first few MVP sites. The problem is that the MVP team is kind of spinning this another way - that the pilots were to prove efficacy; and, like you say, the MVP team is trying to justify scale up with higher standards of evidence than they could actually produce.

I have been a bit down about the whole MVP discussion since Sachs is someone I found inspiring and learned a lot from, so here is my letter to him http://goo.gl/Eoxzo (I think you need to be logged into FB to read it.)