The Millennium Villages Project Impacts on Child Mortality

This post was co-authored with Espen Beer Prydz. The findings, interpretations, and conclusions expressed in this post are entirely ours. They do not necessarily represent the views of the World Bank and its affiliated organizations.

The Lancet recently published a paper by Pronyk et al. [ungated version] which examines the effects of the Millennium Villages Project (MVP) during its first 3 years in 9 countries. The paper has generated an editorial in Nature along with reactions from Lawrence Haddad, Annie Feighery at UN Dispatch, Matt Collin at Aid Thoughts, Lee Crawfurd at Roving Bandit, and Tom Murphy at View from the Cave.

The key result from the paper is that

the average rate of reduction of mortality in children younger than 5 years of age was three-times faster in Millennium Village sites than in the most recent 10-year national rural trends (7.8% vs. 2.6%).

However, when we correct for a mathematical error and use more recent comparison data, we find that under-5 mortality has fallen at just 5.9% per year at MVP sites, which is slower than the 6.4% average annual decline in under-5 child mortality in the MVP countries nationwide.

Under-5 mortality is described in both the paper and the MVP research protocol as the primary outcome. There are two flaws in Pronyk et al.'s under-5 mortality analysis. The first is the mathematical error that overstates the 7.8% rate of decline at the MV sites. The second is that the 2.6% national comparison figure is based mostly on a time period that predates both the MVP and an acceleration of mortality declines in the relevant countries. The most recent rural trend data available show even more rapid declines than the national trends.

The mathematical error is straightforward. Child mortality is inherently a retrospective measure, as it is derived from the survival probabilities for some period before a given survey. As the paper’s appendix explains,

For the purposes of the analysis, the “baseline” period is defined as the 5 years before the intervention started; the “follow‐up” period is the first 3 years of implementation.

Thus the “year 0” or “baseline” mortality estimates in Pronyk et al. correspond to the 5-year period preceding the start of the intervention at “year 0.” The “year 3” or “follow-up” mortality estimates correspond to the 3-year period after the start of the intervention. The time elapsed between these two periods should be calculated from the midpoints of the two periods and is thus 4 years, as shown graphically in the figure below. Pronyk et al. mistakenly treat this elapsed time as 3 years, yielding an average annual rate of decline of 7.8%. Using the correct elapsed time of 4 years, the true average rate of decline across the MV sites is 5.9%.

[Figure: timeline of the 5-year “baseline” window and the 3-year “follow-up” window, with 4 years between their midpoints]

This somewhat subtle point may be clearer if one considers a reductio ad absurdum: what would the correct elapsed time be if the “follow-up” period were 3 years but the “baseline” period had been 30 years? Clearly, the elapsed time would not be 3 years, because the “baseline” period would cover children’s lives and mortality risk from decades in the past, on average 15 years before the start of the intervention. (In this case, the elapsed time would be 16.5 years.) By the same logic, the mortality risk experience described by the 5-year “baseline” period took place on average 2.5 years before the start of the intervention, and that described by the 3-year “follow-up” period took place on average 1.5 years after the start of the intervention. Thus the correct elapsed time is 4 years, not 3.
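To make the arithmetic concrete, here is a minimal sketch of the correction in Python. It assumes the decline compounds annually; Pronyk et al. do not spell out their exact convention, but a continuous-decline calculation gives essentially the same answer.

```python
# Baseline: the 5-year window before the intervention, so its midpoint
# falls 2.5 years before year 0. Follow-up: the first 3 years of the
# project, so its midpoint falls 1.5 years after year 0.
baseline_midpoint = -5 / 2           # 2.5 years before the intervention
followup_midpoint = 3 / 2            # 1.5 years after the intervention
elapsed = followup_midpoint - baseline_midpoint   # = 4.0 years

# Pronyk et al. report a 7.8% annual decline using 3 elapsed years,
# which implies this total decline over the whole period:
total_ratio = (1 - 0.078) ** 3       # follow-up U5MR / baseline U5MR

# Spreading the same total decline over the correct 4 years:
annual_decline = 1 - total_ratio ** (1 / elapsed)
print(f"{annual_decline:.1%}")       # -> 5.9%
```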

The second flaw concerns the choice of comparison period for the national trends. Pronyk et al. present an estimated annual rate of decline in rural areas of the 9 countries of 2.6% over 2001-2010. However, this estimate is based largely on trends from the first half of the decade, before the MVP started in 2006; the decline of child mortality accelerated dramatically in several countries around the middle of the decade. Additionally, Pronyk et al. do not use recent DHS data from Senegal and Uganda, which show very rapid declines in under-5 mortality in those countries.

The table below shows the under-5 mortality trends at the national level, using the 2 most recent DHS surveys in each of the 9 countries. In every case, these numbers correspond to the 5-year period before the survey. (The figures come from published DHS reports and the STATcompiler tool on the DHS website; they also appear in Table 1 of this paper.) On average, the annual rate of decline across the 9 countries is 6.4%, faster than the 5.9% decline across the MV sites.
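To illustrate how such a figure is computed, the sketch below applies the same compound-decline formula (our assumption about the calculation) to one of the 9 countries. The Kenya inputs come from the published DHS reports: an under-5 mortality rate of 115 per 1,000 in the 2003 DHS and 74 per 1,000 in the 2008-09 DHS, with the surveys taken as roughly 5 years apart.

```python
def annual_rate_of_reduction(u5mr_early: float, u5mr_late: float,
                             years_elapsed: float) -> float:
    """Average annual rate of reduction in U5MR, assuming the decline
    compounds annually between the two survey estimates."""
    return 1 - (u5mr_late / u5mr_early) ** (1 / years_elapsed)

# Kenya: 115 per 1,000 (2003 DHS) vs. 74 per 1,000 (2008-09 DHS),
# each a 5-year-retrospective national estimate, ~5 years apart.
print(f"{annual_rate_of_reduction(115, 74, 5):.1%}")   # -> 8.4%
```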

Pronyk et al. use rural trend figures for each country (calculated for the 10-year period before each survey) rather than national trend figures. In the 6 cases where this information is available from a DHS conducted after the start of the MVP (Ghana, Kenya, Malawi, Nigeria, Senegal, and Tanzania), the rate of decline in rural areas is faster than at the national level, using 10-year rates.

It is possible that in the absence of the project, the experience at the MVP sites would have differed substantially from the national and rural trends. This is precisely why a rigorous impact evaluation is always based on a careful analysis of the counterfactual. Pronyk et al. do present findings from comparison sites. Earlier work co-authored by one of us, published in the Journal of Development Effectiveness [ungated version], offered suggestions on improving the MVP evaluation and raised a number of concerns, including a detailed discussion of the validity of the comparison sites. Those concerns still apply, and we will not revisit them here.

Overall, we observe that 1) under-5 mortality has declined more slowly at the Millennium Village Project sites than nationally in the countries where the sites are located, and 2) child mortality is declining more rapidly in rural areas than nationally. These observations highlight the importance of rigorous impact evaluation with credible counterfactuals to inform our understanding of development projects. Any discussion of the “impact” or “effects” of a project must start with a determination of how to estimate the counterfactual, i.e. what would have happened in the project areas if the project had never taken place. The basis for estimating the counterfactual must be taken seriously: it should not be, for example, a comparison site selected years after the start of the project, or national trends during a substantially different time period than that of the project. This general lesson from the MVP experience, to think seriously about how to estimate the counterfactual before starting to implement the program, is one that can inform the MVP going forward as well as many other development projects, including not least those supported by the World Bank.


Authors

Gabriel Demombynes

Manager of the Human Capital Project at the World Bank

Anonymous
May 11, 2012

oops, 15 co-authors, all PhDs, and they failed at simple math and at selecting a comparison group?

Gabriel Demombynes
May 11, 2012

I have fixed that typo. I have also made one other minor correction: I added the omitted word "evaluation" after the phrase "offered suggestions on improving the MVP" in the second-to-last paragraph.

Staffan
May 14, 2012

Interesting post. The table above and the national AARR of 6.4% seem incredibly high, which is great news if true, and for most of the nine countries is way higher than the collection of estimates at CME (childmortality.org).

I fail to understand and assess the difference between these numbers. Since these data have become the main argument against Sachs's claim of success, it does matter.

You say almost 10% AARR for Senegal; CME says 4.5%.
You say 6.3% for Ghana; CME says 2.8%.
You say 8.44% for Kenya; CME says 2.75%.
You say 6.8% for Uganda; CME says 3.45%.

(The CME data are for 2007-10, somewhat better than the years before.)

How do we explain the difference, and should we trust these figures more than the CME data?

Gabriel
May 15, 2012

Staffan,
Thanks for the comment. The Lancet paper estimates are not from childmortality.org. They are from Demographic and Health Surveys (DHS), just as are those from my paper with Sofia Karina Trommlerova. The DHS numbers in the Lancet paper do not include some of the most recent surveys and use 10-year-retrospective rural numbers. Our DHS numbers do include the most recent surveys and use 5-year-retrospective national numbers. Although the Lancet paper doesn't provide enough information for me to reproduce their calculation, I think these two differences explain the difference between the 6.4% and 2.6% figures.

You raise a separate question: why do the CME estimates at childmortality.org look different from my DHS numbers? My understanding is that CME takes estimates from a variety of sources and fits a trend line to the data. This is a conservative approach, because a single estimate from one survey does not move the trend line that much. Because the estimates come from a variety of data sources, comparability across different surveys is a problem.

Sofia and I took the 2 most recent DHS surveys from each country for our estimates and used the national 5-year-retrospective numbers. This greatly reduces problems of comparability across surveys. We are taking the changes reported by the DHS at face value. If this were a pattern observed in a single DHS pair in one country, we might worry that it was merely a result of sampling error. However, since we see a consistent pattern of large drops across many countries, it likely represents a genuine trend shift, which the CME's more conservative methodology has not yet fully captured.

Gabriel

Lee
May 13, 2012

You might want to speak to the website manager about getting Disqus or some other trackback manager installed here; there was a lot of reaction to the post on Twitter of which there is no record here.

Great post.

Gabriel
May 17, 2012

Bjorn,
Thanks for the 2 excellent comments.

In fact, none of the differences between the rates of decline at the MVP sites and the countries as a whole (using rural or national) are statistically significant, using either the corrected or uncorrected figures. I should have been more careful to say in the blog post that the comparison is based on the simple *point estimates* and that overall the data shows no statistically significant differences.

I have a co-authored comment coming out in the May 26 issue of the Lancet in which we discuss the points raised in this blog post and the issue of statistical significance.

Gabriel

Bjorn Gelders
May 16, 2012

Dear Gabriel and Espen,

Many thanks for this very insightful post.

I was wondering, though, if we are not neglecting the issue of sampling error. Strictly speaking, shouldn't we also check whether the under-five mortality estimates for MVP sites, rural areas, and the national level are statistically significantly different? The confidence intervals may overlap, so that the difference in U5MR is due to sampling error rather than a real difference.

Staffan, as Gabriel already pointed out, the CME estimates are produced annually by (1) compiling all available national-level data from population censuses, household surveys, and vital registration systems; (2) reviewing the quality of the data; and (3) fitting a smoothed trend curve to the selected set of observations and extrapolating that trend to a common reference year. Like any other technique, this approach has its drawbacks, but it does enhance international comparability between countries and serves the purpose of global monitoring (e.g. progress towards the MDGs).

Regards,

Bjorn G.

Rachael Meager
May 10, 2012

Just alerting you to a typo - the sentence before the graphic of the timeline reads "Using the correct elapsed time of 3 years, the true average rate of decline across the MV sites is 5.9%" It should say 4 years, as you point out in the post.

Great post!

Erin Trowbridge
May 18, 2012

The Millennium Villages Project thanks Gabriel Demombynes and Espen Beer Prydz for their careful review and correct criticisms of the Lancet paper. The comparison of the Millennium Villages and national trends was erroneous, and we have corrected the paper at the Lancet. Further detail here: http://www.millenniumvillages.org/field-notes/millennium-villages-proje…

Naman
May 20, 2012

MVP has a history of shoddy evaluation work; they published an earlier evaluation in PNAS in 2007, which mostly flew under the radar.

http://www.pnas.org/content/104/43/16775.full

http://topnaman.com/research/malaria-program-evalutions-part-2/

Courtesy
May 24, 2012

It is interesting that the Millennium Villages Project thanks Gabriel Demombynes and Espen Beer Prydz on the MVP website, but Paul Pronyk does not acknowledge Demombynes and Prydz's analysis in his recent correspondence in the Lancet. If Demombynes and Prydz's analysis helped Pronyk et al. to recognize their mistakes, shouldn't Demombynes and Prydz be acknowledged in the Lancet correction piece? Pronyk et al., after all, did not learn about their mistakes by themselves.