Submitted by Jessica Goldberg

When Kathleen Beegle, Emanuela Galasso, and I set out to study the large public works program in Malawi, we shared the expectations of the Government of Malawi, which ran the program, and the World Bank, which funded it, that it would likely improve food security and increase the use of fertilizer. We anticipated that our experiment would help improve the design of the program, and would allow us to study seasonality in consumption and liquidity constraints. Instead, we learned that the program just doesn’t work as intended. From our paper, “The effect of the program on the PCA index of food security is close to zero (-0.079, in column seven). The 95 percent confidence interval excludes positive impacts of greater than 0.08 standard deviations relative to the outcome in the control group. Overall, a program designed to improve food security did not: households offered the opportunity to participate in public works in November/December 2012 and January 2013 did not have better food security during the lean season than households in villages without a public works program.” (https://www.sciencedirect.com/science/article/pii/S0304387817300354)
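To make the quoted numbers concrete, here is a minimal, purely illustrative sketch of how a null result like this one is read off an intention-to-treat regression: a point estimate near zero on a standardized index, with a 95 percent confidence interval whose upper bound rules out all but small positive impacts. The data are simulated and the variable names (food_security_index, treated) are hypothetical; this is not the paper's code or specification (which, for instance, clusters standard errors at the village level).

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000

df = pd.DataFrame({
    # random assignment of villages/households to the public works offer
    "treated": rng.integers(0, 2, size=n),
})
# outcome: a PCA-based food security index, standardized to the control
# group (mean 0, SD 1), simulated here with essentially no true effect
df["food_security_index"] = rng.normal(0, 1, size=n) - 0.05 * df["treated"]

# intention-to-treat regression of the index on assignment
fit = smf.ols("food_security_index ~ treated", data=df).fit(cov_type="HC1")
est = fit.params["treated"]
lo, hi = fit.conf_int().loc["treated"]

# a near-zero estimate with an upper CI bound below ~0.1 SD is the
# pattern described in the quote above
print(f"ITT estimate: {est:.3f} SD, 95% CI [{lo:.3f}, {hi:.3f}]")
```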

It was hard to get either academics or policy makers to accept the results, even though this is an expensive program that is a major part of the social safety net in a very poor country. Part of that reaction is because of a limitation of the study itself – we can’t pin down the reason that the program doesn’t work, and therefore we can’t offer specific advice about how to fix it. I also think it’s fair to be reluctant to update strong priors on the basis of a single study, even if it’s large and well-identified. However, I think we as a profession have to be careful not to be more skeptical of null results than of positive treatment effects!