This week is the World Bank’s annual conference on development economics. One of the papers being presented, by my colleague Kate Orkin (together with co-authors Tanguy Bernard, Stefan Dercon and Alemayehu Taffesse), examines a video intervention and its impact on aspirations among the poor in Ethiopia. In particular, Kate and her co-authors ask: can we shift aspirations and behavior by showing people more of what is possible?
Several surveys of U.S. employers identify lack of soft skills as the area where young job-seekers have the largest deficiency.
For impact evaluation to inform policy, we need to understand how the intervention will work in the intended population once implemented. However, impact evaluations are not always conducted in a sample representative of the intended population, and sometimes they are not conducted under the implementation conditions that would exist at scale-up.
While discussing a cash transfer program, a senior government official in Nicaragua spoke for many when she worried that “husbands were waiting for wives to return in order to take the money and spend it on alcohol” [Moore 2009]. This concern comes up again and again with cash transfer programs. For at least some of the poor, skeptics will ask, isn’t that how they became poor in the first place?
A couple of weeks ago, I came across a fresh World Bank working paper (Doemeland & Trevino 2014) that examined downloads and citations for World Bank policy reports. The paper reports that 31 percent of policy reports have never been downloaded and 87 percent have never been cited.
Remittances sent by migrant workers to developing countries have soared in the past two decades. According to the World Development Indicators, workers’ remittances to developing countries were just US$47 billion in 1980 (in constant 2011 dollars). After barely rising by 1990 ($49 billion), they doubled by 2000 ($102 billion), and from there, tripled by 2010 ($321 billion).
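As a quick sanity check on those growth multiples, here is a minimal Python sketch using only the figures quoted above (not the underlying WDI series):

```python
# Back-of-the-envelope check of the remittance figures cited in the post:
# WDI workers' remittances to developing countries, constant 2011 US$ billions.
remittances = {1980: 47, 1990: 49, 2000: 102, 2010: 321}

decades = sorted(remittances)
for start, end in zip(decades, decades[1:]):
    multiple = remittances[end] / remittances[start]
    print(f"{start}-{end}: {multiple:.1f}x "
          f"(${remittances[start]}bn -> ${remittances[end]}bn)")
```

Running it gives roughly 1.0x for 1980–1990, 2.1x for 1990–2000, and 3.1x for 2000–2010, consistent with “barely rising,” “doubled,” and “tripled.”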
The impact evaluation of a new policy or program aims to inform the decision on wider adoption and perhaps even national scale-up. Yet in practice an impact evaluation often studies one localized area – a sample “site” in the terminology of a newly revised working paper by Hunt Allcott. The paper leverages a unique program roll-out in the U.S. to explore the challenges and pitfalls that arise when generalizing impact evaluation results from a handful of sites to a larger context. The leap from impact estimates obtained in one site to the wider world is not always straightforward.