Many education investments focus on the first years of primary education or – even before that – early childhood education. The logic is intuitive: without a solid foundation, it's hard for children and youth to build the later skills that rest on it. If you can't decipher letters, then it's going to be tough to learn from a science textbook, or even a math textbook. But it's important to remember that for most "investors" (whether governments, parents, or the children themselves), the most basic skills aren't the ultimate goal. The objective is better life outcomes. Most of the justification for these early interventions is that they will translate into better lives once these children grow up.
DI: Please provide a short paragraph describing what you do in this job, and give us a sense of what a typical day or week might look like for you.

My job is to conduct independent, rigorous impact and performance evaluations of social programs in developing countries. Most of this work is conducted under contract to US government agencies (mostly MCC and USAID) and various foundations, which issue requests for proposals to evaluate their programs. In my eight years at Mathematica, I've worked on evaluations in Asia, Africa, and Eastern Europe, in topic areas including agriculture, primary education, vocational training, maternal and child health, and land. As senior researcher on an evaluation team, I'm typically responsible for technical leadership of all aspects of an evaluation, including study design, data collection, and final analysis and reporting. Last week was fairly typical and included work on designing a randomized controlled trial of an anti-child-labor program, drafting a quantitative survey of vocational education students, and planning the analysis of survey data from farmers in Morocco.
- An excellent Trade Talks podcast episode features a detailed discussion with Dave Donaldson on his work on the impact of railroads on development in India and in U.S. economic history.
The latest Journal of Economic Perspectives includes:
- Acemoglu provides a summary of the work that led to Donaldson receiving the John Bates Clark Medal.
- Several papers on risk preferences, including discussion of whether risk preferences are stable and how to think about them if they are not. An interesting sidenote is a comment on how much measurement error there is when using incentivized lotteries – the correlations between risk premia measured for the same individual using different experimental choices can be quite low, and correlations tend to be higher for survey measures – along with speculation that the measurement error may be worse in developing countries: "[A] large share of the papers that document contradictory effects of violent conflict or natural disasters use experimental data from developing countries, but these tools were typically developed in the context of high-income countries. They may be more likely to produce noisy results in samples that are less educated, partly illiterate, or less used to abstract thinking."
- A series of papers on how much the U.S. gains from trade.
- Over at VoxDev, Jack & Jayachandran show how prizes can help to improve water conservation in Zambia, despite free-riding dynamics within households.
- Also at VoxDev, Clement Imbert on how NREGA changed rural and urban wages and reduced rural-to-urban migration in India.
- With all the evidence on what microcredit doesn't do, Dave Evans summarizes new work from Burke, Bergquist, and Miguel on how microcredit to farmers at harvest time in Kenya can yield high profits. When many farmers receive the loans, profits fall, but benefits to non-borrowers rise as price fluctuations in the market are reduced.
A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.
How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?
Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.
But – you might say – that all seems quite costly (it is) and complicated (it is). Why not just ask the patients about their experience? Enter the "patient satisfaction survey," which goes back at least to the 1980s in a clearly recognizable form. (I'm sure someone has been asking about patient satisfaction in some form for as long as there have been medical providers.) Patient satisfaction surveys have pros and cons. On the pro side, health care is a service, and a better-delivered service should result in higher patient satisfaction. If so, then patient satisfaction could be a useful summary measure, capturing an array of elements of the service: Were you treated with respect? Did you have to wait too long? On the con side, patients may not be able to gauge key elements of the service (is the health professional giving good advice?), or they may value services that are not medically recommended (just give me a shot, nurse!).
Two recently published studies in Nigeria provide evidence that both gives pause to our use of patient satisfaction surveys and points to better ways forward. Here is what we’ve learned:
When people say "evidence-based policymaking" or talk about the "credibility revolution," they generally mean two things: (a) we have (or are trying hard to have) better evidence on the impacts of various approaches to solving problems, and (b) we should use that evidence to make better decisions about policy and program design. However, the debate about the Haushofer and Shapiro (2018) paper on the three-year effects of GiveDirectly cash transfers in Kenya taught me that how people interpret the evidence is as important as the underlying evidence itself. The GiveDirectly blog post (which I discussed here; GiveDirectly posted an update here) and Justin Sandefur's recent post on the CGD blog are two good examples.
This is a guest post by Johannes Haushofer and Jeremy Shapiro
[Update: 11:00 AM on 4/23/2018. Upon the request of the guest bloggers, this post has been updated to include GiveDirectly's updated blog post, published on their website on 4/20/2018, within the text of their post rather than within Özler’s response that follows.]
We’re glad our paper evaluating the long-term impacts of cash transfers has been discussed by GiveDirectly (the source of the transfers) itself and Berk Özler at the World Bank, among others (GiveDirectly has since updated their take on our paper). Given the different perspectives put forth, we wanted to share a few clarifications and our view of the big picture implications.
From the DIME Analytics Weekly newsletter (which I recommend subscribing to): applyCodebook – One of the biggest time-wasters for research assistants is typing "rename", "recode", "label var", and so on to get a dataset in shape. Even worse is reading through it all later and figuring out what's been done. Freshly released on the World Bank Stata GitHub thanks to the DIME Analytics team is applyCodebook, a utility that reads an .xlsx "codebook" file and applies all the renames, recodes, variable labels, and value labels you need in one go. It takes one line in Stata to use, and all the edits are reviewable variable-by-variable in Excel. If you haven't visited the GitHub repo before, don't forget to browse all the utilities on offer and feel free to fork and submit your own on the dev branch. Happy coding!
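To make concrete what the utility saves, here is a minimal sketch of the kind of repetitive Stata code that a codebook file replaces (the variable names, labels, and recodes below are hypothetical, invented purely for illustration):

    * A minimal sketch of the boilerplate that applyCodebook automates.
    * Variable names, labels, and codes here are hypothetical examples.

    * Rename raw survey variables to meaningful names
    rename q101 age
    rename q102 occupation

    * Attach variable labels
    label var age "Respondent age in years"
    label var occupation "Primary occupation"

    * Define and attach value labels
    label define occ_lbl 1 "Farmer" 2 "Trader" 3 "Other"
    label values occupation occ_lbl

    * Recode "don't know" and "refused" codes to missing
    recode age (98 99 = .)

With applyCodebook, edits like these live as rows in the .xlsx codebook and are applied in a single command; see the GitHub repo for the exact syntax and a template codebook file.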
Is it possible to speed up a justice system? On the Let's Talk Development blog, Kondylis and Corthay document a reform in Senegal that gave judges tools to speed up decisions, to positive effect. The evaluation then led to further legal reform.
"Reviewing thousands of evaluation studies over the years has also given us a profound appreciation of how challenging it is to find interventions...that produce a real improvement in people’s lives." Over at Straight Talk on Evidence, the team highlights the challenge of finding impacts at scale, nodding to Rossi's iron law of evaluation ("The expected value of any net impact assessment of any large scale social program is zero") and the "stainless steel law of evaluation" ("the more technically rigorous the net impact assessment, the more likely are its results to be zero – or no effect"). They give evidence across fields – business, medicine, education, and training. They offer a proposed solution in another post, and Chris Blattman offers a critique in a Twitter thread.
Kate Cronin-Furman and Milli Lake discuss ethical issues in doing fieldwork in fragile and violent contexts.
"What’s the latest research on the quality of governance?" Dan Rogger gives a quick round-up of research presented at a recent conference at Stanford University.
In public procurement, lower transaction costs aren't always better. Over at VoxDev, Ferenc Szucs writes about what procurement records in Hungary teach us about open auctions versus discretion. In short, discretion means lower transaction costs but also more corruption, higher prices, and inefficient allocation.
Justin Sandefur seeks to give a non-technical explanation of the recent discussion of longer-term benefits of cash transfers in Kenya (1. Cash transfers cure poverty. 2. Side effects vary. 3. Symptoms may return when treatment stops.) This is at least partially in response to Berk Özler's two posts, here and here. Özler adds some additional discussion in this Twitter thread.