When people say “evidence-based policymaking” or talk about the “credibility revolution,” they are presumably referring to two things: (a) we have (or are trying hard to obtain) better evidence on the impacts of various approaches to solving problems, and (b) we should use that evidence to make better decisions about policy and program design. However, the debate about the Haushofer and Shapiro (2018) paper on the three-year effects of GiveDirectly cash transfers in Kenya taught me that how people interpret the evidence is as important as the underlying evidence itself. The GiveDirectly blog (which I discussed here; GiveDirectly posted an update here) and Justin Sandefur’s recent post on the CGD blog are two good examples.
This is a guest post by Johannes Haushofer and Jeremy Shapiro.
[Update: 11:00 AM on 4/23/2018. Upon the request of the guest bloggers, this post has been updated to include GiveDirectly's updated blog post, published on their website on 4/20/2018, within the text of their post rather than within Özler’s response that follows.]
We’re glad our paper evaluating the long-term impacts of cash transfers has been discussed by GiveDirectly (the source of the transfers) itself and Berk Özler at the World Bank, among others (GiveDirectly has since updated their take on our paper). Given the different perspectives put forth, we wanted to share a few clarifications and our view of the big picture implications.
From the DIME Analytics Weekly newsletter (which I recommend subscribing to): applyCodebook – One of the biggest time-wasters for research assistants is typing "rename", "recode", "label var", and so on to get a dataset in shape. Even worse is reading through it all later and figuring out what's been done. Freshly released on the World Bank Stata GitHub thanks to the DIME Analytics team is applyCodebook, a utility that reads an .xlsx "codebook" file and applies all the renames, recodes, variable labels, and value labels you need in one go. It takes one line in Stata to use, and all the edits are reviewable variable-by-variable in Excel. If you haven't visited the GitHub repo before, don't forget to browse all the utilities on offer and feel free to fork and submit your own on the dev branch. Happy coding!
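The codebook-driven idea generalizes beyond Stata: keep the renames and labels in a reviewable table, and apply them in one pass. Here is a minimal Python sketch of that pattern, using plain dictionaries in place of an .xlsx file; the `old_name`/`new_name`/`label` schema and the `apply_codebook` helper are illustrative assumptions, not applyCodebook’s actual file format or syntax.

```python
# Sketch of a codebook-driven cleaning pass: rename variables and
# collect their labels from a single reviewable "codebook" table.
# (Illustrative only -- applyCodebook's real workflow reads an .xlsx
# codebook and applies the changes inside Stata.)

def apply_codebook(rows, codebook):
    """Rename keys in each data row and collect variable labels."""
    renames = {c["old_name"]: c["new_name"] for c in codebook}
    labels = {c["new_name"]: c["label"] for c in codebook}
    renamed = [
        {renames.get(key, key): value for key, value in row.items()}
        for row in rows
    ]
    return renamed, labels

# Raw survey data with uninformative question codes.
data = [{"q1": 34, "q2": 1}, {"q1": 29, "q2": 0}]

# The "codebook": every rename and label in one place, easy to review.
codebook = [
    {"old_name": "q1", "new_name": "age", "label": "Age in years"},
    {"old_name": "q2", "new_name": "employed", "label": "Currently employed"},
]

clean, labels = apply_codebook(data, codebook)
print(clean[0])           # {'age': 34, 'employed': 1}
print(labels["age"])      # Age in years
```

The point of the design is the same as applyCodebook’s: edits live in a single auditable table rather than scattered across dozens of rename and label commands, so a collaborator can review them variable by variable.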
Is it possible to speed up a justice system? On the Let's Talk Development blog, Kondylis and Corthay document a reform in Senegal that gave judges tools to speed up decisions, to positive effect. The evaluation then led to further legal reform.
"Reviewing thousands of evaluation studies over the years has also given us a profound appreciation of how challenging it is to find interventions...that produce a real improvement in people’s lives." Over at Straight Talk on Evidence, the team highlights the challenge of finding impacts at scale, nodding to Rossi's iron law of evaluation ("The expected value of any net impact assessment of any large scale social program is zero") and the "stainless steel law of evaluation" ("the more technically rigorous the net impact assessment, the more likely are its results to be zero – or no effect"). They give evidence across fields – business, medicine, education, and training. They offer a proposed solution in another post, and Chris Blattman offers a critique in a Twitter thread.
Kate Cronin-Furman and Milli Lake discuss ethical issues in doing fieldwork in fragile and violent contexts.
"What’s the latest research on the quality of governance?" Dan Rogger gives a quick round-up of research presented at a recent conference at Stanford University.
In public procurement, lower transaction costs aren't always better. Over at VoxDev, Ferenc Szucs writes about what procurement records in Hungary teach about open auctions versus discretion. In short, discretion means lower transaction costs, more corruption, higher prices, and inefficient allocation.
Justin Sandefur seeks to give a non-technical explanation of the recent discussion of longer term benefits of cash transfers in Kenya (1. Cash transfers cure poverty. 2. Side effects vary. 3. Symptoms may return when treatment stops.) This is at least partially in response to Berk Özler's dual posts, here and here. Özler adds some additional discussion in this Twitter thread.
Labor-intensive public works (LIPW) programs are a popular policy intended to provide temporary employment opportunities to vulnerable populations through work-intensive projects, such as the development and maintenance of local infrastructure, that do not require special skills. For a review of LIPW programs (design, evidence, and implementation), see Subbarao et al. here. In fragile states, LIPW programs are also presumed to contribute to social and political stability. The developed infrastructure allows for the implementation of other development and peacekeeping activities, while employment opportunities may help prevent at-risk youth from being recruited by armed groups. Despite their popularity and presumed impact on beneficiaries, the evidence base for LIPW programs has remained surprisingly weak.
The Development Impact Evaluation (DIME) unit, in collaboration with the Fragility, Conflict and Violence Cross Cutting Solutions Area (FCV-CSSA) and the Social Protection and Labor Global Practice (SPL-GP), is carrying out a multi-country set of 7 Randomized Controlled Trials (RCTs) of LIPW programs targeting around 40,000 households across 5 countries: Comoros, the Democratic Republic of Congo, Côte d’Ivoire, Egypt, and Tunisia. This initiative is part of a broader research program on Fragility, Conflict and Violence (FCV) — a portfolio of 35 impact evaluations in over 25 countries that focuses on 5 key priority areas: (i) jobs for the poor and at-risk youth; (ii) public sector governance/civil service reforms; (iii) political economy of post-conflict reconstruction; (iv) gender-based violence; and (v) urban crime and violence.
- In The Atlantic – are Jupyter notebooks going to replace PDFs for scientific papers? Konrad Hinsen discusses, noting that the future isn’t here yet.
- On VoxDev, Daniel Bennett discusses how traditional medicine beliefs can hamper hygiene campaigns, and the results of an experiment in Pakistan which used microscopes to actually show people the microbes in standing water and buffalo dung – this led to improvements in hygiene and child health, but only for those without strong beliefs in traditional medicine.
- From IFPRI, a nice pair of summary notes by Berber Kramer and co-authors on testing picture-based crop insurance – where farmers take regular pictures of their crops throughout the growing cycle, and these are then used to assess damage: there is one note on willingness to pay, adverse selection, and moral hazard, and one on the practical feasibility of the approach.
When John Maynard Keynes wrote that “In the long run we are all dead,” he probably didn’t mean a few days or months, notwithstanding a recent “long-term experimental” social psychology study that shows results over a whopping three days. Keynes lived an additional 23 years after publishing his famous statement, so I’ll call 23 years the “Keynes test” for long-run impacts.
In development economics, how long is the long run? I identified every article in three development economics journals that used the term “long run” in its title. The journals were the Journal of Development Economics, Economic Development and Cultural Change, and the World Bank Economic Review. Excluding two book reviews, 38 articles used the term, of which 23 had empirical analysis. (It’s easy to talk about long-run impacts when you’re only speaking theoretically.) Of those 23, 10 were micro and 13 were macro. So this is a small sample. Proceed with caution!
- AEA journals now require registration in the RCT registry: the AEA journals' submission instructions now include: “The American Economic Association operates a Registry for Randomized Controlled Trials (RCTs). In January of 2018, the AEA Executive Committee passed a motion requiring the registration of RCTs for all applicable submissions. If the research in your paper involves an RCT, please register (registration is free) prior to submitting. In the online submission form, you will be required to provide the registration number issued by the Registry. We also kindly ask you to acknowledge compliance by including your number in the introductory footnote of your manuscript.” – note that this registration can still be post-trial registration at this stage, but it should definitely encourage you to register new trials as you start them.
- Marginal Revolution notes a newly published meta-analysis paper that compares RD estimates to RCT estimates on the same data, showing both internal and some external validity of the RD method.