- Pre-registration should be a plan, not a prison – from the Center for Open Science
- The Atlantic on how female mentors help female engineering students, based on a paper forthcoming in PNAS – the study has only n=150 at one college, with students assigned to male mentors, female mentors, or no mentors: 100% of women with female mentors remained in engineering majors at the end of year 1, compared with 82% of those with male mentors and 89% of those with no mentors
- Eva Vivalt gives four reasons your study should collect priors
“Just because it worked in Brazil doesn’t mean it will work in Burundi.” That’s true. And hopefully obvious. But some version of this critique continues to be leveled at researchers who carry out impact evaluations around the world. Institutions vary. Levels of education vary. Cultures vary. So no, an effective program to empower girls in Uganda might not be effective in Tanzania.
Of course, policymakers get this. As Markus Goldstein put it, “Policy makers are generally not morons. They are acutely aware of the contexts in which they operate and they generally don’t copy a program verbatim. Instead, they usually take lessons about what worked and how it worked and adapt them to their situation.”
In the latest Stanford Social Innovation Review, Mary Ann Bates and Rachel Glennerster from J-PAL propose a four-step strategy to help policymakers through that process of appropriately adapting results from one context to another.
- external validity
The papers and proceedings issue of the AER has several papers of interest to development economists, including:
- Esther Duflo’s lecture on “The Economist as Plumber” – “details that we as economists might consider relatively uninteresting are in fact extraordinarily important in determining the final impact of a policy or a regulation, while some of the theoretical issues we worry about most may not be that relevant” … “an economist who cares about the details of policy implementation will need to pay attention to many details and complications, some of which may appear to be far below their pay grade (e.g., the font size on posters) or far beyond their competence level (e.g., the intricacy of government budgeting in a federal system).”
- Sandip Sukhtankar has a paper on replications in development economics, part of two sessions on replication in economics.
- Shimeles et al. on tax auditing and tax compliance experiments in Ethiopia: “Businesses subject to threats increased their profit tax payable by 38 percent, while those that received a persuasion letter increased by 32 percent, compared to the control group.”
- 4 papers on maternal and child health in developing countries (Uganda, Kenya, India, Zambia).
- Following up on Berk’s post on list experiments, 538 provides another example, using list experiments to identify how many Americans are atheists.
- The Economist on how governments are using nudges – with both developed and developing country examples.
- The equivalent of an EGOT for economists? Dave and Markus have come up with the EJAQ or REJAQ for economists who have published in all the top-4 or top-5 journals.
- Call for papers: TCD/LSE/CEPR conference on Development economics to be held at Trinity College, Dublin on September 18-19. Imran Rasul and I are keynote speakers.
About a year ago I reviewed Angela Duckworth’s book on grit. At the time I noted that there were compelling ideas, but that two big issues were that her self-assessed 10-item Grit scale could be very gameable, and that there was really limited rigorous evidence as to whether efforts to improve grit have lasting impacts.
A cool new paper by Sule Alan, Teodora Boneva, and Seda Ertac makes excellent progress on both fronts. They conduct a large-scale experiment in Turkey with almost 3000 fourth-graders (8-10 year olds) in over 100 classrooms in 52 schools (randomization was at the school level, with 23 schools assigned to treatment).
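The design randomizes at the school level rather than the child level, so every classroom and child simply inherits its school's assignment. A minimal sketch of that kind of cluster assignment (the school names and seed here are hypothetical, used only for illustration):

```python
import random

# Hypothetical illustration of school-level (cluster) randomization:
# 52 schools, 23 drawn into treatment, and each child's status is
# simply their school's status.
random.seed(1)  # arbitrary seed for reproducibility
schools = [f"school_{i}" for i in range(52)]
treated = set(random.sample(schools, 23))
assignment = {s: ("treatment" if s in treated else "control") for s in schools}

def child_status(school_id: str) -> str:
    """A child's treatment status is determined entirely by their school."""
    return assignment[school_id]

n_treated = sum(v == "treatment" for v in assignment.values())
print(n_treated)  # 23
```

Because assignment varies only across schools, standard errors in the analysis have to be clustered at (or aggregated to) the school level.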
- On the Future Development blog, Steve Radelet provides a summary set of responses to the blanket “aid doesn’t work” critique
- Berk’s post this week on list randomization experiments had some of the best comments and discussion we have had for a while. Thanks to our readers! Clearly we just need to make our posts even more geeky than usual to get good discussion going.
- The New York Times discusses the Suri and Jack work on M-Pesa, and then the different innovations that have followed M-Pesa and make use of its payment infrastructure.
- Tim Taylor on the economics of the ‘stans.
About a year ago, I wrote a blog post on issues surrounding data collection and measurement. In it, I talked about “list experiments” for sensitive questions, which I was not sold on at the time. However, now that I have a bunch of studies going to the field at different stages of data collection, many of which concern sensitive topics among adolescent female target populations, I am paying closer attention to them. In reading and thinking about how to implement these methods in our surveys, I came up with a bunch of questions about their optimal implementation. In addition, there is probably more to be learned about how to improve these methods further, opening up the possibility of experimenting with them when we can. Below are a bunch of things that I am thinking about and, as we still have some time before our data collection tools are finalized, you, our readers, have a chance to help shape them with your comments and feedback.
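For readers new to the technique: in a list experiment, control respondents count how many of J innocuous items apply to them, while treatment respondents see the same list plus the sensitive item; the difference in mean counts estimates the sensitive item's prevalence without anyone revealing an individual answer. A minimal simulation of that difference-in-means estimator (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

J = 4                   # number of innocuous list items
true_prevalence = 0.30  # assumed share for whom the sensitive item applies
n = 5000                # respondents per arm

# Each respondent reports only the COUNT of items that apply to them.
# Innocuous items follow the same distribution in both arms.
control = rng.binomial(J, 0.5, size=n)
treatment = rng.binomial(J, 0.5, size=n) + rng.binomial(1, true_prevalence, size=n)

# The treatment-control difference in mean counts estimates prevalence
estimate = treatment.mean() - control.mean()
print(round(estimate, 2))  # close to true_prevalence
```

The estimator is unbiased under the usual assumptions (no design effects, truthful counts), but the variance is much larger than for a direct question, which is one reason sample-size and item-selection choices matter so much in practice.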
- At VoxEU, Martin Ravallion discusses how many of the arguments against universal basic income are really about strawmen that overstate the effectiveness of targeted transfers.
- Bruce Wydick on fake news, narrative, science and truth.
- The new issue of the Journal of Economic Perspectives has a symposium on recent ideas in econometrics, including (among others) Athey and Imbens on Causality and Policy Evaluations; Low and Meghir on Structural Models; and Mullainathan and Spiess on machine learning.
What is the signal we should infer from a paper using a novel method that is marketed as a way to improve transparency in research?
I got to thinking about this issue when seeing a lot of reactions on Twitter like “Awesome John List!”, “This is brilliant”, etc. about a new paper by Luigi Butera and John List that investigates, in a lab experiment, how cooperation in an allocation game is affected by Knightian uncertainty/ambiguity. Contrary to what the authors had expected, they find that adding uncertainty increases cooperation. The bit they are getting plaudits for is the following, from the introduction: