This is a guest post by Andy Foster, Dean Karlan, and Ted Miguel.
The world is a messy place. What happens when the results of an empirical study are mushy or inconsistent with prevailing theories? Unfortunately, papers with unclear or null results often go unpublished, even if they have rigorous research designs and good data. In such cases, the research community is typically only left to consider the papers that tell a “neat” and clean story. When economic and social policy relies on academic knowledge, this publication bias can be costly to society.
- Among the many posts on international women’s day, the one our readers might find most useful is this piece on the measurement of poverty and gender by Carolina Sanchez and Ana-Maria Munoz-Boudet: “No, 70% of the world’s poor aren’t women, but this doesn’t mean poverty isn’t sexist”
- Emergency loans that are automatically given out when disaster hits as a substitute for microinsurance – summarized by Feed the Future – “Results ... show that the availability of emergency loans has had a big effect on how these farmers manage risk. Households who knew they were pre-qualified planted about 25 percent more rice than households who were not offered the emergency loan” (h/t Mushfiq Mobarak).
- Video and slides from Ana Fernandes’ policy research talk on exporter dynamics, superstar firms, and trade policy – it is stunning how large a share of exports from many developing countries comes from the top 1% of exporters, or even just the top 5 exporters.
- Have you questioned your life choices enough lately? If not, watch the video of Lant Pritchett’s talk last month at NYU’s DRI on “The Debate about RCTs in Development is over. We won. They lost”
Last Thursday I attended a conference on AI and Development organized by CEGA, DIME, and the World Bank’s Big Data groups (website, where they will also add video). This followed a World Bank policy research talk last week by Olivier Dupriez on “Machine Learning and the Future of Poverty Prediction” (video, slides). These events highlighted a lot of fast-emerging work, which I thought, given this blog’s focus, I would try to summarize through the lens of thinking about how it might help us in designing development interventions and impact evaluations.
A typical impact evaluation assigns a treatment Treat to a sample S, and is interested in estimating something like:

Y(i,t) = b(i,t)*Treat(i,t) + D'X(i,t) for units i in the sample S
We can think of machine learning and artificial intelligence as possibly affecting every term in this expression:
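To fix ideas, the estimating equation above can be sketched as a simple regression of the outcome Y on the treatment indicator and controls X. The sketch below is illustrative only: it simulates hypothetical data with a homogeneous treatment effect (the true b is set to 1.5 here purely for the example, and is not from the post) and recovers b by OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: X are baseline covariates, Treat is randomly assigned
X = rng.normal(size=(n, 2))
treat = rng.integers(0, 2, size=n).astype(float)
true_b = 1.5  # assumed treatment effect, chosen for illustration
y = true_b * treat + X @ np.array([0.5, -0.3]) + rng.normal(size=n)

# OLS: regress Y on a constant, Treat, and the controls X
Z = np.column_stack([np.ones(n), treat, X])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
b_hat = coef[1]  # estimated treatment effect
print(f"estimated b: {b_hat:.2f}")
```

In practice b(i,t) may vary across units and time, which is exactly where machine-learning methods for heterogeneous treatment effects come in.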
- On VoxDev, Tanguy Bernard and co-authors on an experiment that provided quality certification for onions in Senegal, causing farmers to invest more in quality and raising farmer incomes...but with the sad post-note “In this particular case, the reform was discontinued under pressure from the long-distance middlemen who gain from the lack of transparency on markets.”
- Following on the heels of Berk’s post, Science has a story “researchers debate whether journals should publish signed peer reviews” which discusses how this debate is also taking place in other fields.
In Gaile Parkin's novel Baking Cakes in Kigali, two women living in Kigali, Rwanda – Angel and Sophie – argue over the salary paid to a development worker: "Perhaps these big organisations needed to pay big salaries if they wanted to attract the right kind of people; but Sophie had said that they were the wrong kind of people if they would not do the work for less. Ultimately they had concluded that the desire to make the world a better place was not something that belonged in a person's pocket. No, it belonged in a person's heart."
It's not a leap to believe – like Angel and Sophie – that teachers should want to help students learn, health workers should want to help people heal, and other workers in service delivery should want to deliver that service. But how do you attract and motivate those passionate public servants? Here is some recent research that sheds light on the topic.
Today, I come to our readers with a request. I have a ton of experience with household and individual survey data collection. Ditto with biomarkers, assessments/tests at home, etc. However, I have less experience with facility-based data collection, especially when it is high frequency. For example, we do have a lot of data from the childcare centers in our study in Malawi, but we had to visit each facility once at each round of data collection and spend a day to collect all the facility-level data, including classroom observations, etc. What would you do if you needed high-frequency data (daily, weekly, or monthly) that is a bit richer than what facilities collect themselves for their own administrative purposes, without breaking the bank?
- On the Voices blog, Arianna Legovini discusses DIME’s program of work on edutainment, and why information interventions that “lacked inspiring narratives, and were communicated through outdated and uninteresting outlets such as billboards and leaflets” may need to get replaced by working with professional storytellers. [edit: they seem to be changing the link for this, here is an alternative link]
- On VoxDev, Morgan Hardy discusses some of her new work with Gisella Kagy on how female-owned garment firms in Ghana are demand-constrained.
- On the Econ that matters blog, Nouhoum Traore and Jeremy Foltz look at the impact of high temperatures on firms in Cote d’Ivoire, finding that years with more days above 27°C are correlated with lower firm revenues and profits, and with more firm exit.
Between 2008 and 2010, we hired a multinational consulting firm to implement an intensive management intervention in Indian textile weaving plants. Both treatment and control firms received a one-month diagnostic, and then treatment firms received four months of intervention. We found (ungated) that poorly managed firms could have their management substantially improved, and that this improvement resulted in a reduction in quality defects, less excess inventory, and an improvement in productivity.
Should we expect this improvement in management to last? One view is the “Toyota way”, in which systems put in place for measuring and monitoring operations and quality launch a continuous cycle of improvement. An alternative is entropy, a gradual decline back into disorder – one estimate by a prominent consulting firm is that two-thirds of transformation initiatives ultimately fail. In a new working paper, Nick Bloom, Aprajit Mahajan, John Roberts and I examine what happened to the firms in our Indian management experiment over the longer term.