Last Thursday I attended a conference on AI and Development organized by CEGA, DIME, and the World Bank’s Big Data groups (website, where they will also add video). This followed a World Bank policy research talk last week by Olivier Dupriez on “Machine Learning and the Future of Poverty Prediction” (video, slides). These events highlighted a lot of fast-emerging work, which I thought, given this blog’s focus, I would try to summarize through the lens of thinking about how it might help us in designing development interventions and impact evaluations.
A typical impact evaluation takes a sample S, assigns a treatment Treat to some units, and is interested in estimating something like:
Y(i,t) = b(i,t)*Treat(i,t) + D'X(i,t), for units i in the sample S
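As a concrete illustration, here is a minimal sketch of estimating the treatment effect b in a regression of this form by OLS. The data are simulated (the sample size, covariates, and true coefficients below are invented for the example, not taken from any study discussed in the post):

```python
import numpy as np

# Simulate data for the regression Y = b*Treat + D'X + e
rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, size=n)       # random treatment assignment
x = rng.normal(size=(n, 2))              # covariates X
b_true = 2.0                             # true treatment effect (illustrative)
d_true = np.array([0.5, -1.0])           # true covariate coefficients (illustrative)
y = b_true * treat + x @ d_true + rng.normal(size=n)

# OLS: regress Y on a constant, Treat, and X
design = np.column_stack([np.ones(n), treat, x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
b_hat = coef[1]                          # estimated treatment effect
print(b_hat)
```

With random assignment, b_hat should recover the true effect up to sampling noise; in practice one would use a package such as statsmodels to also get standard errors.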
We can think of machine learning and artificial intelligence as possibly affecting every term in this expression:
- On VoxDev, Tanguy Bernard and co-authors write about an experiment that provided quality certification for onions in Senegal, causing farmers to invest more in quality and raising farmer incomes...but with the sad post-note “In this particular case, the reform was discontinued under pressure from the long-distance middlemen who gain from the lack of transparency on markets.”
- Following on the heels of Berk’s post, Science has a story “researchers debate whether journals should publish signed peer reviews” which discusses how this debate is also taking place in other fields.
In Gaile Parkin's novel Baking Cakes in Kigali, two women living in Kigali, Rwanda – Angel and Sophie – argue over the salary paid to a development worker: "Perhaps these big organisations needed to pay big salaries if they wanted to attract the right kind of people; but Sophie had said that they were the wrong kind of people if they would not do the work for less. Ultimately they had concluded that the desire to make the world a better place was not something that belonged in a person's pocket. No, it belonged in a person's heart."
It's not a leap to believe – like Angel and Sophie – that teachers should want to help students learn, health workers should want to help people heal, and other workers in service delivery should want to deliver that service. But how do you attract and motivate those passionate public servants? Here is some recent research that sheds light on the topic.
Today, I come to our readers with a request. I have a ton of experience with household and individual survey data collection. Ditto with biomarkers, assessments/tests at home, etc. However, I have less experience with facility-based data collection, especially when it is high frequency. For example, we do have a lot of data from the childcare centers in our study in Malawi, but we had to visit each facility once at each round of data collection and spend a day to collect all the facility-level data, including classroom observations, etc. What would you do if you needed high-frequency data (daily, weekly, or monthly) that is a bit richer than what the facility collects itself for its own administrative purposes, without breaking the bank?
- On the Voices blog, Arianna Legovini discusses DIME’s program of work on edutainment, and why information interventions that “lacked inspiring narratives, and were communicated through outdated and uninteresting outlets such as billboards and leaflets” may need to get replaced by working with professional storytellers. [edit: they seem to be changing the link for this, here is an alternative link]
- On VoxDev, Morgan Hardy discusses some of her new work with Gisella Kagy on how female-owned garment firms in Ghana are demand-constrained.
- On the Econ that matters blog, Nouhoum Traore and Jeremy Foltz look at the impact of high temperatures on firms in Cote d’Ivoire, finding that years with more days above 27°C are correlated with lower firm revenues and profits, and more firm exit.
Between 2008 and 2010, we hired a multinational consulting firm to implement an intensive management intervention in Indian textile weaving plants. Both treatment and control firms received a one-month diagnostic, and then treatment firms received four months of intervention. We found (ungated) that poorly managed firms could have their management substantially improved, and that this improvement resulted in a reduction in quality defects, less excess inventory, and an improvement in productivity.
Should we expect this improvement in management to last? One view is the “Toyota way”, in which systems put in place for measuring and monitoring operations and quality launch a continuous cycle of improvement. But an alternative is that of entropy, or a gradual decline back into disorder – one estimate by a prominent consulting firm is that two-thirds of transformation initiatives ultimately fail. In a new working paper, Nick Bloom, Aprajit Mahajan, John Roberts and I examine what happened to the firms in our Indian management experiment over the longer term.
- Interesting blog from the Global Innovation Fund, discussing results from an attempt to replicate the Kenyan sugar daddies RCT in Botswana, why they got different results, and how policy is reacting to this. “At some point, every evidence-driven practitioner is sure to face the same challenge: what do you do in the face of evaluation results that suggest that your program may not have the impact you hoped for? It’s a question that tests the fundamental character and convictions of our organizations. Young 1ove answered that question, and met that test, with tremendous courage. In the face of ambiguous results regarding the impact of No Sugar, they did something rare and remarkable: they changed course, and encouraged government partners and donors to do so as well”
- How can we help farmers access agricultural extension information via mobile phone? Shawn Cole (Harvard Business School) and Michael Kremer (Harvard University) gave a recent talk on this, drawing on work they’ve been doing in India, Kenya, Rwanda, and elsewhere. Video here and paper on some of the India results here.
Cash transfers seem to be everywhere. A recent statistic suggests that 130 low- and middle-income countries have an unconditional cash transfer program, and 63 have a conditional cash transfer program. We know that cash transfers do good things: the children of beneficiaries have better access to health and education services (and in some cases, better outcomes), and there is some evidence of positive longer run impacts. (There is also some evidence that long-term impacts are quite modest, and even mixed evidence within one study, so the jury’s still out on that one.)
In our conversations with government about cash transfers, one of the concerns that arose was how they would affect the social fabric. Might cash transfers negatively affect how citizens interact with each other, or with their government? In our new paper, “Cash Transfers Increase Trust in Local Government” (can you guess the finding from the title?) – which we authored together with Brian Holtemeyer – we provide evidence from Tanzania that cash transfers increase the trust that citizens have in government. They may even help governments work a little bit better.