

How can machine learning and artificial intelligence be used in development interventions and impact evaluations?

By David McKenzie

Last Thursday I attended a conference on AI and Development organized by CEGA, DIME, and the World Bank’s Big Data groups (website, where they will also add video). This followed a World Bank policy research talk last week by Olivier Dupriez on “Machine Learning and the Future of Poverty Prediction” (video, slides). These events highlighted a lot of fast-emerging work, which, given this blog’s focus, I thought I would try to summarize through the lens of how it might help us design development interventions and impact evaluations.

A typical impact evaluation works with a sample S, assigns a treatment Treat, and is interested in estimating something like:
Y(i,t) = b(i,t)*Treat(i,t) + D'X(i,t)   for units i in the sample S
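To make this concrete, here is a minimal sketch of how one might estimate such a specification by OLS; the data file, column names, and clustering variable are illustrative assumptions on my part, not anything taken from the talks.

```python
# Minimal sketch of estimating Y(i,t) = b*Treat(i,t) + D'X(i,t) by OLS.
# The file and column names ("y", "treat", "x1", "x2", "unit") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_data.csv")  # hypothetical panel of units i over periods t

model = smf.ols("y ~ treat + x1 + x2", data=df)
# Cluster standard errors at the unit level
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["unit"]})

print(result.params["treat"])  # estimated treatment effect b
print(result.bse["treat"])     # its clustered standard error
```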
We can think of machine learning and artificial intelligence as possibly affecting every term in this expression:

Weekly links March 2: quality onions, don’t just try to prove something you already know, jobs cost a lot to create, and more...

By David McKenzie

How to attract and motivate passionate public service providers

By David Evans

In Gaile Parkin's novel Baking Cakes in Kigali, two women living in Kigali, Rwanda – Angel and Sophie – argue over the salary paid to a development worker: "Perhaps these big organisations needed to pay big salaries if they wanted to attract the right kind of people; but Sophie had said that they were the wrong kind of people if they would not do the work for less. Ultimately they had concluded that the desire to make the world a better place was not something that belonged in a person's pocket. No, it belonged in a person's heart."
 
It's not a leap to believe – like Angel and Sophie – that teachers should want to help students learn, health workers should want to help people heal, and other workers in service delivery should want to deliver that service. But how do you attract and motivate those passionate public servants? Here is some recent research that sheds light on the topic.
 

Facility-based data collection: a data methods bleg

By Berk Ozler

Today, I come to our readers with a request. I have a ton of experience with household and individual survey data collection. Ditto with biomarkers, assessments/tests at home, etc. However, I have less experience with facility-based data collection, especially when it is high frequency. For example, we do have a lot of data from the childcare centers in our study in Malawi, but we had to visit each facility once at each round of data collection and spend a day collecting all the facility-level data, including classroom observations, etc. What would you do if you needed high-frequency data (daily, weekly, or monthly) that is a bit richer than what the facility collects itself for its own administrative purposes, without breaking the bank?

Weekly links February 23: tell better stories, hot days = lower profits, women need more customers, and more...

By David McKenzie

If you pay your survey respondents, you just might get a different answer

By Markus Goldstein
When I was doing my dissertation fieldwork, the professor I was working with and I had a fair number of conversations about compensating the respondents in our 15-wave panel survey. We were taking up a fair amount of people’s time, and it seemed like not only the right thing to do, but also a way to potentially help build trust between our enumerators and the respondents.
 

The Toyota way or Entropy? What did we find when we went back 8-9 years after improving management in Indian factories?

By David McKenzie

Between 2008 and 2010, we hired a multinational consulting firm to implement an intensive management intervention in Indian textile weaving plants. Both treatment and control firms received a one-month diagnostic, and then treatment firms received four months of intervention. We found (ungated) that poorly managed firms could have their management substantially improved, and that this improvement resulted in a reduction in quality defects, less excess inventory, and an improvement in productivity.

Should we expect this improvement in management to last? One view is the “Toyota way”, in which the systems put in place for measuring and monitoring operations and quality launch a continuous cycle of improvement. The alternative is entropy, a gradual decline back into disorder – one estimate by a prominent consulting firm is that two-thirds of transformation initiatives ultimately fail. In a new working paper, Nick Bloom, Aprajit Mahajan, John Roberts and I examine what happened to the firms in our Indian management experiment over the longer term.

Weekly links Feb 16: when scale-ups don’t pan out the way you hoped, syllabi galore, do you suffer from this mystery illness? and more...

By David McKenzie
  • Interesting blog from the Global Innovation Fund, discussing results from an attempt to replicate the Kenyan sugar daddies RCT in Botswana, why they got different results, and how policy is reacting to this. “At some point, every evidence-driven practitioner is sure to face the same challenge: what do you do in the face of evaluation results that suggest that your program may not have the impact you hoped for? It’s a question that tests the fundamental character and convictions of our organizations. Young 1ove answered that question, and met that test, with tremendous courage. In the face of ambiguous results regarding the impact of No Sugar, they did something rare and remarkable: they changed course, and encouraged government partners and donors to do so as well”
  • How to help farmers to access agricultural extension information via mobile phone? Shawn Cole (Harvard Business School) and Michael Kremer (Harvard University) gave a recent talk on this, drawing on work they’ve been doing in India, Kenya, Rwanda, and elsewhere. Video here and paper on some of the India results here.

Cash Transfers Increase Trust in Local Government

By David Evans

Cash transfers seem to be everywhere. A recent statistic suggests that 130 low- and middle-income countries have an unconditional cash transfer program, and 63 have a conditional cash transfer program. We know that cash transfers do good things: the children of beneficiaries have better access to health and education services (and in some cases, better outcomes), and there is some evidence of positive longer run impacts. (There is also some evidence that long-term impacts are quite modest, and even mixed evidence within one study, so the jury’s still out on that one.)

In our conversations with government about cash transfers, one of the concerns that arose was how they would affect the social fabric. Might cash transfers negatively affect how citizens interact with each other, or with their government? In our new paper, “Cash Transfers Increase Trust in Local Government” (can you guess the finding from the title?) – which we authored together with Brian Holtemeyer – we provide evidence from Tanzania that cash transfers increase the trust that citizens have in government. They may even help governments work a little bit better.
