
Facility-based data collection: a data methods bleg

By Berk Ozler

Today, I come to our readers with a request. I have a ton of experience with household and individual survey data collection, and likewise with biomarkers, assessments/tests at home, etc. However, I have less experience with facility-based data collection, especially when it is high frequency. For example, we do have a lot of data from the childcare centers in our study in Malawi, but we had to visit each facility once per round of data collection and spend a day collecting all the facility-level data, including classroom observations. What would you do if you needed high-frequency data (daily, weekly, or monthly) that is a bit richer than what the facilities collect themselves for their own administrative purposes, without breaking the bank?

Weekly links February 23: tell better stories, hot days = lower profits, women need more customers, and more...

By David McKenzie

If you pay your survey respondents, you just might get a different answer

By Markus Goldstein

When I was doing my dissertation fieldwork, the professor I was working with and I had a fair number of conversations about compensating the respondents in our 15-wave panel survey. We were taking up a fair amount of people's time, and it seemed like not only the right thing to do, but also a way to potentially help grow the trust between our enumerators and the respondents.

The Toyota way or Entropy? What did we find when we went back 8-9 years after improving management in Indian factories?

By David McKenzie

Between 2008 and 2010, we hired a multinational consulting firm to implement an intensive management intervention in Indian textile weaving plants. Both treatment and control firms received a one-month diagnostic, and then treatment firms received four months of intervention. We found (ungated) that poorly managed firms could have their management substantially improved, and that this improvement resulted in a reduction in quality defects, less excess inventory, and an improvement in productivity.

Should we expect this improvement in management to last? One view is the "Toyota way," in which systems put in place for measuring and monitoring operations and quality launch a continuous cycle of improvement. The alternative is entropy, a gradual decline back into disorder – one estimate by a prominent consulting firm is that two-thirds of transformation initiatives ultimately fail. In a new working paper, Nick Bloom, Aprajit Mahajan, John Roberts and I examine what happened to the firms in our Indian management experiment over the longer term.

Weekly links Feb 16: when scale-ups don’t pan out the way you hoped, syllabi galore, do you suffer from this mystery illness? and more...

By David McKenzie
  • Interesting blog from the Global Innovation Fund, discussing results from an attempt to replicate the Kenyan sugar daddies RCT in Botswana, why they got different results, and how policy is reacting to this. “At some point, every evidence-driven practitioner is sure to face the same challenge: what do you do in the face of evaluation results that suggest that your program may not have the impact you hoped for? It’s a question that tests the fundamental character and convictions of our organizations. Young 1ove answered that question, and met that test, with tremendous courage. In the face of ambiguous results regarding the impact of No Sugar, they did something rare and remarkable: they changed course, and encouraged government partners and donors to do so as well”
  • How can we help farmers access agricultural extension information via mobile phone? Shawn Cole (Harvard Business School) and Michael Kremer (Harvard University) gave a recent talk on this, drawing on work they’ve been doing in India, Kenya, Rwanda, and elsewhere. Video here and paper on some of the India results here.

Cash Transfers Increase Trust in Local Government

By David Evans

Cash transfers seem to be everywhere. A recent statistic suggests that 130 low- and middle-income countries have an unconditional cash transfer program, and 63 have a conditional cash transfer program. We know that cash transfers do good things: the children of beneficiaries have better access to health and education services (and in some cases, better outcomes), and there is some evidence of positive longer run impacts. (There is also some evidence that long-term impacts are quite modest, and even mixed evidence within one study, so the jury’s still out on that one.)

In our conversations with government about cash transfers, one of the concerns that arose was how they would affect the social fabric. Might cash transfers negatively affect how citizens interact with each other, or with their government? In our new paper, “Cash Transfers Increase Trust in Local Government” (can you guess the finding from the title?) – which we authored together with Brian Holtemeyer – we provide evidence from Tanzania that cash transfers increase the trust that citizens have in government. They may even help governments work a little bit better.

Your go-to regression specification is biased: here’s the simple way to fix it

By Berk Ozler

Today, I am writing about something many of you already know. You’ve probably been hearing about it for 5-10 years. But you still ignore it. Well, now that the evidence against it has mounted enough, and the fix is simple enough, I am here to urge you to tweak the regression specifications in your program evaluations.

Weekly links Feb 9: tracking Ghanaian youth as they age, envying Danish data, coding better, communicating less badly, and more....

By David McKenzie
  • DEC has a fantastic lecture series going on at the moment. This week we had Pascaline Dupas. Videos of the talks are online. Of particular interest to our readers will be her discussion of the techniques they used to re-interview 95% of Ghanaian youth after 10 years, and of how they messed up asking about labor market outcomes the first time they tried, given the sporadic nature of work for many youth (and something I hadn’t thought about – people working for the government whose payments have been delayed, so they are owed back wages but didn’t actually get paid in the last month).
  • In VoxEU, revealed vs. reported preference: when asked whether they saved or spent their stimulus payments, people’s answers were qualitatively informative about the actual behavior seen in observed spending data; and when asked how much they spent, their answers give a reasonable measure of the average spending propensity – but these questions aren’t so good at capturing which households respond more.

Beyond the trite “I was there” photo: Using photos and videos to communicate your research

By David McKenzie

One signature feature of many academic presentations by development economists is the use of photos. Go to a labor or health economics seminar and you will almost never see a photo of a U.S. worker or U.S. family participating in some early childhood program, but go to a development seminar and odds are incredibly high that you will see shiny happy people holding hands. This is often the source of much eye-rolling among non-development economists (and even among ourselves), so I thought I’d speak up a little in defense of the use of photos, as well as share some recent experiences with trying to use them better.