
attrition

Dealing with attrition in field experiments

Berk Ozler

Here is a familiar scenario for those running field experiments: you're conducting a study with a treatment arm and a comparison arm, and measuring your main outcomes with surveys and/or biomarker data collection, meaning that you need to contact the subjects (unlike, say, using administrative data tied to their national identity numbers) – preferably in person. You know that you will, inevitably, lose some subjects from both groups to follow-up: they will have moved, be temporarily away, refuse to answer, have died, etc. In some of these cases there is nothing more you can do, but in others you can try harder: you can wait for them to come back and revisit; you can try to track them to their new location; and so on. You can do this at different intensities (try really hard or not so much), within different boundaries (for everyone in the study district, region, or country, but not for those farther away), and for different samples (for everyone or for a random sub-sample).

Question: suppose you decide that you have the budget to do everything you can to find those not interviewed during the first pass through the study areas (it doesn't matter whether that budget covers a randomly chosen sub-sample or everyone), i.e. an intense tracking exercise to reduce the rate of attrition. In addition to everything else you can do to track subjects in both groups, you have a tool that you can use only for those in the treatment arm (say, your treatment was group-based therapy for teen mums, and you think that the mentors for these groups may have key contact information for treatment-group subjects who moved; there were no placebo groups in the control arm, i.e. no counterpart mentors). Do you use this source to track subjects – even though it is only available for the treatment group?
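Why this choice is not innocuous: the toy simulation below (with made-up numbers, not taken from the post) gives movers lower outcomes, and shows how a tracking source that works only in the treatment arm lowers overall attrition but changes the composition of the observed treatment sample relative to the observed comparison sample, biasing the simple difference in means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: movers are harder to find and have lower outcomes,
# and the treatment-only tool (the mentors) recovers mostly treated movers.
d = rng.integers(0, 2, size=n)                    # 1 = treatment, 0 = comparison
moved = rng.random(n) < 0.3                       # 30% of subjects moved
y = 1.0 * d - 1.0 * moved + rng.normal(0, 1, n)   # true treatment effect = 1.0

# Symmetric tracking: everyone who stayed is found, plus half of the movers.
found = ~moved | (rng.random(n) < 0.5)
# The treatment-only tool additionally recovers 80% of treated movers.
found_tool = found | ((d == 1) & moved & (rng.random(n) < 0.8))

for label, obs in [("symmetric tracking", found), ("plus treatment-only tool", found_tool)]:
    est = y[obs & (d == 1)].mean() - y[obs & (d == 0)].mean()
    print(f"{label}: attrition T={1 - obs[d == 1].mean():.2f}, "
          f"C={1 - obs[d == 0].mean():.2f}, estimated effect={est:.2f} (truth 1.00)")
```

Attrition falls in the treatment arm, but the observed treatment group now contains relatively more (lower-outcome) movers than the observed comparison group, so the estimated effect drifts away from the truth.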

The Iron Law of ALMPs: Offer a Program to 100 People, maybe 2 get jobs

David McKenzie

I have just finished writing up and expanding my recent policy talk on active labor market policies (ALMPs) into a research paper (ungated version) that provides a critical overview of impact evaluations on this topic. While my talk focused mostly on summarizing my own work, for this review paper I looked much more closely at the growing number of randomized experiments evaluating these policies in developing countries. Much of this literature is very new: of the 24 RCTs whose results I summarize in several tables, 16 were published in 2015 or later, and only one before 2011.

I focus on three main types of ALMPs: vocational training programs, wage subsidies, and job search assistance services like screening and matching. I'll summarize a few findings and implications for evaluations that might be of most interest to our blog readers – the paper, of course, provides a lot more detail and discusses further some of the implications for policy and for other types of ALMPs.

Dialing for Data: Enterprise Edition

Markus Goldstein
Surveys are expensive. And, in sub-Saharan Africa in particular, a big part of that cost is logistics – fuel, car hire, and the like. So, with increasing mobile phone coverage, more folks are thinking about, and actually using, phones in lieu of in-person interviews to complete surveys. The question is: what does that do to data quality?

A curated list of our postings on Measurement and Survey Design

David McKenzie
This list is a companion to our curated list on technical topics. It puts together our posts on measurement, survey design, sampling, survey checks, managing survey teams, reducing attrition, and all the behind-the-scenes work required to get the data needed for impact evaluations.
Measurement

Help for attrition is just a phone call away – a new bounding approach to help deal with non-response

David McKenzie

Attrition is a bugbear for most impact evaluations and can cause even the best-designed experiments to be subject to potential bias. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand, and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
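To fix ideas, here is a minimal sketch (my own illustrative code, not the authors') of the trimming logic that this kind of attempts-based approach builds on: the arm with the higher response rate is trimmed by dropping its hardest-to-reach respondents, as measured by the number of survey attempts, until the two arms' response rates match, and the difference in means is then computed on the trimmed respondent samples.

```python
import numpy as np

def attempts_trimmed_difference(y, d, attempts, responded):
    """Trim-on-number-of-attempts comparison (illustrative sketch only).

    y         : outcomes (only used for respondents)
    d         : 1 = treatment, 0 = control
    attempts  : number of survey attempts before the subject responded
    responded : 1 if the subject was eventually interviewed
    """
    y, d, attempts, responded = map(np.asarray, (y, d, attempts, responded))
    resp_rate = {g: responded[d == g].mean() for g in (0, 1)}
    hi = max(resp_rate, key=resp_rate.get)   # arm with the higher response rate
    lo_rate = min(resp_rate.values())

    means = {}
    for g in (0, 1):
        mask = (d == g) & (responded == 1)
        yg, ag = y[mask], attempts[mask]
        if g == hi:
            # Drop the hardest-to-reach respondents (most attempts) in the
            # higher-response arm until its effective response rate matches
            # the other arm's.
            keep = int(round(len(yg) * lo_rate / resp_rate[hi]))
            order = np.argsort(ag, kind="stable")   # fewest attempts first
            yg = yg[order][:keep]
        means[g] = yg.mean()

    return means[1] - means[0]
```

The paper itself goes further, turning this logic into formal bounds on the treatment effect; the sketch only shows the basic trimming mechanics.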