Published on Development Impact

Survey attrition and Freudenschade

No, dear reader, it's not a typo. Bear with me and I'll explain.

This week I was reading a forthcoming paper (gated) by Guy Stecklov, Alexander Weinreb, and Paul Winters on survey attrition in Mexico. Their idea starts with the fact that some programs are targeted, i.e. they pick some people in the community for benefits while excluding others. And just maybe, the folks who aren't included feel envy or resentment when they watch others benefiting. This is freudenschade. And it can make it harder to do impact evaluation.

The setting: PROGRESA (a large cash transfer program in Mexico). PROGRESA targeted communities based on poverty, and then within each community used a proxy means test (assets, etc.) to target individuals. The targeting criteria weren't public because the government didn't want folks to game the system. And there was a randomized evaluation (at the community level). So the distribution of individuals looks like this (adapted from their paper):
[Figure: households classified by community treatment status and household eligibility, adapted from Stecklov and co.'s paper]
Groups 2 and 3 are important. Group (2) is excluded because their community was randomized out, and when I do impact evaluations I am often worried these folks won't answer our survey because they are annoyed that they didn't win the program lottery (Stecklov and co. use the term statistical exclusion for these folks). Group (3) are folks who are excluded because the program thought they didn't need the benefits as much as others (needs-based exclusion). But they are often folks we will survey anyhow, to look for things like spillover or general equilibrium effects.
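If it helps to see the design as data, here is a minimal sketch in Python of how the cells fall out of crossing community treatment status with household eligibility. The column names and the labels for the other two cells are my assumptions; groups 2 and 3 follow the paper's terms above.

```python
import pandas as pd

# One hypothetical household per cell of the design.
df = pd.DataFrame({
    "treatment_community": [1, 0, 1, 0],  # was the community randomized in?
    "eligible": [1, 1, 0, 0],             # did the household pass the proxy means test?
})

def classify(row):
    if row["treatment_community"] and row["eligible"]:
        return "beneficiary"                    # my label, not the paper's
    if not row["treatment_community"] and row["eligible"]:
        return "group 2: statistical exclusion" # randomized out
    if row["treatment_community"] and not row["eligible"]:
        return "group 3: needs-based exclusion" # failed the proxy means test
    return "ineligible in a control community"  # my label, not the paper's

df["group"] = df.apply(classify, axis=1)
print(df)
```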

Using data from the PROGRESA evaluation, Stecklov and co. look at whether there is differential attrition across these groups. They start with the very early data: the 1998 baseline and the 1999 first follow-up. Looking at overall attrition, treatment status didn't matter (the standard check, and a good outcome). But when they unpack the ineligible households, we see status start to matter. Ineligible households in control communities were 5.6 percentage points more likely to attrit than eligible households. And it was worse in treatment communities: ineligibles there were an additional 4.3 percentage points (significant at 10 percent) more likely to attrit.
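For readers who want to run this kind of check on their own data, here is a minimal sketch in Python (the column names, attrited, treatment_community, ineligible, community_id, and the file name are hypothetical). The first regression is the standard overall check; the second adds the interaction that unpacks attrition by eligibility, with standard errors clustered at the community level since that is where randomization happened.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per baseline household.
# attrited            -- 1 if the household was not re-interviewed at follow-up
# treatment_community -- 1 if the community was randomized into the program
# ineligible          -- 1 if the household failed the proxy means test
# community_id        -- cluster identifier (level of randomization)
df = pd.read_csv("progresa_panel.csv")  # placeholder file name

# Standard check: does treatment status predict attrition overall?
overall = smf.ols("attrited ~ treatment_community", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["community_id"]}
)

# Unpacked check: are ineligible households more likely to attrit,
# and differentially so in treatment communities?
unpacked = smf.ols(
    "attrited ~ treatment_community * ineligible", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["community_id"]})

print(overall.summary())
print(unpacked.summary())
```

Rerunning the same regressions on a later round would give you the phase-in comparison that comes next.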
Stecklov and co. go on to look at what happens in the next survey round (2000), as control communities (and the eligible households within them) were phased in. Attrition among the ineligibles in control communities goes up (by 6.6 percentage points), and the ineligibles in treatment and former control communities converge in terms of attrition. Bottom line: in the case of PROGRESA, needs-based exclusion engenders a higher degree of survey attrition than the fact that your community was randomized out.

Why? Stecklov and co. turn to qualitative work by Michelle Adato. The quote they use captures it well, as folks said it was “not fair or nice to give something to some people and not to others; that beneficiaries eat better and buy clothing for their children, while non-beneficiaries watch, unable to buy the same food or clothing…” Couple that with non-transparent (for good reason) selection criteria and it's beginning to smell a lot like freudenschade. For additional exploration, Stecklov and co. turn to the metadata and look at the reasons enumerators recorded for these refusals. It turns out that ineligibles in treatment communities were much more likely to outright refuse an interview.

It also turns out that non-response was higher among the wealthier ineligibles. Stecklov and co. regress survey response on a wealth index within the ineligible sample and find that higher wealth significantly lowers response. Their argument: these folks knew they were never going to get the program, and so had no reason to answer the survey on the off-chance that it increased their chances of participation later.
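A sketch of that last step, under the same hypothetical column names as above plus a wealth_index column; a linear probability model restricted to the ineligible sample:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("progresa_panel.csv")  # same hypothetical file as above

# Restrict to ineligible households; responded = 1 if re-interviewed.
inelig = df[df["ineligible"] == 1].copy()
inelig["responded"] = 1 - inelig["attrited"]

# Does wealth (a hypothetical asset-based index, "wealth_index") predict
# survey response among the ineligible, holding treatment status fixed?
wealth_model = smf.ols(
    "responded ~ wealth_index + treatment_community", data=inelig
).fit(cov_type="cluster", cov_kwds={"groups": inelig["community_id"]})

print(wealth_model.summary())
```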
To me, this study raises serious concerns about possible biases when we are looking at spillover effects of programs. It would be worth looking at some other settings to see how widespread this is; some clear axes of examination include variation in the transparency of exclusion by the program, as well as the relative size, visibility, and desirability of the benefits. Stecklov and co. also suggest that giving a gift at the time of the survey might help mitigate the freudenschade. As for me, I am going to go out and work on practicing mudita (the sympathetic joy one takes in others' good fortune).
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
