
If you pay your survey respondents, you just might get a different answer

When I was doing my dissertation fieldwork, the professor I was working with and I had a fair number of conversations about compensating the respondents in our 15-wave panel survey. We were taking up a fair amount of people’s time, and compensation seemed like not only the right thing to do but also a way to help build trust between our enumerators and the respondents.
 
These arguments are laid out in more depth in an interesting new paper (gated) by Guy Stecklov, Alexander Weinreb and Gero Carletto. Stecklov and co. lay out a number of reasons why you might want to compensate respondents for their time. First up: it might reduce non-response. Some folks are more likely to say yes to spending an hour with you when you offer them a thank-you present. Indeed, that’s the outcome on which most of the existing work on this (almost all of it in more developed countries) has focused. Second, it might engender better-quality answers. As Stecklov and co. elegantly put it, “by providing an incentive, the interviewer attempts to qualify as someone who is worthy of receiving closely guarded information of the type that is usually withheld from ‘strangers’.”
 
But Stecklov and co. also give us some arguments for why this could go the other way. First, since the enumerator is now giving them a gift, respondents may be more inclined to try to give the answers they think the enumerator is looking for. Second, they cite the experimental work on extrinsic versus intrinsic motivation and how introducing this financial extrinsic motivation may lower accuracy and effort on the part of the respondents. (In this vein, it’s worth noting that most national surveys do not compensate respondents, and I think part of their logic is that responding, and responding accurately, is a basic civic duty – I know that’s what I think when I respond to the census.) This extrinsic-motivation effect can be compounded in a developing-country context, where the enumerators are usually more educated (and wealthier) than the respondents and may represent a project (or government) with some potential benefit to the respondent in the future.
 
So it’s not clear whether incentives are a good idea or not. Let’s see what the evidence has to say. But before getting to Stecklov and co.’s experiment, it was really striking to me how limited this literature is. While there is a literature on incentives increasing response rates and (maybe) data quality in more developed countries, there’s nothing for less developed countries. Stecklov and co. argue that part of the reason for this may be that response rates are fairly high to start with – for example, the Demographic and Health Surveys usually come in at around 95 percent.
 
Stecklov and co. work in India, specifically in two urban centers in Karnataka. The survey they use to examine the effect of incentives was administered to 2,333 households as part of a project on urban property records (only house owners were included, so these are not the poorest respondents). They randomly assigned blocks to either receive an incentive (5 dollars, roughly a day’s wage for manual work) or not. The incentive was announced at the start of the interview but not given until the interview was successfully completed.
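To make the design concrete, here is a minimal sketch of what block-level (cluster) random assignment of the incentive looks like in practice – the household roster, block IDs, and variable names are made up for illustration and are not the authors’ code or data:

```python
# A minimal sketch (not the authors' code) of block-level random assignment:
# the incentive is assigned to whole blocks, so every household in a block
# shares its block's treatment status. All names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical household roster: each household belongs to a survey block.
households = pd.DataFrame({
    "hh_id": range(1, 13),
    "block_id": [1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4],
})

# Randomize at the block level, not the household level.
blocks = households["block_id"].unique()
treated_blocks = rng.choice(blocks, size=len(blocks) // 2, replace=False)
households["incentive"] = households["block_id"].isin(treated_blocks).astype(int)

print(households)
```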
 
So, what do they find? The incentive successfully increased the response rate: 99.9 percent in the incentive group against 96 percent in the no-incentive group. This was driven entirely by one of the two municipalities – in the other, the response rates were equal (and extremely high).
 
Then it gets really interesting. Stecklov and co. look at how the incentives may have generated systematically different answers across a range of domains. The first domain they look at is demographic and social characteristics. There are no significant differences here, including in whether respondents report being from a scheduled caste.
 
The next realm is political attitudes. Here the questions cover topics such as “how easy is it to hold current elected officials accountable for the duties that they are supposed to perform?”, with respondents answering on a scale of 1 to 5. In this case, the incentive didn’t push respondents to be more or less positive, but it did push them to give more extreme answers (e.g. more 1s than 3s). Stecklov and co. speculate that the incentive is getting people to say how they really feel rather than giving the easy middle answer.
 
On to household decision-making. Here the incentives don’t result in any significant differences in answers.
 
Fourth, Stecklov and co. look at attitudes towards the project – both respondents’ prior knowledge of it and their expectations of what the project will do for them. Here we might really expect some urge to please the enumerator to show through. But no, there is no sign of this – incentives don’t change the answers on either knowledge of the project or expectations.
 
Finally, Stecklov and co. look at income, consumption and assets. Boom! Here the bias comes out. Folks getting the incentive report 11.6 percent lower monthly income. They also report significantly lower consumption, driven by a 12 percent lower level of reported luxury expenditures. And the incentive group also reports 15 percent lower asset holdings. Stecklov and co. take a separate look at clearly observable assets (housing materials, toilet connection, and piped water), and here it’s interesting to see that there is no significant difference between the incentive and no-incentive groups. So maybe, on the dimensions the enumerator can see, the incentive folks don’t misreport. But on all other dimensions of wealth, it looks like the incentive group is trying to appear poorer to the enumerator.
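For readers who want to see what such a comparison looks like mechanically, here is a minimal sketch on simulated data – the variable names (log_income, incentive, block_id) and the data-generating process are assumptions for illustration, not the authors’ code or data – of estimating the incentive gap in reported income, clustering standard errors at the block level to match the block-level assignment:

```python
# A minimal sketch on simulated data; variable names and the data-generating
# process are assumptions, not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"block_id": rng.integers(0, 50, size=n)})
df["incentive"] = (df["block_id"] % 2).astype(int)  # fake block-level treatment
# Simulate somewhat lower reported log income in the incentive group.
df["log_income"] = 9.0 - 0.12 * df["incentive"] + rng.normal(0, 0.5, size=n)

# OLS of log income on the incentive dummy, clustering by block because
# the incentive was assigned at the block level.
fit = smf.ols("log_income ~ incentive", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["block_id"]}
)
print(fit.summary())
```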
 
So the final score seems to be: incentives create no urge among respondents to ingratiate themselves with the project, give us somewhat more extreme political views, and create a serious incentive to look poorer. (Technical note: if you’re worried about sample selection driving the reporting differences, don’t be – Stecklov and co. show us bounds and also show that the results hold in the municipality where there is no difference in response rates.)
 
This is a really interesting and provocative result. So what’s next? Stecklov and co. suggest a number of further tests to be done, and I’ll throw a couple in too (aside from the obvious one of repeating this in another context). First, we could vary the amount – how much of an incentive makes a difference? Second, I’ve often given gifts in kind rather than cash – does cash versus kind matter? Third, does it matter when in the interview you give the incentive – at the start or at the end? Fourth, how does this play out in panel versus one-off surveys? Fifth, many surveys I’ve participated in enter you in a raffle for some prize (which I never seem to win!) – does that generate a different pattern than a sure payment?
 
Let’s see where this goes, but in the meantime, don’t be surprised when your incentivized survey shows different wealth levels than the nationally representative, unincentivized survey.
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
