If you pay your survey respondents, you just might get a different answer

When I was doing my dissertation fieldwork, the professor I was working with and I had a fair number of conversations about compensating the respondents in our 15-wave panel survey. We were taking up a fair amount of people's time, and it seemed like not only the right thing to do but also a way to potentially help build trust between our enumerators and the respondents.
 
These arguments are laid out in more depth in an interesting new paper (gated) by Guy Stecklov, Alexander Weinreb and Gero Carletto. Stecklov and co. lay out a number of reasons why you might want to compensate respondents for their time. First, it might reduce non-response. Some folks are more likely to say yes to spending an hour with you when you offer them a thank-you present. Indeed, that is the impact on which most of the existing work (almost all of it in more developed countries) has focused. Second, it might engender better-quality answers. As Stecklov and co. elegantly put it, “by providing an incentive, the interviewer attempts to qualify as someone who is worthy of receiving closely guarded information of the type that is usually withheld from ‘strangers’.”
 
But Stecklov and co. also give us some arguments for why this could go the other way. First, since the enumerator is now giving them a gift, respondents may be more inclined to try to give the answers they think the enumerator is looking for. Second, they cite the experimental work on extrinsic versus intrinsic motivation: introducing a financial, extrinsic motivation may lower accuracy and effort on the part of respondents. (In this vein, it's worth noting that most national surveys do not compensate respondents, and I think part of their logic is that responding, and responding accurately, is a basic civic duty; I know that's what I think when I respond to the census.) This extrinsic-motivation effect can be compounded in a developing-country context, where enumerators are usually more educated (and wealthier) than respondents and may represent a project (or government) with some potential benefit to the respondent in the future.
 
So maybe it's not clear whether incentives are a good idea or not. Let's see what the evidence has to say. But before getting to Stecklov and co.'s experiment, it was really striking to me how limited this literature is. While there is a literature on incentives increasing response rates and (maybe) data quality in more developed countries, there is essentially nothing for less developed countries. Stecklov and co. argue that part of the reason may be that response rates are fairly high to start with: the Demographic and Health Surveys, for example, usually come in around 95 percent.
 
Stecklov and co. are working in India, specifically in two urban centers in Karnataka. The survey they use to examine the effect of incentives was administered to 2,333 households as part of a project on urban property records (only house owners were included, so these are not the poorest respondents). They randomly assigned blocks to either receive an incentive (5 dollars, roughly a day's wage for manual work) or not. The incentive was announced at the start of the interview but not given until the interview was successfully completed.
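 
To make the design concrete, here is a minimal sketch of block-level (cluster) randomization of the incentive. The data frame, block counts, and variable names are hypothetical and purely illustrative; this is not the authors' code.
 
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2018)

# Hypothetical sampling frame: one row per household, each tagged with the
# city block it sits on (block ids and counts are made up for illustration).
households = pd.DataFrame({"block": rng.integers(0, 200, size=2333)})

# Randomize the incentive at the block level, since blocks (not households)
# are the unit of assignment described in the paper: half the blocks get it.
blocks = households["block"].unique()
treated_blocks = rng.choice(blocks, size=len(blocks) // 2, replace=False)
households["incentive"] = households["block"].isin(treated_blocks).astype(int)

print(households.groupby("incentive").size())
```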
 
So, what do they find? The incentive successfully increased the response rate: 99.9 percent in the incentive group against 96 percent in the no-incentive group. This difference was driven entirely by one of the two municipalities; in the other, the response rate was the same (and extremely high) in both groups.
 
Then it gets really interesting.   Stecklov and co. look at how the incentives may have generated systematically different answers across a range of domains.   The first domain that they look at is demographics and social characteristics.    There are no significant differences here, including whether respondents report being from a scheduled caste.  
 
The next realm is political attitudes. Here the questions cover topics such as “how easy is it to hold current elected officials accountable for the duties that they are supposed to perform?”, with respondents answering on a scale of 1 to 5. In this case, the incentive didn't push respondents to be more or less positive, but it did push them to be more extreme in their answers (e.g. more 1s than 3s). Stecklov and co. speculate that the incentive is getting people to say how they really feel rather than giving the easy middle answer.
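 
One simple way to quantify "more extreme" on a 1-to-5 scale is to measure each answer's distance from the scale midpoint and compare averages across the two arms. The sketch below uses simulated responses and hypothetical variable names, not the paper's data.
 
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Simulated 1-5 attitude responses for an incentive and a no-incentive arm.
df = pd.DataFrame({
    "incentive": rng.integers(0, 2, size=2333),
    "attitude": rng.integers(1, 6, size=2333),
})

# "Extremeness" = distance from the scale midpoint (3): a 1 or a 5 scores 2,
# a 3 scores 0. A higher mean in the incentive arm would indicate answers
# piling up at the ends of the scale rather than in the middle.
df["extremeness"] = (df["attitude"] - 3).abs()
print(df.groupby("incentive")["extremeness"].mean())
```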
 
On to household decision making.   Here the incentives don’t result in any significant difference in answers.  
 
Fourth, Stecklov and co. look at attitudes towards the project, both respondents' prior knowledge of it and their expectations of what it will do for them. Here we might really expect some urge to please the enumerator to show through. But no, there is no sign of this: incentives don't change the answers either on knowledge of the project or on expectations.
 
Finally, Stecklov and co. look at income, consumption and assets. Boom! Here the bias comes out. Folks getting the incentive report 11.6 percent lower monthly income. They also report significantly lower consumption, driven by a 12 percent lower level of reported luxury expenditures. And the incentive group also reports 15 percent lower assets. Stecklov and co. take a separate look at clearly observable assets (housing materials, toilet connection, and piped water), and here it is interesting to see that there is no significant difference between the incentive and no-incentive groups. So maybe, in the dimensions the enumerator can see, the incentive folks don't misreport. But in all other dimensions of wealth, it looks like the incentive group is trying to appear poorer to the enumerator.
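 
For readers curious how a number like "11.6 percent lower reported income" is typically estimated, here is a sketch of a log-outcome regression with standard errors clustered at the block, the unit of randomization. The data are simulated and the variable names hypothetical; this illustrates the standard approach, not the authors' actual code.
 
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 2333

# Simulated stand-in for the survey: block ids, block-level incentive
# assignment, and reported monthly income that is lower in the incentive arm.
df = pd.DataFrame({"block": rng.integers(0, 200, size=n)})
treated_blocks = rng.choice(df["block"].unique(), size=100, replace=False)
df["incentive"] = df["block"].isin(treated_blocks).astype(int)
df["income"] = np.exp(rng.normal(9.0, 0.5, size=n) - 0.12 * df["incentive"])

# Regress log reported income on the incentive dummy; cluster the standard
# errors at the block level because blocks, not households, were randomized.
df["log_income"] = np.log(df["income"])
fit = smf.ols("log_income ~ incentive", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["block"]}
)
print(fit.params["incentive"])  # around -0.12, i.e. roughly 12 percent lower reported income
```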
 
So the final score seems to be: incentives create no urge among respondents to ingratiate themselves with the project, give us somewhat more extreme political views, and create a serious incentive to look poorer. (Technical note: if you're worried about sample selection driving the results, don't be. Stecklov and co. show us bounds, and they show that the results hold in the community where there is no difference in response rates.)
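 
The post doesn't spell out which bounding procedure the paper uses, so purely as an illustration: below is a sketch of trimming bounds in the spirit of Lee (2009), one standard way to handle the worry that the incentive changed who responds. The function and the toy inputs are my own and may differ from what Stecklov and co. actually do.
 
```python
import numpy as np

def trimming_bounds(y_treat, y_control, n_treat_assigned, n_control_assigned):
    """Lee (2009)-style bounds on a difference in means when treatment raises
    the response rate. y_treat / y_control are outcomes observed among
    responders; n_*_assigned are the numbers originally assigned to each arm."""
    rr_t = len(y_treat) / n_treat_assigned      # response rate, incentive arm
    rr_c = len(y_control) / n_control_assigned  # response rate, control arm
    p = (rr_t - rr_c) / rr_t                    # share of "extra" responders to trim
    k = int(round(p * len(y_treat)))
    y_sorted = np.sort(y_treat)
    mean_c = np.mean(y_control)
    lower = np.mean(y_sorted[: len(y_treat) - k]) - mean_c  # drop the top p share
    upper = np.mean(y_sorted[k:]) - mean_c                  # drop the bottom p share
    return lower, upper

# Toy usage with made-up numbers that mirror the response-rate gap in the paper.
rng = np.random.default_rng(3)
y_t = rng.normal(8.9, 0.5, size=999)   # log income, incentive-arm responders (99.9% of 1,000)
y_c = rng.normal(9.0, 0.5, size=960)   # log income, control-arm responders (96% of 1,000)
print(trimming_bounds(y_t, y_c, 1000, 1000))
```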
 
This is a really interesting and provocative result. So what's next? Stecklov and co. suggest a number of further tests, and I'll throw a couple in too (aside from the obvious one of repeating this in another context). First, we could vary the amount: how much of an incentive makes a difference? Second, I've often given gifts in kind rather than cash; does cash versus kind matter? Third, does it matter when in the interview you give the incentive, at the start or at the end? Fourth, how does this play out in panel versus one-off surveys? Fifth, many surveys I've participated in enter you in a raffle for some prize (which I never seem to win!); does that generate a different pattern than a guaranteed payment?
 
Let’s see where this goes, but in the meantime, don’t be surprised when your incentivized survey has different wealth levels than the nationally representative, unincentivized survey.    
 
Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economists Office

Anonymous
February 22, 2018

Great summary and discussion! Thanks Markus

Alexander A Weinreb
February 22, 2018

An important question, Jo. We can't know for sure since, like almost every researcher, we don't directly measure income streams. But there are two very strong signs that we're right. In the group that didn't receive an incentive, we found (1) a much stronger correlation between income and expenditures, and (2) a stronger correlation between visible and non-visible assets.

kay
March 12, 2018

Can you explain how you measure a stronger correlation between income and expenditures if you are not measuring income? Likewise with non-visible assets: how are you measuring this? I assume that visible assets are the three mentioned in the article.

Jo Sanson
February 22, 2018

Thanks, this is really interesting. But it's not clear to me how it was established that the incentivized group was the one that misreported assets and income, rather than it being the unincentivized group that was misreporting up. How do you know that the group without the incentives didn't feel compelled to overreport their income?

Nic Owsley
March 10, 2018

Interesting. Receiving the money could have even primed 'feeling richer'.

Guy Stecklov
February 22, 2018

We appreciate the attention given to our paper. We also agree with many of the suggestions offered by Markus for future directions. Regarding the concern just raised about how we can know this is under- and not over-reporting: we took this concern seriously. Several basic arguments are made in the paper, although we acknowledge these are only validated indirectly. The most pertinent I'll copy more or less directly from the (sadly) gated paper, which I'm happy to share by direct email:
(a) In general, evidence suggests that income and expenditures tend to be underreported in Indian survey data (Bakshi, 2008; Deaton and Kozel, 2005). Furthermore, earlier evidence argues that income is typically more prone to misreporting and underreporting than are expenditures (Deaton, 1997). This suggests that the level of correspondence between income and expenditures can be viewed as a marker of data consistency… the gap between income and expenditures is greater in treatment households suggests that incentives are leading to more underreporting of income…
(b) If the promise of an incentive is driving a desire to present oneself as needy and deserving of further assistance, we would expect to find that the effect is largest for reports on non-essential expenditures. This is exactly what we show in Table 8: there is little influence of incentives on essentials or basic commodities, but expenditures on luxury items are strongly affected by the incentive. They drive much of the results in terms of expenditures. The same can be said about the least visible asset categories such as appliances, jewellery and recreation (see Table 9).
(c) Incentives have no influence on how visible indicators of wealth are reported (Table 10) despite the significant reduction in reported income, expenditures and non-visible assets.
(d) Finally, the lack of any clear directional effect of incentives across a wide range of demographic, political and socially sensitive variables means that incentives are not generating a broadly felt increase in exchange motivations: one of the suggested mechanisms through which it should lead to more honest data reporting. This itself reduces the likelihood that exchange or obligations arising from the promise of payment are central factors in how incentives affect behaviour.

Osman Siddiqi
February 23, 2018

I would add varying the type of in kind incentive. Lentils/rice not the same as squeaky toys.

Guy Stecklov
February 23, 2018

Agreed. This was part of the original plan but had to be put aside. Worth noting that a 2017 paper in the journal Social Science Computer Review by Mueleman and co-authors offers one hint on what you might expect: the size of the incentive matters. Their study is based on a web-based survey in Ghana, so it is hard to know how well it might replicate in field surveys that involve direct social interaction.

Bob
July 22, 2020

This is a great article on survey incentives and their effect on results. However, my opinion is that we should pay respondents, as it takes significant time and effort for them to answer these surveys.

We @ https://www.awwro.com/ pay our respondents. If the client is paying why shouldn't the respondents get paid?

Elias
March 11, 2021

A great summary indeed - thanks! I wonder if you have come across additional evidence in recent years, since writing this blog? Perhaps the WB was able to test for effects elsewhere?