
Notes from the Field: How to incentivize your survey team


Recently I was spending some time with a survey firm in Tanzania, pre-testing a survey. I got to talking with one of the folks working at the firm about how they compensated their enumerators. He made it clear that they pay a fixed efficiency wage (i.e., a set amount per week, above the market-clearing wage), with some pretty close supervision. On the other hand, I've worked with firms that pay a clear piece rate, again with some supervision. While I lean towards the efficiency wage approach, it's not clear to me which is best.

This boils down to a principal-agent problem, one where effort is unobserved and the outcome may not be apparent until it is too late for you to do anything about it. Let's take a closer look at the pros and cons of each approach. The fixed efficiency wage makes it clear that you value this work and makes the interviewer more cautious about messing things up – not only in implementing individual questions, but in bigger things like fudging an entire interview. However, as Dale Whittington points out in a nice discussion of some of these issues, you don't want to set the wage so high that the enumerators think you are an idiot (particularly if you are doing a labor survey). Chris Udry and I adopted the efficiency (fixed) wage approach when we did a survey in Ghana, and I think Whittington has a point that non-monetary incentives also matter a lot for getting higher quality even under this approach. We read all of the questionnaires soon after the enumerators brought them in, and sat down with them individually when we had questions. We ran repeated rounds, and I think this process helped get the enumerators into a frame of mind where they would suggest questions or flag others that weren't working more willingly than if we had paid them a piece rate. Overall, the right level of efficiency wage, combined with close attention to and appreciation of the quality of the enumerators' work, is perhaps more likely to build a sense of a team – with some potential benefits.

However, the fixed efficiency wage definitely can remove the incentive for speed. So of course, the natural alternative is the piece rate, which I have seen combined with some kind of payment for low error rates (with errors defined as missed questions, messed-up skip codes, and other mechanical problems). While this clearly gets you a core set of data in a more timely fashion, it runs the risk of enumerators skipping respondents who are more difficult to track down, and it cuts down on the enumerator's incentive to probe (indeed, in the more extreme case, once they discover the skip code that gets them out of 10 pages, you can bet that will be a common answer). One variant on the individual piece rate would be to move to team-based incentives – but these solve neither the difficult-to-find-respondent problem nor the probing problem. Of course you could specify these in the incentives – but then you head towards a fundamental issue with this approach: you need to spend a fair amount of time specifying a range of parameters for the contract. As an important aside, this all gets rather tricky when you think about competitive bidding for a survey. Since labor costs will be one of the two big costs (transport being the other), a firm could pay a piece rate that is right at the market-clearing wage and win on price – but quality will suffer. The main problem is that "quality" at this point is rather hard to quantify – and this makes it hard to convince those overseeing the competitive bidding.
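
To give a flavor of what "specifying a range of parameters" means in practice, here is a minimal sketch (in Python, with purely illustrative numbers – none of them come from an actual contract) of a piece-rate-plus-quality-bonus payment rule; the per-interview rate, the bonus, and the error threshold are all assumptions you would have to negotiate and write down explicitly.

```python
# Hypothetical piece-rate contract with a quality bonus.
# All parameters (rates, thresholds) are illustrative, not from this post.

def enumerator_pay(completed_interviews: int,
                   error_rate: float,
                   piece_rate: float = 5.00,      # pay per completed interview
                   quality_bonus: float = 1.50,   # extra per interview if quality is good
                   max_error_rate: float = 0.02): # missed questions, broken skips, etc.
    """Return total pay: a piece rate, plus a bonus only if the
    mechanical error rate stays below the agreed threshold."""
    pay = completed_interviews * piece_rate
    if error_rate <= max_error_rate:
        pay += completed_interviews * quality_bonus
    return pay

# Example: 40 interviews at a 1% error rate vs. 40 at a 5% error rate.
print(enumerator_pay(40, 0.01))  # 260.0 (bonus earned)
print(enumerator_pay(40, 0.05))  # 200.0 (bonus lost)
```

Even this toy version forces you to pin down what counts as an "error" and where the threshold sits – exactly the contracting overhead described above.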

Both the piece rate and efficiency wage approaches have some common elements to consider.   The first of these is “trust but verify.”   That is, to what extent do you want to revisit households to check the answers on the survey? There are good arguments to do this under both approaches, and they are stronger under some type of piece rate contract.   In the case of the efficiency wage, if it’s done in an explicit “I am monitoring you” fashion, then to some degree it undermines the potential team spirit thing you might have going on.   But it might be worth it for the strong incentive not to mess up. However, my experience with call-backs is that aside from the clear check as to whether the enumerator ever showed up, a lot of other answers are sufficiently exposed to the vagaries of respondent moods and errors that nailing down enumerator effort is tough (not impossible, but tough). Regardless of the degree of monitoring, one penalty I have seen on almost every survey I’ve worked on is that people get fired for making up data.  
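
As a concrete illustration of the "trust but verify" step, here is a minimal sketch of drawing random back-checks, assuming a hypothetical log of completed interviews keyed by enumerator; the 10 percent share and the data layout are assumptions for illustration, not a recommendation from this post.

```python
import random
from collections import defaultdict

# Hypothetical completed-interview log: (enumerator, household_id) pairs.
completed = [("A", 101), ("A", 102), ("A", 103), ("B", 201), ("B", 202)]

def draw_backchecks(completed, share=0.10, seed=42):
    """Randomly pick a fixed share of each enumerator's interviews for revisits,
    so every enumerator faces the same probability of being checked."""
    rng = random.Random(seed)
    by_enum = defaultdict(list)
    for enum, hh in completed:
        by_enum[enum].append(hh)
    picks = {}
    for enum, hhs in by_enum.items():
        k = max(1, round(share * len(hhs)))  # always revisit at least one
        picks[enum] = rng.sample(hhs, k)
    return picks

print(draw_backchecks(completed))
```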

The second thing that runs common to most contracts is retention and longer-term incentives. In my experience, whether piece rate or efficiency wage, part of the salary is kept until the end (be it explicitly linked to performance or not). This has two obvious benefits: 1) it's a real pain to train enumerators in the middle of fielding a survey (or to find out whether that substitute is still unemployed – which may be a bad sign in itself), and 2) it sets up another reward for not committing significant survey malfeasance. Side story: One of my favorite enumerators on our Ghana survey figured out we were holding our grant funds in dollars. He also figured out that at that point Ghanaian treasury bills were yielding 30 percent while inflation was down to about 11 percent. He asked Chris and me for his survey-end bonus early on and put it in T-bills (we had to respect his economic logic over our potential incentives). Now he's a forensic accountant for the US Army.
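
For what it's worth, his arithmetic checks out; a quick sketch using the rates quoted in the anecdote (the exact Fisher calculation comes out a bit below the simple difference):

```python
# Rough check of the enumerator's logic, using the rates quoted above.
nominal = 0.30    # Ghanaian T-bill yield at the time
inflation = 0.11  # inflation at the time

approx_real = nominal - inflation                 # ~0.19, the back-of-envelope gap
exact_real = (1 + nominal) / (1 + inflation) - 1  # ~0.171, exact Fisher calculation
print(round(approx_real, 3), round(exact_real, 3))
```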

Ultimately, the type of contract you opt for and the incentives you put in place will depend on what kind of data you're after. The more the answers depend on probing, on establishing a good rapport with respondents, and on careful attention to nuance and detail, the more carefully you want to think about the trade-offs between a piece-rate approach and a fixed efficiency wage. Anyhow, this might be a good place to start a discussion on tips/thoughts/observations – any ideas?

Comments

Submitted by Nathan on
I saw this paper, which experimentally tests part of the question you are asking, presented recently.

"Goals (th)at Work," by Sebastian J. Goerg and Sebastian Kube

Abstract: A randomized field experiment is used to investigate the connection between work goals, monetary incentives and work performance. Workers are observed in a natural work environment where they have to do a simple but effort-intense task. Output is perfectly observable and workers are paid according to a piece-rate contract. While a regular piece rate serves as a benchmark, in some treatments the piece rate is paid conditional on reaching a pre-specified goal. We observe that the additional introduction of personal work goals leads to a significant output increase. Interestingly, the effect persists even if meeting the output goal is not connected with monetary consequences. The positive effect of goals does not only prevail if they are endogenously chosen by the workers, but also if goals are set exogenously by the principal - although in the latter case, the exact size of the goal plays a crucial role.

Thanks Nathan. The Whittington piece I linked to talks a lot about the importance of non-monetary goals -- he has a nice example of a chart showing progress, which he put on the wall. My concern here is that with surveys, output is observable, but important dimensions of quality aren't observable until later (unless you want to spend days thinking up higher-order validity checks).

Submitted by Alexis on
In some DHS datasets there is a bump for females at ages 50-54 that may be due to the fact that interviewers have to fill in a shorter questionnaire for women over 50. See the Armenia, Ethiopia, and Nepal surveys released lately.
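
One simple screen for this kind of interviewer-driven age displacement is to compare counts just below and just above the eligibility cutoff. A minimal sketch, assuming a hypothetical list of reported ages (the cutoff and the window are illustrative):

```python
# Crude screen for age displacement around an eligibility cutoff (e.g. 50),
# where interviewers might "age up" women to trigger the shorter questionnaire.
# Data and thresholds are hypothetical.

def displacement_ratio(ages, cutoff=50, window=3):
    """Ratio of respondents just above the cutoff to those just below.
    Values well above 1 are a red flag worth investigating."""
    just_below = sum(1 for a in ages if cutoff - window <= a < cutoff)
    just_above = sum(1 for a in ages if cutoff <= a < cutoff + window)
    return just_above / just_below if just_below else float("inf")

ages = [46, 47, 48, 49, 50, 50, 51, 51, 52, 53, 54]
print(displacement_ratio(ages))  # ~1.67: a pile-up just past the cutoff
```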

What follows is an email exchange I had with Marcus Bohme at the Kiel Institute for the World Economy:

MB: I read your post "Notes from the Field: How to incentivize your survey team" and would like to make one comment based on the experiences we had with a large-scale household survey in Moldova that investigates the impact of migration on children and elderly left behind (CELB) in migrant families. We interviewed 3,568 household heads and, based on the household roster, we individually interviewed 3,375 caregivers, 1,177 children, and 2,170 elderly. In order to decrease data-entry costs and to allow close monitoring of the survey, as well as almost instant feedback to interviewers, CAPIs were implemented using inexpensive netbooks. We paid a piece rate, and I can confirm two observations you made in your post: (1) piece rates kept the skipping rate of possible respondents very low, and (2) monitoring was immensely important. To monitor the quality of the data we used classic indicators (e.g. missing values, consistency checks, etc.), but the CAPI format of the survey also allowed us to check the duration of each section of each questionnaire. This meta-data allowed us to identify falsification and low performance very quickly and talk to the interviewer about it. During the analysis of the duration data we realized that there was a stark learning curve in interview duration. We found a decrease in interview duration of almost 50 percent, which translates into a significant increase in the average hourly wage the interviewers receive. Thus, when talking about the piece rate and the market-clearing wage, one should take into account that the effective piece-rate wage will increase over time. (We summarized these findings in a little research note: http://www.ifw-members.ifw-kiel.de/publications/guidelines-for-the-use-of-household-interview-duration-analysis-in-capi-survey-management/KWP_1779.pdf )

MG: One question -- when you say that the piece rate led to low skipping of possible respondents -- I assume you mean that they are more likely to find respondents? Two thoughts: 1. I am worried that a piece rate leads to skipping of questions within the survey -- i.e., if there is a skip code that you know cuts out a whole lot of questions, the enumerators will do what they can to make that skip happen. 2. With a piece rate, the incentive is to find the easy respondents and generate non-response reasons for the more remote ones -- and I think there are some studies which show this...

MB: 1. You are right, the individual interviews were based on the household roster. Thus, to get around a high individual-interview non-response rate, the interviewers would have had to simply leave out, say, one child so as not to have to do the individual interview. But since our sampling was based on the Moldovan LFS, we knew how many household members there were and we also knew certain characteristics (such as migrant status, etc.). There were some deviations, but we called these problematic households to check and it turned out to be fine. 2. We also thought about the selection of the "easy" respondents by the interviewers. To partly address this problem, we skewed the payment scheme a little bit: we set the piece rate for the individual interviews 50 percent higher than for the roster. Thus we converted the individual interviews into a cash cow which could not be milked without the roster.
Overall, the low non-response rates are probably also due to our close monitoring, since most interviewers got feedback within 24-48 hours and were not paid until we had accepted the quality of each interview. We also made that very clear during the training.
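
For readers who want to try something like the duration-based monitoring Marcus describes, here is a minimal sketch of flagging suspiciously short interviews from CAPI timing metadata; the data layout and the "fraction of the median" rule are assumptions for illustration, not the Kiel team's actual procedure.

```python
import statistics

# Hypothetical CAPI metadata: interview durations in minutes, by enumerator.
durations = {
    "enum_01": [52, 48, 55, 51, 47],
    "enum_02": [50, 49, 21, 53, 19],  # two suspiciously short interviews
}

def flag_short_interviews(durations, fraction_of_median=0.6):
    """Flag interviews shorter than a fraction of the median duration.
    Very short interviews can signal skipped sections or falsification,
    though a learning curve will also pull durations down over time."""
    all_values = [d for ds in durations.values() for d in ds]
    cutoff = fraction_of_median * statistics.median(all_values)
    return {enum: [d for d in ds if d < cutoff] for enum, ds in durations.items()}

print(flag_short_interviews(durations))
# {'enum_01': [], 'enum_02': [21, 19]}
```

In practice you would want to do this by questionnaire section and track the trend over fieldwork, since the learning-curve effect Marcus mentions will shift the whole distribution downward.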

Submitted by TC on
Hi Markus -- An interesting piece. I stumbled upon it while doing some preliminary research on teacher evaluation and incentives. I see many parallels between the challenges you identify in your post and the challenges researchers in my field face in identifying appropriate and effective policies. One question for you -- and I apologize if the answer is an obvious one -- how often, if ever, are survey "teams," as you've called them, actually organized and incentivized as teams? In other words, the piece rates and efficiency wages are applied to individuals on your research teams, rather than to the teams as a group. We see a similar pattern in education. Would it ever be feasible to tell a team of evaluators that NONE gets a monetary bonus unless ALL qualify cumulatively? The idea, of course, would be to create some peer pressure that would drive quality, efficiency, etc. Or could you combine incentives (e.g., some for individual quality, others for group efficiency)? Has this been done? Given the geography, would these approaches be too difficult? Masterlock did this in Mexico with great results a few years ago... I'm exploring the potential for this model in educational contexts and finding a mix of encouraging examples and discouraging obstacles. I'd be curious to hear your thoughts. Thanks for an interesting and thought-provoking piece!