Submitted by Jeff Weaver on

Great post, and one that most grad students starting out in development research could really benefit from. Something this post brings to mind, and that researchers would do well to note, is that it can be really beneficial to check the distribution of responses by surveyor for all questions, not just the ones that activate skip patterns or might be prone to fraud.

There are some questions where a certain amount of probing is required to get a proper answer, such as "How many loans have you taken in the last month?" or "If you needed a quick loan of $100, who could you ask?". While we try to minimize the need for probing in survey design, it usually cannot be eliminated entirely. Surveyors may exert different amounts of effort in getting a response, ranging from too little (no loans) to too much (the respondent feels pressured to make up loans). Probing too much or too little isn't necessarily a sign of fraud, but it is something you want to standardize across surveyors; looking at your data after a week can help you figure out which surveyors need retraining to probe more or less.

A second benefit is that surveyors sometimes misunderstand subtle distinctions in questions without any intent to commit fraud. For example, I worked on a survey where we asked women if they had received antenatal care. When we looked at our data, we realized that one surveyor was recording "yes" much more often than the others because she had misunderstood what antenatal care entailed. Our supervisors and backcheckers hadn't yet picked this up, since the misunderstanding was only apparent some of the time.

Finally, this kind of check helps in giving specific feedback to surveyors in a way that lets them know you are checking their work carefully (which helps lower the incidence of fraud) while being more constructive than conversations revolving around backchecks, which in my experience often create a lot of tension.
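To make the distribution check concrete, here is a minimal sketch in Python/pandas of the kind of per-surveyor tabulation I have in mind. The file name, column names (surveyor_id, num_loans, antenatal_care), and the flagging cutoff are all hypothetical placeholders, not taken from any particular survey:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, with the surveyor's ID,
# the loan-count question, and a yes/no question on antenatal care.
df = pd.read_csv("survey_week1.csv")  # columns: surveyor_id, num_loans, antenatal_care, ...

# Distribution of loan counts by surveyor: big differences in the share of
# zero-loan responses can signal under- or over-probing rather than fraud.
loan_dist = (
    df.groupby("surveyor_id")["num_loans"]
      .agg(n="count", mean="mean", share_zero=lambda x: (x == 0).mean())
)
print(loan_dist.sort_values("share_zero"))

# Same idea for a yes/no question: a surveyor whose "yes" rate is far from
# the overall rate may have misunderstood the question.
overall_yes = (df["antenatal_care"] == "yes").mean()
yes_by_surveyor = df.groupby("surveyor_id")["antenatal_care"].apply(
    lambda x: (x == "yes").mean()
)
flagged = yes_by_surveyor[(yes_by_surveyor - overall_yes).abs() > 0.15]  # illustrative cutoff
print("Surveyors to follow up with:\n", flagged)
```

The point isn't the specific cutoff; it's that a ten-line script run weekly on the raw electronic data tells you which surveyors to retrain before bad habits harden.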

In general, I think this is one of the main benefits of electronic surveying: getting the data back instantly allows you to find patterns that could otherwise be missed during scrutiny and even backchecking. With backchecking, the sample size is usually too small to detect these subtle errors, especially since backcheck forms often omit the questions where we know the responses are likely to be unstable. Researchers are also exploring some pretty interesting ways to improve fraud detection within survey software, e.g. http://dl.acm.org/citation.cfm?id=2481404
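As a rough illustration of why the full electronic dataset can catch what a small backcheck sample usually cannot, here is a sketch along the same lines (again with hypothetical column names, using scipy's chi-square test) that flags surveyors whose yes/no distribution differs sharply from everyone else's:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical full electronic dataset; a ~10% backcheck sample would rarely
# have the power to detect a subtle, intermittent misunderstanding like this.
df = pd.read_csv("survey_week1.csv")  # columns: surveyor_id, antenatal_care, ...

# For each surveyor, compare their yes/no counts against everyone else's.
for sid, grp in df.groupby("surveyor_id"):
    rest = df[df["surveyor_id"] != sid]
    table = [
        [(grp["antenatal_care"] == "yes").sum(), (grp["antenatal_care"] == "no").sum()],
        [(rest["antenatal_care"] == "yes").sum(), (rest["antenatal_care"] == "no").sum()],
    ]
    _, p, _, _ = chi2_contingency(table)
    if p < 0.01:  # illustrative threshold; adjust for multiple comparisons in practice
        print(f"Surveyor {sid}: yes-rate differs from the rest (p={p:.3f})")
```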