Getting to better data: who does the editing?

In a previous post I talked about some issues with collecting gender-disaggregated data in practice. Susan Watkins helpfully pointed me to a number of papers that provide more systematic and thoughtful evidence on data collection issues that a lot of us face, and I thought it would be useful to summarize some of them here.

This week I start with a nice paper by Mariano Sana and Alexander Weinreb (gated version here), who tackle the question of who is best equipped to edit data, including some theory and an experiment.

We all know this problem -- you are going through your data and checking the summary statistics, and you discover a farmer's field where, if he had actually used that much fertilizer, the field would be 4 feet deep in manure. How on earth did this observation get here? And, more immediately, what to do about it?
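To make this concrete, here is a minimal sketch of the kind of plausibility check that turns up these observations. This is my own illustration in Python, not anything from the paper: the variable names, the data, and the threshold are all made up.

```python
import pandas as pd

# Hypothetical household survey extract: the column names, values, and
# cutoff below are purely illustrative.
df = pd.DataFrame({
    "hh_id": [101, 102, 103],
    "plot_area_ha": [0.5, 1.2, 0.3],
    "fertilizer_kg": [40, 60, 9000],  # 9000 kg on 0.3 ha is the "4 feet deep" case
})

# Flag observations where fertilizer use per hectare is implausibly high.
MAX_KG_PER_HA = 1000  # illustrative threshold, set by the survey team
df["kg_per_ha"] = df["fertilizer_kg"] / df["plot_area_ha"]
flagged = df[df["kg_per_ha"] > MAX_KG_PER_HA]

# These flagged rows are what someone -- in the field or back in the office --
# then has to decide what to do with.
print(flagged[["hh_id", "plot_area_ha", "fertilizer_kg", "kg_per_ha"]])
```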

As Sana and Weinreb explain, there are two schools of thought about this. On one hand, you have the folks who think the data is what it is, and the analyst (or the organization releasing the data) ought to make the changes. On the other hand are folks (including me) who think these things ought to be resolved in the field. What we usually do is put in place some form of data editing -- i.e. someone who reviews the data before the team comes out of the field. This can include an in-country back office looking at entered questionnaires and sending the unacceptable ones back out into the field, or, as I prefer to do, a field supervisor out with the team who looks over questionnaires as soon as possible after they are collected (preferably each evening, especially when the survey will cover multiple locations). If an inconsistency is discovered, the enumerator is usually sent back out to resolve it. The tricky bit, of course, is to do this in such a way that the enumerator actually resolves it with the respondent (if possible) and doesn't make the answer up, while at the same time maintaining the incentives for timely data collection. And of course, sometimes the enumerator cannot find the respondent and/or has problems getting a straight answer. In that case we might want the enumerator to come up with an educated guess as to what the answer is, perhaps in consultation with the field supervisor.

Why might this be a bad idea? Sana and Weinreb lay out a number of arguments. First, this results in a real case-by-case approach, and since science is all about replication -- including procedural replication -- this is a deviation. Second, the field workers usually have less education and certainly less statistical training than the researchers, so they might not be the best equipped. Finally, the field worker might be trying to please the researcher, and so might push the answer in a direction that introduces confirmatory bias into the response.

In the other corner, Sana and Weinreb lay out some arguments in favor of field editing over "outsider" editing. The first is that while the outsider will be more consistent (e.g. writing some code to correct inconsistencies), it is not clear that the result is more valid (they use a nice example of an inconsistency between a question on pregnancy and one on virginity -- the geek in Washington has no idea whether the virginity answer or the pregnancy answer was wrong). Tackling the argument about the researcher's higher education relative to the fieldworker's is easy -- cluelessness is not a function of education. Moreover, a good fieldworker will have picked up a bunch of things in the interview that didn't make it onto the questionnaire, and hence has a distinct informational advantage. Finally, slanting the data in favor of hypotheses is not limited to fieldworkers: consciously or unconsciously, it can affect the editing done by researchers as well.
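For the pregnancy/virginity example, the "outsider" version of editing amounts to a rule like the sketch below (again my own Python illustration, with hypothetical variable names and codings, not the paper's data). The point is that the rule flags the inconsistency perfectly consistently, but it has no way of knowing which of the two answers is the wrong one.

```python
import pandas as pd

# Hypothetical codings: 1 = yes, 0 = no. Neither the variable names nor
# the data come from the paper; this just illustrates the logic.
df = pd.DataFrame({
    "resp_id": [1, 2, 3],
    "ever_had_sex": [0, 1, 0],
    "currently_pregnant": [0, 0, 1],
})

# Consistency rule: a respondent cannot report both never having had sex
# and being currently pregnant.
inconsistent = df[(df["ever_had_sex"] == 0) & (df["currently_pregnant"] == 1)]

# The rule is applied identically to every respondent, but from the office
# there is no way to tell which of the two answers should be corrected.
print(inconsistent)
```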

This leaves Sana and Weinreb with two questions: 1) are insiders (the fieldworkers) better editors than outsiders? and 2) "Does editing ability improve with additional input?" To answer these, they set up an experiment. Working with a survey team and data that had already been collected, they tackle question 1 by introducing a set of inconsistencies into the questionnaires, changing only one of each pair of linked questions. To tackle question 2, they restrict the information that participants can see to only parts of the questionnaire in an initial edit; then, for a second edit, they allow participants access to the full questionnaire, as well as to other households in the community.

They conduct this experiment with a small but multi-level group of folks who worked on the survey: 4 interviewers, 2 field supervisors, 2 data managers, and 2 data users. The first two groups were based in the field, and the last two were based in the US. Everyone was asked to edit the questionnaires using a computer -- so no revisits to the field were possible (and indeed, it was statistically unlikely that the field staff had actually worked on these particular questionnaires). So relative to how this often works in practice, the field staff are likely to underperform, since they can't go back to the household.

What do they find? One interesting result is that there is clear within-group heterogeneity -- as anyone who has managed a field team knows. But the cross-group differences are bigger. And, perhaps not surprisingly, the researchers (the data users) are the worst at fixing the data. And they're really not good: with restricted information, they get less than half of the edits right. Even with full access to information, they still come in below 60 percent. Interestingly, interviewers and field supervisors perform about the same -- getting about 70 percent right with restricted information and 80 percent right with unrestricted information. Data managers do worse than the supervisors and interviewers with restricted information, but close this gap with unrestricted information.

So this provides some things to think about. While revisiting the respondent is a first best, it seems that good inference about what is going on can come from the field staff even when they are using the data in the same way a researcher would (looking through it on a computer). Of course, it also confirms the importance of giving whoever is editing an individual question access to as much surrounding information as possible, and the fact that the field folks have more of a clue about what the accurate answers are likely to be than we researchers do.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
