
Getting to better data: who does the editing?


In a previous post I talked about some issues with collecting gender-disaggregated data in practice. Susan Watkins helpfully pointed me to a number of papers which provide more systematic and thoughtful evidence on data collection issues that a lot of us face, and I thought it would be useful to summarize some of them here.

This week I start with a nice paper by Mariano Sana and Alexander Weinreb (gated version here) who tackle the question of who is best equipped to edit data, with both theory and an experiment.

We all know this problem -- you are going through your data, checking the summary statistics, and you discover a farmer's field where, if he actually used that much fertilizer, it would be four feet deep in manure. How on earth did this observation get here? And, more immediately, what do you do about it?

As Sana and Weinreb explain, there are two schools of thought about this.   On one hand you have the folks who think the data is what it is, and the analyst (or the organization releasing the data) ought to make the changes.   On the other hand are folks (including me) who think these things ought to be resolved in the field.   What we usually do is put in place some form of data editing -- i.e. someone who reviews the data before the team comes out of the field.    This can include an in-country back office looking at entered questionnaires and sending the unacceptable ones back out into the field, or, as I prefer to do, a field supervisor out with the team who looks over questionnaires as soon as possible after they are collected (preferably each evening, especially when the survey will cover multiple locations).    If an inconsistency is discovered, the enumerator is usually sent back out to resolve it.   The tricky bit, of course, is to do this in such a way as to get the enumerator to actually resolve it with the respondent (if possible) and not make the answer up while, at the same time, maintaining the incentives for some sort of timely data collection. And of course, sometimes the enumerator cannot find the respondent and/or has problems getting a straight answer.   In this case we might want the enumerator to come up with an educated guess as to what the answer is, perhaps in consultation with the field supervisor.
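The back-office review step described above can be partly automated. Here is a minimal sketch, assuming hypothetical field names and a made-up plausibility ceiling (none of these come from the post or the paper): the point is that the script only *flags* suspect records for the field supervisor to send back out, rather than correcting them silently.

```python
# Sketch of an automated pre-review pass: flag implausible records for the
# field supervisor rather than editing them in place. Field names and the
# plausibility threshold are illustrative assumptions.

MAX_FERTILIZER_KG_PER_HA = 1000  # hypothetical plausibility ceiling

def flag_for_field_review(records):
    """Return (household_id, issue) pairs for the supervisor to follow up on."""
    flags = []
    for rec in records:
        rate = rec["fertilizer_kg"] / rec["plot_area_ha"]
        if rate > MAX_FERTILIZER_KG_PER_HA:
            flags.append((rec["household_id"],
                          f"fertilizer rate {rate:.0f} kg/ha looks implausible"))
    return flags

survey = [
    {"household_id": "HH-01", "fertilizer_kg": 40, "plot_area_ha": 0.5},
    {"household_id": "HH-02", "fertilizer_kg": 4000, "plot_area_ha": 0.25},
]

for hh, issue in flag_for_field_review(survey):
    print(hh, "->", issue)
```

Note that the script deliberately stops at flagging: deciding what the right answer is stays with the enumerator and, ideally, the respondent.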

Why might this be a bad idea? Sana and Weinreb lay out a number of arguments.   First, this results in a real case-by-case approach.   And since science is all about replication -- including procedural replication -- this is a deviation.   Second, the field workers usually have less education and certainly less statistical training than the researchers, so they might not be the best equipped.   Finally, the field worker might be trying to please the researcher, and so push answers in a direction that introduces confirmatory bias.

In the other corner, Sana and Weinreb lay out the arguments in favor of field editing over "outsider" editing. The first is that while the outsider will be more consistent (e.g. by writing code to correct inconsistencies), it is not clear the result is more valid (they use a nice example of an inconsistency between questions on pregnancy and virginity -- the geek in Washington has no idea whether the virginity answer was wrong or the pregnancy answer was wrong).   Tackling the researcher-has-more-education argument is easy -- cluelessness is not a function of education.   Moreover, a good fieldworker will have picked up a bunch of things in the interview that didn't make it onto the questionnaire, and hence has a distinct informational advantage.   Finally, slanting the data in favor of hypotheses is not limited to fieldworkers; consciously or unconsciously, it can affect researchers' editing as well.
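The pregnancy/virginity example can be made concrete. An outsider's consistency rule, sketched below with illustrative question codes and answer values (not the survey's actual instrument), can detect that two answers contradict each other -- but it cannot say which one is wrong; that knowledge stays in the field.

```python
# A cross-question consistency rule in the "outsider" style: it detects
# that two answers contradict each other, but cannot tell which one is
# wrong. Question codes and answer values are illustrative assumptions.

def check_pregnancy_virginity(answers):
    """Return None if consistent, else the pair of conflicting answers."""
    if (answers.get("ever_had_sex") == "no"
            and answers.get("currently_pregnant") == "yes"):
        return ("ever_had_sex=no", "currently_pregnant=yes")
    return None

conflict = check_pregnancy_virginity(
    {"ever_had_sex": "no", "currently_pregnant": "yes"}
)
print("inconsistent:" if conflict else "consistent", conflict)
```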

So this leaves Sana and Weinreb with two questions: 1) are insiders (the fieldworkers) better editors than outsiders? and 2) "Does editing ability improve with additional input?" To resolve this, they set up an experiment. Working with a survey team and already-collected data, they tackle question 1 by introducing a set of inconsistencies across questions in a questionnaire, changing only one of each set of paired questions.   To tackle question 2, they restrict the information that participants can see to only parts of the questionnaire in an initial edit.   Then, for a second edit, they allow participants access to the full questionnaire, as well as to other households in the community.
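The manipulation behind question 1 -- corrupting one member of a pair of linked questions so the two answers disagree -- could be simulated along these lines. This is a sketch under stated assumptions: the paper does not publish code, and the field names, question pair, and corruption rule here are all hypothetical.

```python
import random

# Simulating the paired-question manipulation: corrupt exactly one member
# of a linked pair so the two answers now contradict each other. Field
# names, the specific pair, and the corruption rule are illustrative
# assumptions, not the paper's.

def introduce_inconsistency(record, rng):
    rec = dict(record)  # leave the original record untouched
    if rng.random() < 0.5:
        # Corrupt the first member: fewer children ever born than alive.
        rec["children_ever_born"] = rec["children_alive"] - rng.randint(1, 3)
    else:
        # Corrupt the second member: more children alive than ever born.
        rec["children_alive"] = rec["children_ever_born"] + rng.randint(1, 3)
    return rec

rng = random.Random(0)
original = {"children_ever_born": 3, "children_alive": 3}
edited = introduce_inconsistency(original, rng)
print(original, "->", edited)
```

Either branch leaves the edited record reporting more living children than children ever born, an impossibility an editor should catch.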

They conduct this experiment with a small but multi-level group of people who worked on the survey:   4 interviewers, 2 field supervisors, 2 data managers, and 2 data users.   The first two groups were based in the field, and the last two were based in the US. All were asked to edit the questionnaires on a computer -- so no revisits to the field were possible (and indeed, it was statistically unlikely that the field staff had actually worked on these particular questionnaires).   So, relative to how this often works in practice, the field staff are likely to underperform since they can't go back to the household.

What do they find? One interesting result is that there is clear within-group heterogeneity -- as anyone who has managed a field team knows.   But the cross-group differences are bigger.   And, perhaps not surprisingly, the researchers are the worst at fixing the data -- and not just in relative terms: with restricted information, they get less than half of the edits right.   Even with full access to information, they still come in below 60 percent.   Interestingly, interviewers and field supervisors perform about the same, getting about 70 percent right with restricted information and 80 percent right with unrestricted information. Data managers do worse than the supervisors and interviewers with restricted information, but close this gap with unrestricted information.

So this provides some things to think about.   While revisiting the respondent is a first best, it seems that good inference about what is going on can come from field staff even when they are using the data the same way a researcher would (looking through it on a computer). It also confirms the importance of giving whoever is editing an individual questionnaire access to as much surrounding information as possible -- and the fact that the field folks have more of a clue about what the accurate answers are likely to be than we researchers do.


Submitted by Ron on
Markus: Your post is a timely one because it picks up on a very active literature in the field of implementation / public administration research that falls under the rubric of "street-level bureaucracy". The field worker, like a front-line worker in a line ministry, is really the only contact person the survey respondent encounters, and this discretionary power gives them considerable influence over how the intervention is experienced. The takeaway from this large literature is that street-level or front-line staff will operate in a fashion that is largely beyond the control of senior / head-office managers. This does not always mean that these workers actively sabotage the aims or objectives, but their actions might have this result unwittingly. Economists have thought about this in terms of the principal-agent framework, and there is a resulting optimal-contracting literature that tries to align the differing incentives between the principal (the investigators) and the agents (the field workers). Indeed, one recommendation is for senior researchers/managers to focus their attention on the wording of the employment contracts issued to workers, or rather to the firms that are typically sub-contracted to collect the data in question. If there are perverse incentives, these could then be picked up earlier or addressed through improved oversight, by reducing task burden, or by appointing dedicated quality-assurance staff. However, the takeaway from the street-level bureaucracy literature is that such managerial / top-down approaches can be expected to have limited success. This is because these workers, irrespective of training or oversight (including potential punishment), will still use a variety of coping mechanisms to carry out their tasks and meet set goals given time and resource constraints.
For instance, one coping mechanism is the use of shortcuts, such as scheduling the visit or appointment at a time inconvenient for the respondent and thereby recording a no-show. Alternatively, the data collector might simply require respondents to show up at a community center to minimize effort, thereby collecting information only from those willing and able to attend. Or they might spend time with the more obliging/educated respondents to ease data collection. In the case of research from the US, social workers assigned to mete out unemployment benefits to single mothers would schedule group appointments so as to maximize waiting time for application processing, thereby creating a deterrent to mothers unable to wait because of childcare or work obligations! As such, it would be important for researchers and managers to spend some time observing how field workers are collecting information, to understand the type and nature of the coping mechanisms used in the course of their duties.

Submitted by Grant Cameron on
Markus, Edit & Imputation (E&I) is an integral part of the survey process so it's great to see a blog that highlights these issues. A couple of other points are worth noting. First, the impact of response errors is not absolute. Errors might be influential for some estimates, but have negligible impact on others. This is worth considering when balancing the trade-off between accuracy and the costs of E&I - especially if multiple visits to the same respondent are being contemplated. Second, good E&I is a tool for finding out more about errors and error sources. As such, carefully documenting when and why adjustments to the raw data are made will lead to improvements in the survey process. So no matter who (insiders or outsiders) or what (computer algorithms) is doing the imputations, it is important to store these decisions in indicators and metadata to improve future surveys - and to distribute these experiences to others. I look forward to your next installment.
