
Unobserved errors in Earth observation


The huge increase in the availability and use of remotely sensed data in development work (highlighted in my last post) has substantially expanded both the geographic scale and the scope of questions that researchers can examine. The promise of these data is information on outcomes or treatments in locations where data collection has historically been impossible, whether because of difficulties in physical access or because of the prohibitive cost of collecting data at the necessary scale.

But despite the excitement inherent in these new abilities, researchers should exercise caution: the rapid expansion in the use of remotely sensed data may have outstripped our understanding of the measurement errors embodied in these data and the implications of those errors for our estimates. This risk is particularly acute for data that are readily available and used by researchers ‘off the shelf.’ While making large remotely sensed datasets available for anyone to download substantially increases their usefulness, it also increases the distance between the end users of the data, who may not understand the intricacies of how the data were created, and the data’s creators.

Remote sensing and measurement error

Measurement error in remotely sensed data poses a particularly acute challenge for estimation because much of this error is likely non-classical. Non-classical error leads not only to attenuation and a loss of precision but also to bias of unknown sign, depending on the structure of the error.
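To make the distinction concrete, here is a small simulation of my own (not from the paper). Classical noise in the outcome leaves the estimated treatment effect unbiased, merely noisier; one simple non-classical structure – mean-reverting error – attenuates it, and other structures can push the estimate in either direction.

```python
# Illustrative simulation (my example, not the paper's): classical vs.
# non-classical (mean-reverting) measurement error in an outcome variable.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treat = rng.integers(0, 2, n)                     # binary treatment
y_true = 1.0 + 2.0 * treat + rng.normal(0, 1, n)  # true effect = 2

# Classical error: noise independent of everything else.
y_classical = y_true + rng.normal(0, 1, n)

# Non-classical error: observed values are pulled toward the mean, so the
# error is negatively correlated with the true outcome.
y_nonclassical = y_true - 0.4 * (y_true - y_true.mean()) + rng.normal(0, 1, n)

def ols_slope(y, x):
    """Slope coefficient from a bivariate OLS regression of y on x."""
    return np.cov(y, x)[0, 1] / np.var(x, ddof=1)

print(ols_slope(y_classical, treat))     # ~2.0: unbiased, just noisier
print(ols_slope(y_nonclassical, treat))  # ~1.2: attenuated by a factor of 0.6
```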

A growing body of literature is attempting to grapple with these challenges. Its aim is to understand the implications of different sorts of measurement error in remotely sensed products for estimation and to outline ways to correct for them. I consider two papers from this literature, one here and one in my next post.

One strategy to fix it

The first of these papers – by Jennifer Alix-Garcia and Daniel Millimet – grapples with measurement error in a specific, but quite common, context. They consider the case where a researcher has used remotely sensed data to measure the outcome variable they are interested in examining and where the remotely sensed data has been reclassified (either by the researcher or by the creator of the remotely sensed data) from a continuous measure to a binary indicator.

Specifically, they consider the impact that error in the commonly used Hansen deforestation data set might have on the estimated effects of a payments-for-ecosystem-services (PES) program that paid Mexican landowners to reduce deforestation on their land. The Hansen data classify each pixel as forested or not (1 or 0). In this context, misclassification error in the outcome is non-classical because it is correlated with the true value of the outcome. The authors take advantage of the fact that the Mexican government produces a separate remotely sensed measure of deforestation that can be compared against the Hansen data in certain years. They find substantial disagreement between the two measures in 2010. That does not imply that either is more correct; it does, however, indicate that at least one contains measurement error at least some of the time.
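A one-line derivation (standard, not specific to the paper) shows why binary misclassification is necessarily non-classical. Let $Y^{*} \in \{0,1\}$ be the true forest status, $Y \in \{0,1\}$ the observed classification, and $u = Y - Y^{*}$ the error:

$$
u = Y - Y^{*} \in
\begin{cases}
\{0,\, 1\} & \text{if } Y^{*} = 0,\\
\{-1,\, 0\} & \text{if } Y^{*} = 1,
\end{cases}
$$

so the error can only be non-negative for true zeros and non-positive for true ones, which forces $\operatorname{Cov}(u, Y^{*}) < 0$ whenever any misclassification occurs.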

The authors’ proposed solution to the problem of misclassification takes advantage of the observation that other, non-remotely sensed, covariates predict the locations where the two data sets disagree. If the probability of misclassification varies systematically with covariates observable to the researcher, then it is possible to construct an estimator that models the probability of misclassification and corrects the observed remotely sensed outcomes accordingly. The analysis is then conducted using the corrected data.
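In code, a heavily simplified two-step version of this idea might look like the sketch below. To be clear, this is my own schematic, not the authors’ estimator (they embed the misclassification probabilities in a maximum-likelihood framework), and it treats disagreement between two measures as a stand-in for misclassification, which is itself an approximation.

```python
# Schematic two-step correction (my simplification, not the authors'
# estimator): predict misclassification from covariates, then correct
# the observed binary outcome.
import statsmodels.api as sm

def corrected_outcome(y_obs, X, disagree):
    """
    y_obs:    observed binary outcome (e.g., the Hansen forest indicator)
    X:        covariates believed to predict misclassification
    disagree: indicator that two independent measures disagree, used here
              as a noisy proxy for misclassification
    Returns the expected true outcome given the observed classification.
    """
    # Step 1: estimate Pr(misclassified | X) with a simple logit.
    Xc = sm.add_constant(X)
    p_wrong = sm.Logit(disagree, Xc).fit(disp=0).predict(Xc)

    # Step 2: if y_obs = 1, the truth is 1 with probability (1 - p_wrong);
    # if y_obs = 0, the truth is 1 with probability p_wrong.
    return y_obs * (1 - p_wrong) + (1 - y_obs) * p_wrong
```

The corrected outcome would then replace the raw indicator in the treatment regression.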

How does predicting misclassification do in simulation?

The authors conduct a Monte Carlo simulation to test the improvement in estimation that this approach can deliver. Three things are worth noting about their results. First, the estimators they test that do not adjust for misclassification are consistently biased. Second, the common practice of including a vector of covariates believed to be correlated with misclassification error as controls in the regression – rather than explicitly modeling the misclassification – does not systematically reduce bias and may exacerbate it. Third, the performance of the estimators varies with the share of ones in the data: the relative advantage of the misclassification-adjusted estimators appears to be greatest when the outcome of interest is relatively rare.
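The second finding is easy to reproduce in stripped-down form. In the toy Monte Carlo below (my construction, not the paper’s simulation design), the probability that the outcome is misclassified rises with a covariate z, and adding z as a control does essentially nothing to remove the attenuation.

```python
# Toy Monte Carlo (my construction, not the paper's): controlling for a
# covariate that predicts misclassification does not remove the bias.
import numpy as np

rng = np.random.default_rng(1)
naive, controlled = [], []

for _ in range(500):
    n = 5_000
    treat = rng.integers(0, 2, n)
    z = rng.normal(0, 1, n)  # covariate that predicts misclassification
    y_true = (rng.random(n) < 0.10 + 0.10 * treat).astype(float)  # true effect = 0.10

    # Flip the observed classification with probability increasing in z.
    p_flip = 1.0 / (1.0 + np.exp(-(z - 2.0)))  # roughly 15% on average
    y_obs = np.where(rng.random(n) < p_flip, 1.0 - y_true, y_true)

    X_naive = np.column_stack([np.ones(n), treat])
    X_ctrl = np.column_stack([np.ones(n), treat, z])
    naive.append(np.linalg.lstsq(X_naive, y_obs, rcond=None)[0][1])
    controlled.append(np.linalg.lstsq(X_ctrl, y_obs, rcond=None)[0][1])

print(np.mean(naive), np.mean(controlled))  # both ~0.07, well below 0.10
```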

Does this matter in the real world?

The authors turn next to examining what these estimators mean in the context of an actual program evaluation. Based on their performance in the Monte Carlo analysis, the authors apply the misclassification-adjusted estimators to estimate the treatment effect of a PES scheme in Mexico. They find that the treatment effect estimated with their preferred misclassification-adjusted estimator is three times as large as, and more precisely estimated than, the estimate from the most commonly used estimator.

Both the simulation results and the application to a real-life program evaluation suggest that the methods proposed by the authors have promise to help alleviate the bias created by measurement error in certain types of remotely sensed data. In particular, their methods offer an approach that appears better than the common practice of simply adding covariates thought to predict measurement error as controls.

There remain substantial limitations (which they acknowledge) to these proposed approaches. One is that researchers must have a set of covariates that are accurately measured and can be reasonably expected to predict measurement error in the outcome of interest. This may be a particularly hard requirement to meet in practice in the precise settings where remotely sensed data is most useful. Often it will be tempting to use other, potentially mis-measured, remotely sensed data in this set of covariates. The effectiveness of such an approach remains untested; the authors here assume these covariates are correctly measured.

Another limitation is that the approach applies only to mismeasurement in binary outcome measures. That leaves aside settings where the outcome measure is remotely sensed but continuous, as well as those that use binary or continuous remotely sensed data as an independent variable. Potential solutions to the challenges in these settings are the topic of the next post.


Authors

Patrick Behrer

Economist, Development Research Group
