
You are in school. Or, so you say…

Berk Ozler

Regardless of whether we do empirical or theoretical work, we all have to use information given to us by others. In the field of development economics, we rely heavily on surveys of individuals, households, facilities, or firms to find out about all sorts of things. However, this reliance has been diminishing over time: we now also collect biological data, try to incorporate more direct observation of human behavior, or conduct audits of firms. And with good reason: self-reports can provide a poor reflection of reality.

It’s not that people haven’t been skeptical of self-reported data before – we’re usually suspicious of people reporting their sexual behavior, or, say, firms reporting about corruption. Markus has recently talked about couples hiding information from each other here and here. And researchers have been doing interesting work to elicit better information from people (or at least to identify who is likely to be fibbing), such as in this paper in EDCC (2009) (gated version here). Jed has blogged about measuring consumption through surveys, in which he refers to all kinds of very interesting survey experiments that my colleagues here are conducting on the consumption, labor market, and agriculture modules in standard household surveys. I know that Julian Jamison is experimenting with data collection methods in this project.

Notice the thread here. Most, if not all, of the information above refers to topics that we intuitively consider sensitive: spouses hiding things from each other, respondents not wanting to tell an enumerator how much cash or assets they have, people or firms not wanting to admit to possibly illegal activities, most individuals being uncomfortable talking about sex, etc. And then there are recall issues: it’s hard to remember what you ate last week, you’re not exactly sure how many bushels of corn you harvested (or how to convert that into kilograms), you don’t know how many hours your wife worked last month... In surveys, we have to deal with this stuff all the time.

However, we worry less about some other outcomes. School enrollment used to be one of them. For decades, we relied on questions like “Are you currently in school?” or “Over the past two weeks, how many days have you attended school?” There is now reason to believe that these data may not only be noisy, but that they can also seriously bias the findings of important impact evaluation studies.

Barrera-Osorio et al. (AEJ:AE, April 2011), evaluating the impact of a CCT experiment on enrollment and attendance, find that the self-reported enrollment and attendance rates in all study arms are so close to 100% that it is practically impossible to find an impact of any intervention on these outcomes. When they use monitored attendance data instead (from random spot checks of attendance), they find that enrollment and attendance rates are under 80% in the control group and only a few percentage points higher in the various intervention arms. Using self-reports alone, they would have found no impact of the program (and implausibly high secondary school enrollment rates); instead, they find small but statistically significant program impacts on both outcomes using the monitored data. (A caveat is that the self-reported and monitored data do not refer to the same school year, see footnote 24, but the authors do not think that this is the main reason underlying the discrepancy.)
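The mechanics of this ceiling effect are easy to see in a toy simulation. The sketch below is purely illustrative – the sample size, true attendance rates, and overstating probability are all made-up numbers, not the paper's figures. When non-attendees overwhelmingly claim to attend, reported rates in every arm get pushed toward 100% and the measured treatment effect all but vanishes, even though the true (monitored) effect is there.

```python
import random

random.seed(0)
N = 2000  # hypothetical sample size per arm

# Hypothetical true (monitored) attendance: ~78% in control, ~82% under treatment
true_control = [random.random() < 0.78 for _ in range(N)]
true_treat   = [random.random() < 0.82 for _ in range(N)]

def self_report(truth, overstate=0.95):
    # Attendees report truthfully; non-attendees claim attendance
    # with high probability, pushing reported rates toward 100%.
    return [t or (random.random() < overstate) for t in truth]

rep_control = self_report(true_control)
rep_treat   = self_report(true_treat)

mean = lambda xs: sum(xs) / len(xs)
print(f"monitored effect:     {mean(true_treat) - mean(true_control):+.3f}")
print(f"self-reported effect: {mean(rep_treat) - mean(rep_control):+.3f}")
```

Algebraically, if everyone who attends reports truthfully and a fraction q of non-attendees overstate, the reported rate is p + (1-p)q, so the gap between arms shrinks by a factor of (1-q) – with q = 0.95, a 4-point true effect collapses to a fifth of a point, far too small to detect at ordinary sample sizes.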

But, move a little more than 10,000 miles from Colombia to Cambodia, and Filmer and Schady (Journal of Development Economics, in press (gated), ungated working paper version here) find no evidence of misreporting in enrollment: whether they use household survey data on enrollment or data from spot checks of attendance, their impact sizes are identical. So, maybe there is hope for self-reports after all?

Now, let’s finally move another 5,000 miles from Cambodia to Malawi. In this paper I co-authored with Sarah Baird and Craig McIntosh, we show not only that students overstate the extent to which they are enrolled in school, but that the misreporting is differential by study arm (this paper does not have direct observation of school participation, as I discussed last week here, and relies instead on school ledgers and administrative data). Girls in the conditional cash transfer arm had substantially lower rates of misreporting than those in the control group or the unconditional cash transfer arm. While this sounds counterintuitive at first, it is not. Given that the survey asked about past enrollment and attendance, girls in the CCT arm were reporting on behavior that they not only knew had been closely monitored by the program, but for which they had also already been rewarded or penalized. Hence, they had little reason to exaggerate their school participation. Others in the study – for whom there were no spot checks or any overt ‘suggestion’ of schooling and who were visited only once a year by survey teams – were much more likely to overstate their enrollment rates. Given that this latter group had nothing obvious to gain from this, it is likely that this is the socially desirable answer for an adolescent girl in this context. As can be seen in Table 3, self-reported data alone would yield a substantially different finding about the relative effectiveness of CCTs and UCTs on school enrollment.
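Differential misreporting is more dangerous than the ceiling effect above, because it doesn't just shrink the measured effect – it can distort or even reverse it. The sketch below is again purely illustrative, with invented enrollment rates and overstating probabilities rather than anything from our paper: the control arm overstates heavily while the monitored-and-rewarded CCT arm barely overstates at all, and the self-reported comparison gets the ranking of the arms wrong.

```python
import random

random.seed(1)
N = 2000  # hypothetical sample size per arm

# Hypothetical true enrollment: the CCT raises it relative to control
true_control = [random.random() < 0.70 for _ in range(N)]
true_cct     = [random.random() < 0.80 for _ in range(N)]

def report(truth, overstate):
    # Non-enrolled respondents claim enrollment with probability `overstate`
    return [t or (random.random() < overstate) for t in truth]

# Control girls overstate heavily; CCT girls, whose attendance was
# monitored and already rewarded or penalized, rarely do.
rep_control = report(true_control, overstate=0.60)
rep_cct     = report(true_cct, overstate=0.10)

mean = lambda xs: sum(xs) / len(xs)
print(f"true effect:          {mean(true_cct) - mean(true_control):+.3f}")
print(f"self-reported effect: {mean(rep_cct) - mean(rep_control):+.3f}")
```

With these made-up numbers, a genuine 10-point positive effect shows up in the self-reports as a small negative one: the arm with the larger incentive to report honestly looks worse on paper. This is why differential misreporting by arm is not just noise but bias.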

These three papers point to a few lessons:

· Just as we are skeptical about identification issues in economics, we should be equally skeptical about data. Otherwise perfectly identified studies can produce bogus results if the researchers are not careful about the veracity of the outcome measures.

· When it comes to answering questions, survey respondents are like anyone else: they may have incentives to give certain answers. They may also follow social norms or want to please (or piss off) the enumerator. They may get tired and start giving the answers that will make the interview as short as possible. If you are collecting your own data, it is important to think through these issues carefully beforehand. Spend as much time (if not more) on this stuff as you will on your identification strategy.

· If you can, experiment with different ways of collecting data. This is not only good for your project, but is a public good that is very valuable to your fellow development economists. All the survey experiments mentioned above that are being conducted by my colleagues at the World Bank fit into this category. (Journal editors: please publish the findings of such experiments. Their contributions to the field are likely to be high.)

· These problems with misreporting will only get worse if you are evaluating a behavior change program and survey teams are asking study participants about that (or a related) behavior. Also remember that aggressive and frequent data collection (particularly of the kind subjects may not be used to) can change behavior and lessen external validity.

· Complement survey data with independent sources of data (tests, games, spot checks, biomarker data, administrative data, etc.). If you rely solely on self-reports, you may draw someone like me as a referee, who may give you a harder time than you’d like. In-depth interviews, by virtue of having a different structure and perhaps putting the respondent more at ease through the conversational style of experienced, specialized interviewers, can also provide valuable supporting information (for a good reference on qual-quant methods, please see this book edited by Kanbur). And, not to sound like a geek, but it is amazing how much information you can get these days out of a dried blood spot from a simple finger prick, or even a strand of hair. Check out the Summer Biomarker Institute at Northwestern University if you’re interested in including biomarker data collection in your research.