Sometimes a survey interview bequeaths more than the token gift of appreciation


When I first started field work in Indonesia as a PhD student, I observed numerous household survey interviews. Even though I didn’t speak Javanese, I was familiar with the questionnaire and so could follow the ups and downs of the household interview. These survey encounters were not trivial events for the typical household, which, almost universally, would welcome a group of strangers into their house who would then probe and ask about every aspect of their lives for up to two hours. Rather, it was very clear that certain unexpected questions gave respondents pause, provoked reflection, and perhaps ultimately changed their perspective a little.

Here is one example: slogging through a detailed household consumption questionnaire often turned into a mini-exercise in household budgeting. This exercise may in itself have conveyed a novel approach to financial planning, and I often wondered whether the survey left lasting impacts on respondents’ views and subsequent decisions related to saving and spending. I also had anecdotal evidence of at least one other influence: field staff of the Indonesia Family Life Survey mentioned that study participation influenced the subsequent choice of newborn names in respondent households!

A recent paper by Alix Zwane and numerous co-authors explores this same conundrum with harder evidence than speculation and anecdote. Specifically, the authors compare the influence of being surveyed on subsequent household knowledge and behavior in five different field studies: one study looks at home water-treatment take-up in Kenya, a second and third at health insurance take-up in the Philippines, and a fourth and fifth at micro-lending take-up in Morocco and India respectively.

The authors find distinct evidence of interview effects, at least in certain contexts. In one Philippines experiment, surveyed households were 25% more likely to take up insurance than non-surveyed households, even though the insurance offer occurred several months after the survey interview. In contrast, micro-lending take-up decisions were unaffected by the prior experience of being surveyed.

Let’s first talk about what these estimated effects most likely are NOT:

1. It doesn’t appear to be a Hawthorne effect, since the respondents were never observed after the initial interview and were not aware that they were part of a continuing study.

2. It doesn’t appear to be a “question-behavior” effect. This type of effect has been observed in various psychological and marketing studies where the elicited intent to engage in future behavior affects subsequent behavior. None of the studies asked about intentions or predictions of future behavior.

3. More generally, it doesn’t appear to be a case where respondents were trying to “please” the researchers. In the Philippines case, the take-up decision was measured through administrative records (of both surveyed and non-surveyed households), and there was no further contact between respondent and surveyor after the initial interview. In addition, the survey asked households hundreds of questions over a range of topics, with no more than six questions dwelling specifically on insurance, making it difficult for the respondent household to infer the “main topic” of the study.

Instead of the above explanations, Zwane and co-authors speculate that the survey made respondents more aware of neglected needs or opportunities (such as the decision to insure). In a cognitive environment where attention is a scarce resource, experiencing a survey is, in a sense, a cognitive manipulation insofar as it brings select issues to the fore. The survey simply served as a reminder of these topics (presumably at the expense of others), and it was likely a particularly salient reminder given the unusual context in which it occurred.

If surveys indeed have a real effect, we need to know which domains of survey information are particularly vulnerable. The authors found effects related to take-up of health insurance but not micro-loans, suggesting that borrowing decisions may already occupy a privileged space in respondents’ attention while insurance decisions do not (and hence benefit from a “reminder”). However, as the authors conclude, we don’t yet know whether surveys “work directly on attention or indirectly through intent or goal formation”.

The results also suggest that the decision to conduct a survey may be more costly than typically imagined. Perhaps less frequent surveys with larger samples would minimize this interview effect, to the extent it occurs in a given study context. And if reliable administrative data can proxy for survey measures, then these information sources should be pursued with even more vigor.



Jed Friedman

Senior Economist, Development Research Group, World Bank

Join the Conversation

Jed Friedman
November 16, 2011

Charles, thanks very much for the comment. I think the lessons you learned through practice are being re-learned by numerous other field researchers. It would be interesting to collect these experiences and develop general guidance for surveyor training...

Charles Lor
November 11, 2011

Thank you Jed,

I did numerous behavioral surveillance surveys and other maternal and child health and nutrition studies.

Precisely because of the effect you mention, and because interviewees ask questions back to the fieldworker, it soon became key for us to take the following steps:

- In addition to training on interviewing techniques and field sampling, we gave the interviewers basic sensitization (very similar to the one given to beneficiaries) on the key topics.

- We made it very clear to the fieldworkers what their role was in the conversation that was bound to happen on the topics of the questionnaire.

- We gave fieldworkers lists of facilities or counseling centers where respondents could ask questions and seek advice.

I've seen the same thing happening with farmers on farm budgets. Many of my colleagues actually wonder what the separate impact of doing these farm budgets is on efficiency and enterprise choices.
I've since pushed for surveys to be integrated into program activities, even if that meant introducing a bias in assessing the impact of the core program.