Researchers have long recognized the importance of choosing interviewer characteristics when designing fieldwork – for example, female interviewers are often used to explore topics related to domestic violence, and respondents of both sexes are more likely to disclose sexual abuse to female interviewers than to male ones. Another key consideration is the degree of familiarity between interviewer and respondent, but here the decision appears obvious. It is an accepted norm in fieldwork that survey interviewers should be strangers with no pre-existing relationship to the respondent, and there is virtually no deviation from this norm in practice. The logic behind the norm – that a pre-existing relationship can bias responses because sensitive information may be withheld to prevent its use as subsequent gossip in the community – may be valid, but it is rarely tested.
In some respects, our field assessment of high-quality interviewers contravenes this received wisdom. A good and seasoned survey interviewer in the field is often one who is able to bridge any social distance with the study subject and establish a rapport, thus quickly removing the feeling of an encounter with a stranger. If a feeling of comfort and familiarity helps the quality of the interview, perhaps there are settings where using familiar interviewers results in better quality respondent reported data.
There is little recent systematic work in this area. One recent observational study of low-income U.S. housing found that the use of local interviewers did not increase survey participation rates and did not appear to result in greater respondent trust.
The only other work that I am aware of on this topic was recently presented at the 2012 annual conference of the Population Association of America. This study, by Sana, Stecklov, and Weinreb, analyzed a simple survey experiment in a provincial town in the Dominican Republic. The study employed six experienced interviewers from the capital city as well as 24 locally recruited interviewers (residents of the same provincial town as the respondents). Training was standardized for all interviewers, and respondent lists were allocated randomly across interviewers. Under this design, any significant difference in reporting across interviewer type represents a local interviewer effect (although the design cannot assess which interviewer type obtains more accurate data, since there is no outside source of information to validate the responses).
The study asked a battery of questions related to demographic and health behaviors. For the vast majority of topics there was no significant difference in responses by interviewer type (and response rates were also identical). However, there were some interesting deviations:
- Respondents report a higher rate of contraceptive use to outsider interviewers. Among married respondents, the stranger interviewers measure an 82% rate of contraceptive use while the local interviewers measure a 50% rate (!).
- Respondents report a significantly higher level of household income to outsiders, as well as much higher levels of remittances received from abroad.
- Self-reported tolerance towards marginalized groups (such as homosexuals, Haitians, and prostitutes) is much lower with local interviewers than with stranger interviewers.
- A series of questions explored name recognition of famous people, with two invented names inserted into the list of notables. Respondents speaking to outsider interviewers were significantly more likely to report that they had heard of these fictitious people.
Before discussing the results, it must be said that this is not a perfectly identified experiment: the local interviewers are also far less experienced in conducting interviews, so the degree of interviewer familiarity may be confounded with interviewer quality. The authors are aware of this interpretive difficulty and control for deviations in observable interviewer characteristics, such as age, in the analysis. The fact that there is little divergence in reporting behavior outside the dimensions mentioned above suggests that overall differences in interviewer quality are not especially great.
(Note for researchers who plan to study this in the future: there is a straightforward way to address the problem of differential interviewer quality – expose the local and the stranger interviewers to other communities where both interviewer types are unknown to the respondents. The double difference across the two interviewer and two community types should identify any net effect of differentials in interviewer quality on reporting. Correcting for this factor, should it be observed, allows for a cleaner identification of the local interview effect.)
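The double-difference logic in the note above can be sketched in a few lines. All cell means below are invented for illustration (the Sana et al. study did not run this design); the point is only how the two gaps combine to net out interviewer quality:

```python
# Hypothetical sketch of the double-difference design described above.
# Cell means (% reporting contraceptive use) are invented for illustration.
means = {
    ("local",    "home"): 50.0,  # local interviewers in their own community
    ("stranger", "home"): 82.0,  # outside interviewers in that same community
    ("local",    "away"): 78.0,  # a community where both types are strangers
    ("stranger", "away"): 80.0,
}

# The raw gap in the home community mixes familiarity with interviewer quality.
home_gap = means[("local", "home")] - means[("stranger", "home")]

# The gap in the "away" community, where neither type is known to respondents,
# isolates any quality differential between the two interviewer pools.
quality_gap = means[("local", "away")] - means[("stranger", "away")]

# Double difference: the net local-interviewer (familiarity) effect on reporting.
local_effect = home_gap - quality_gap
print(local_effect)  # -30.0 under these invented means
```

With these made-up numbers, a naive comparison would attribute a 32-point gap to familiarity, but 2 points of it reflect the quality differential revealed in the "away" community, leaving a net familiarity effect of 30 points.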
The results from the Sana et al. study are thought-provoking and suggest conditions under which the use of community interviewers may yield more accurate information. First, look at the divergent measures of income and remittances, which are reported as far lower to local interviewers. Any information that respondents want to shield from neighbors in the community (such as the value of remittances received) will likely be under-reported to local interviewers. Likewise, if local norms frown on contraceptive use, respondents may underreport usage to avoid gossip. Clearly, in contexts where the cost of revealing information to the community is higher than the cost of revealing it to strangers, the stranger-interviewer norm applies.
On the other hand, for certain topics respondents may wish to (mis)represent themselves in alignment with prevailing social norms when speaking to outsiders. Respondents were more likely to lie about knowing a fictitious person, perhaps out of embarrassment at not recognizing presumably famous people. Respondents also depicted themselves as more tolerant to outsiders than to locals. Hence the presence of social desirability bias may argue for survey designs that use local interviewers.
Clearly it is too soon to recommend the use of local interviewers for selected surveys containing topics potentially subject to social desirability bias. This is just one small-scale study without access to validation data (such as independent confirmation of respondent behaviors). However, the results should give us pause as we design future fieldwork, and will hopefully spur additional research along the lines discussed here.