coauthored with Sabrina Roshan
Imagine you are out pretesting a survey. Part of the goal is to measure the rights women have over property. The enumerator is trying out a question: "Can you keep farming this land if you were to get divorced?" The woman responds: "It depends on whose fault it is." Welcome to yet another land where no one has heard of no-fault divorce.
There is clearly a class of questions where the answer depends on context that the survey does not and cannot provide. In cases like this, the answer will be "it depends." If the enumerator pushes, the respondent may answer based on her perception of who is likely to be at fault should this come to pass, which in the divorce example above means you are getting an answer colored by the current state of the marriage.
We were at a conference recently where Sidney Schuler presented some hard evidence on this issue from a paper she wrote with Rachel Lenzi and Kathryn Yount on Bangladesh. Their focus is on questions that measure intimate partner violence (IPV) and, in particular, the questions used by the Demographic and Health Surveys (DHS). The DHS questions are better than the divorce question above in that they supply a bit of context. For example, they ask: "Do you think a man would be justified in beating his wife if she fails to provide a meal on time?" This helps a bit, since we now know why he might beat her. But what we don't know is why she didn't prepare the meal on time. And this is something, as Schuler et al. show, that respondents will wonder about. They uncover this through qualitative work, starting with cognitive interviews with respondents (cognitive interviews are where you get respondents to tell you what they are thinking as they work out an answer to your question).
They then built specific scenarios on the back of this (and other) questions. So now there are three questions: the DHS version, a detailed scenario where the wife is genuinely busy and then a little late providing food, and a third detailed scenario where she spent the morning gossiping. And now the answers change. Taking all of their variations together (i.e. including some other questions about when beating might occur), 63 percent of the responses differed depending on whether or not the woman was "at fault".
One other thought-provoking result in this paper is that respondents might not even understand the basic question. The authors find that "when asked, 'In your opinion, do you think a man would be justified in beating his wife if she neglects their children,' 6 of 27 women and 1 of 25 men among the initial study participants misinterpreted the question as asking about child beating rather than wife beating." So, even assuming careful training and decent enumerators, this makes us think twice about what people are hearing.
In the end, this work, taken together with the pretest experience above, counsels caution about questions where context is likely to be added by the respondent. This will obviously be true for a broad class of hypothetical questions. If you had a fever, would you visit the doctor? Respondent: it depends on whether I had the money. If you were given a business loan, would you spend it on inputs or save it for future use? Respondent: it depends on the state of the business, or it depends on the size of the loan, and so on.
Finally, even with expertly trained enumerators and clear survey questions, it is critical for researchers to view the responses with this contextual understanding in mind. Making the call on whether to accept the response at hand and run with it, or to step back and think twice before labeling a relationship, is crucial. And it's where local guidance, qualitative underpinnings, and a complementary perspective come in handy.