
Sensitive survey questions

Sex, Lies, and Measurement: Do Indirect Response Survey Methods Work? (No…)

Berk Ozler

Smart people, mainly with good reason, like to make statements such as “Measure what is important, don’t make important what you can measure,” or “Measure what we treasure and not treasure what we measure.” It is rumored that even Einstein weighed in on this, saying: “Not everything that can be counted counts, and not everything that counts can be counted.” A variant of this has also become a rallying cry among those who are “anti-randomista,” used to agitate against focusing research only on questions that one can answer experimentally.

However, I am confident that researchers can generally agree that there is not much worse than the helpless feeling of not being able to vouch for the veracity of what you measured. We can deal with papers reporting null results, and we can deal with messy or confusing stories, but what gives no satisfaction to anyone is to present some findings and then have to say: “This could all be wrong, because we’re not sure the respondents in our surveys are telling the truth.” This does not mean that research on sensitive topics does not get done, but, as with the proverbial sausage, it is necessary to block out where the data came from and how it was made.

False positives in sensitive survey questions?

Berk Ozler

This is a follow-up to my earlier blog on list experiments for sensitive questions, which, thanks to our readers, generated many responses via the comments section and emails: more reading for me – yay! More recently, my colleague Julian Jamison, who is also interested in the topic, sent me three recent papers that I had not been aware of. This short post discusses those papers and serves as a coda to the earlier one…

Random response techniques (RRT) are meant to provide more valid data than direct questioning (DQ) on sensitive topics, such as corruption, sexual behavior, etc. Using a randomization device, such as a die, they introduce noise into the respondent’s answer, concealing her response to the sensitive question while still allowing the researcher to estimate the overall prevalence of the behavior in question. These techniques are attractive in principle but, in practice, as we have been trying to implement them in recent field work, one worries about the implementation details and the cognitive burden on respondents: in real life, it’s not clear that they offer an advantage large enough to warrant their use over DQ.
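To make the estimation logic concrete, here is a minimal sketch of a forced-response RRT in Python. Everything in it is an illustrative assumption rather than the design of any study discussed here: the die gives a two-thirds chance of answering truthfully and otherwise forces a “yes” with probability one half.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rrt(true_prevalence, n, p_truth=2/3, p_forced_yes=0.5):
    """Simulate forced-response RRT answers (all parameters are illustrative)."""
    truth = rng.random(n) < true_prevalence       # latent sensitive trait
    tell_truth = rng.random(n) < p_truth          # die says: answer truthfully
    forced = rng.random(n) < p_forced_yes         # otherwise: forced answer
    return np.where(tell_truth, truth, forced)    # what the enumerator records

def estimate_prevalence(observed, p_truth=2/3, p_forced_yes=0.5):
    """Back out prevalence from P(yes) = p_truth * pi + (1 - p_truth) * p_forced_yes."""
    return (observed.mean() - (1 - p_truth) * p_forced_yes) / p_truth

answers = simulate_rrt(true_prevalence=0.20, n=5000)
print(estimate_prevalence(answers))  # close to 0.20, but noisier than DQ would be
```

The same algebra also shows the cost of the privacy protection: because only a fraction of the answers carry signal, the variance of the estimate is inflated relative to DQ, so an RRT survey needs a noticeably larger sample to reach the same precision, which is part of why the practical advantage over DQ is in doubt.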