Thanks for your interest! That reminds me that I need to update the working paper (now updated to version 1.3). I am also e-mailing a colleague of mine who ran a study in Ghana, with similarly disappointing findings, to ask them to comment here. A few responses below:
re: "However, on the HIV status: is it possible that the fact that you knew the answers played a role?"
Yes, this is certainly possible. However, in that case we would also have expected the direct answers to be closer to 0 for the doubly known truth group, so getting a number even higher than the doubly known truth group's is troubling.
re: "One more question: do I understand correctly that the "direct elicitation" group did not ask the non-sensitive questions in a block, but rather one by one?"
Yes, you have interpreted the methods correctly, and that may have influenced the results, albeit only if the counting method itself was biased in some way (which, it turns out, is likely). As you note, this design choice had two motivations: 1) we wanted to explore precisely the proportions of non-sensitive questions, to better construct a future questionnaire in this population, and 2) if the counting method (list count vs. direct count) influenced the number of true answers, we would likely have introduced additional design effects and so would be unlikely to get close to the true value. One of the main discussion points from our pilot is that the counting method itself is likely one of the main culprits, and unfortunately one we didn't test directly during the pilot.
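For readers less familiar with the logic here, this is a minimal sketch of the standard list-experiment (item count) difference-in-means estimator; the data and names below are invented for illustration and are not from our pilot:

```python
# Sketch of the standard list-experiment estimator: the sensitive-item
# prevalence is estimated as the difference in mean item counts between
# the treatment group (list includes the sensitive item) and the control
# group (non-sensitive items only). All numbers are hypothetical.

def list_experiment_estimate(control_counts, treatment_counts):
    """Difference in mean counts: treatment minus control."""
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Hypothetical example: control lists have 4 non-sensitive items;
# treatment lists add the sensitive item (5 items total).
control = [2, 3, 1, 3, 2]    # counts out of 4 items
treatment = [2, 3, 2, 3, 3]  # counts out of 5 items
print(list_experiment_estimate(control, treatment))
```

The relevance to the point above: this estimator isolates the sensitive item only if the act of counting affects both arms identically. If the counting method itself shifts reported counts, that bias contaminates the difference, which is exactly the design effect we suspect in our pilot.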