
survey methods

Some advice from survey implementers: Part 2

Markus Goldstein

This is part 2 of a two-part blog on what survey implementers would tell the researchers and others who work with them (part one is here). Before we dive in, I want to reiterate my thanks to the folks at EDI and IPA, as well as James Mewera of the Invest in Knowledge Initiative, Ben Watkins at Kimetrica, and Firman Witoelar at SurveyMeter, who took the time to send me really careful thoughts and then answer my queries. As before, don’t take anything below as something specific any one of them said – I’ve edited, adjusted and merged. Blame me if you don’t like it. One final note: as you can see from the list, not every one of these is a commercial firm, and some of them do research as well – so keep that in mind when filtering the advice. I’ll abbreviate them as SO, for survey organization.
 
Please read this post as me channeling and interpreting their voices.  I am not sure I agree with everything I heard, but I am passing it on.   And all of it gave me food for thought.   Stuff in [italics] is me explicitly responding to a couple of points.   
 

Facility-based data collection: a data methods bleg

Berk Ozler

Today, I come to our readers with a request. I have a ton of experience with household and individual survey data collection. Ditto with biomarkers, assessments/tests at home, etc. However, I have less experience with facility-based data collection, especially when it is high frequency. For example, we do have a lot of data from the childcare centers in our study in Malawi, but we had to visit each facility once at each round of data collection and spend a day to collect all the facility-level data, including classroom observations, etc. What would you do if you needed high-frequency data (daily, weekly, or monthly) that is a bit richer than what the facility collects itself for its own administrative purposes, without breaking the bank?

How hard are they working?

Markus Goldstein
I was at a conference a couple of years ago and a senior colleague, one whom I deeply respect, summarized the conversation as: “our labor data are crap.” I think he meant that we have a general problem when looking at labor productivity (for agriculture in this case), both in terms of the heroic recall of days and tasks we are asking survey respondents for, and in that we aren’t doing a good job of measuring effort.

Is it possible to re-interview participants in a survey conducted by someone else?

David McKenzie

I recently received an email from a researcher who was interested in trying to re-interview participants in one of my experiments to test several theories about whether that intervention had impacts on political participation and other political outcomes. I get these requests infrequently, but this is by no means the first. Another example in the last year was someone who had done in-depth qualitative interviews with participants in a different experiment of mine, and then wanted to be able to link their responses on my surveys to their responses on his. I imagine I am not alone in getting such requests, and I don’t think there is a one-size-fits-all response to when this can be possible, so I thought I would set out some thoughts about the issues here, and see if others can also share their thoughts/experiences.

Confidentiality and Informed Consent: typically, when participants are invited to respond to a survey or participate in a study, they are told i) that the purpose of the survey is X, and that it will perhaps involve a baseline survey and several follow-ups; and ii) that all responses they provide will be kept confidential and used for research purposes only. These factors make it hard to then hand over identifying information about respondents to another researcher.
However, I think this can be addressed via the following system:

Dialing for Data: Enterprise Edition

Markus Goldstein
Surveys are expensive. And, in sub-Saharan Africa in particular, a big part of that cost is logistics – fuel, car hire and the like. So, with increasing mobile phone coverage, more folks are thinking about, and actually using, phones in lieu of in-person interviews to complete surveys. The question is: what does that do to data quality?

When bad people do good surveys

Markus Goldstein
So there I was, a graduate student doing my PhD fieldwork. In the rather hot office at the University of Ghana, I was going through questionnaire after questionnaire checking for consistency, missed questions and other dimensions of quality. All of a sudden I saw a pattern: in the time allocation questions, men in one village seemed to be doing the exact same things, for the same amount of time, on two very different days of the week.
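For readers who want to automate this kind of consistency check, here is a purely illustrative sketch in Python/pandas – not anything from the original fieldwork – that flags respondents whose reported time allocation is identical across different survey days. The data frame and its column names (village, respondent_id, day, activity, hours) are hypothetical.

import pandas as pd

def flag_identical_days(df):
    """Return respondents whose (activity, hours) records are the same on every reported day."""
    # Build one sorted (activity, hours) signature per respondent-day.
    signatures = (
        df.sort_values("activity")
          .groupby(["village", "respondent_id", "day"])
          .apply(lambda g: tuple(zip(g["activity"], g["hours"])))
          .rename("signature")
          .reset_index()
    )
    # Suspicious: more than one day reported, but only one distinct pattern.
    counts = signatures.groupby(["village", "respondent_id"]).agg(
        n_days=("day", "nunique"), n_patterns=("signature", "nunique")
    )
    return counts[(counts["n_days"] > 1) & (counts["n_patterns"] == 1)].reset_index()

# Toy example: one respondent reporting the exact same activities and hours on two days.
data = pd.DataFrame({
    "village": ["A"] * 4,
    "respondent_id": [1, 1, 1, 1],
    "day": ["Monday", "Monday", "Saturday", "Saturday"],
    "activity": ["farming", "cooking", "farming", "cooking"],
    "hours": [6, 2, 6, 2],
})
print(flag_identical_days(data))

A record flagged this way is not proof of fabrication, of course, but it tells you which questionnaires deserve a closer look.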
 
