Published on Development Impact

Some advice from survey implementers: Part 1

I have often wondered what the folks who do the surveys I use in my research think of how it is to work with me.   Since I wasn’t sure I had the courage to hear that straight to my face, I wrote to a number of survey folks I knew (and thought highly of) or that other people recommended.   I asked them what they would tell researchers in general.  
 
I was amazed at how much they had to say.   So this is going to be a 2 part post, with the other half coming in 2 weeks.   Here is part 1. Before we get to the advice, I want to really thank the folks at EDI and IPA, as well as James Mewera of the Invest in Knowledge Initiative, Ben Watkins at Kimetrica, and Firman Witoelar at SurveyMeter, who took the time to send me really careful thoughts and then answer my queries.   Don’t take anything below as something specific any one of them said – I’ve edited, adjusted and merged.   Blame me if you don’t like it.  One final note: as you can see from the list, not every one of these is a commercial firm, and some of them do research as well – keep that in mind when filtering the advice. I’ll abbreviate them all as SO, for survey organization.
 
Please read this post as me channeling and interpreting their voices.  I am not sure I agree with everything I heard, but I am passing it on.   And all of it gave me food for thought.   Stuff in [italics] is me explicitly responding to a couple of points.    
 
First principle
The first thing that struck me is that some of these folks are scarred by interactions with certain researchers.  One of them even used the phrase “hostile work environment.”  So, the first piece of advice, especially for some of us, is be nice(r). 
 
Writing proposals
I’ve been there.  You are trying to make a proposal fit within a specified envelope and you just lop off some costs to make it fit.   As one SO put it, researchers really have present bias in their proposals.   Their suggestion: do a reality check with your survey partner before sending the proposal in.    They’ll help you face the music.  
 
Choosing a SO
  • Make sure they’re obeying the laws, especially the employment laws.
  • Ask the SO about attrition rates in past studies.  If they seem high to you (or higher than you’d want in your study), ask why.   And/or ask them for an attrition estimate for your study.   
  • Does the SO you are going to work with have the resources to support your project?  Take a hard look.    The SOs used both car and fruit analogies to describe this – so go kick the tires on your fruit truck.  
  • The cheapest survey firm may not be the best bang for your research buck.   Among other things, take a hard look at the studies they have been involved in.   Ben Watkins put it more poetically: “The PI / Survey Service provider relationship is a classic example of gaming under information asymmetry and risk. In the vocabulary of Akerlof's seminal article of 1970, if a PI evaluates and selects survey services based only on price, she may expect a peach, but will almost certainly get a lemon. To avoid this risk, the PI needs to look under the hood and understand the time, cost and data quality trade-offs that determine whether the time and resources are adequate to yield high quality data. In the long-run, a cooperative “one team” solution (which reduces the information asymmetry and shares the risk), will be in the best interests of the PI and of the survey firm.”
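The attrition point above has a direct budgeting implication: if you expect to lose some share of respondents between rounds, you need a correspondingly larger baseline sample. Here is a minimal back-of-the-envelope sketch (my own illustration, not something the SOs provided; the function name and numbers are made up):

```python
import math

def baseline_sample_needed(target_endline_n, expected_attrition):
    """Baseline sample needed so that, after losing a fraction
    `expected_attrition` of respondents, roughly `target_endline_n`
    remain at endline.  A rough check, not a power calculation."""
    if not 0 <= expected_attrition < 1:
        raise ValueError("attrition must be a fraction in [0, 1)")
    return math.ceil(target_endline_n / (1 - expected_attrition))

# e.g. you want 1,500 completed endline interviews and the SO's
# past studies suggest ~12% attrition:
print(baseline_sample_needed(1500, 0.12))  # 1705
```

The same arithmetic works in reverse: given a fixed baseline budget, it tells you how much analysis sample an optimistic attrition assumption is quietly costing you.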

Budget issues
This came up from just about everyone.   I’d love it if there weren’t budget constraints in this world.   But there are, and here are a couple of points on how to potentially make things go smoother:
  • Build some slack into the budget.  Something will go wrong and/or cost more than anticipated.   Either put in a specific contingency if you can swing it, or else add in extra RA time or be conservative on survey length.    The specific form will depend on the nature of the contract, but some breathing room will reduce tension.  
  • Survey implementers actually do have some real overheads.  If they’re legit, they do have to get work permits and things like social security for their staff.   And there is the roof over their heads.  And security. So, don’t bellyache…too much.
  • If you are doing field experiments, expect delays.   Add two months to whatever conservative estimate you start with.  
  • Make sure you budget for tracking.  This may be a case where your marginal dollar can get you a lot of additional power.  
  • There is no one-size-fits-all when budgeting a survey.   Lots of things matter, such as the distance between clusters, how quickly the interviews go, how many it’s possible to get into a day, whether people will be around or all out planting their crops, and the like.   [A bad sign for me is when the SO asks how many pages the questionnaire is before giving an estimate. A) because this doesn’t make sense with CAPI, and B) because in my experience it doesn’t correlate well with length]
  • Keep in mind that there is inflation everywhere, and what you paid enumerators as a graduate student is no longer a living wage. 
  • Speaking of graduate students, make sure you understand efficiency wages before you try and cut enumerator salaries.  [I owe Chris Udry a debt for teaching me this lesson when I was a graduate student and thought we really could pay less.   He was right.]
  • Don’t skimp on travel for enumerators.  If you don’t have vehicles dedicated to delivering them to the survey site, the incentives to skip an actual field visit increase. 

Building the sample
It’s a rare survey where you walk into a situation with a full census of the area where you will do the survey AND have access to it.  With that in mind, the SOs have this advice:
  • Find out what other researchers or surveys have used as primary sampling units (PSUs).  
  • Consult with the survey team on the question of PSUs very early on, or be very specific in the contract or terms of reference.
  • Be nice to those who come after you by writing down how you sampled somewhere accessible (e.g. a working paper version).
  • Make sure the SO is explicit on what is realistic (in terms of time and money) for a listing exercise. 
  • When it comes time to actually go out and list people, think about your criteria and share them with the survey firm [my favorite example: people who used to grow certain types of crops but have stopped].   One way to share them early is to put it into the survey firm’s terms of reference.
  • If a complete community listing or finding red-haired metal workers who migrated from Tonga is going to be too expensive, work with your SO to find alternative options (e.g. snowball samples, informant-based listings).   Draw on their experience.  And don’t forget to ask how much each will cost. 
  • If you are going to want to know specific details about the listing exercise, do it during the survey.  Don’t wait for months after the survey has ended. 

Preparing for the survey
  • Pilot the survey!    And allow enough resources (time and cash) to do it well. A number of SOs made this point.  As one pointed out “an extra two weeks of refinement has large returns to quality.”   And keep track of how long each question takes so you can start to get a handle on how many interviews can be done per day.  
  • Spend time training the staff.   This includes making sure that the field staff understand the research design.   [I’ve done surveys where I wanted the enumerators to be blind to the research design due to the sensitive nature of the topic and, in other cases, where I didn’t want them to know or guess people’s treatment status.  That being said, their managers knew.  My take is you may want to adjust the level of this based on your design, but you for sure want someone on the ground to have an idea of the big picture. As one SO pointed out to me:  don’t discount the payoff in terms of helping you see things in the design and in the responses that you might miss if they don’t know why they are asking stuff.]
  • Speaking of which, make sure the management of the survey understand the nuances in your research design as well as what you are trying to get at.  They can be a helpful set of extra eyes and ears. 
  • Another point on time allocated for training.   Remember that field teams need to be trained on every single question of the questionnaires to ensure quality.   (And you can start thinking about that when you are writing the questions). 
  • In this vein: think about every single question you are including and why it is necessary for what you are trying to do with your research. 
  • While we are at it: Keep it short.   As Ben Watkins points out, research indicates that the average adult attention span is under 30 minutes.   So it might not matter how good your SO is.  [If you are bored already, see if the BBC agrees with Ben]
  • Be open to the advice of the SO on your survey design.   This was mentioned multiple times.   As one SO puts it, “it goes without saying that the local person knows their context better than any external person.”  Another SO points out that less experienced researchers use the “but this has worked in other countries” argument too early and too often, only to get the response “well, it has never worked here” from the SO.   Explain what the point of your questions is.  Maybe they can help.   And it might help to show them where in the literature your questions are coming from.  
  • Of course, when enumerators get excited about the research and start adding questions to the survey, keep in mind that they aren’t thinking about the budget.    That’s up to you and the SO management to watch. 
  • Too many researchers with different research questions can make it hard to cut the questionnaire.  Do you really want the SO to be the referee?
  • Don’t forget to include the time needed to find respondents when forming your expectations of how many surveys can be done per day.
  • Using your preferred CAPI software may generate friction with the SO if they have another preference.   Think about being flexible, but also push the SO to list the pluses and minuses of their favorite software.   Don’t forget to ask about proprietary rights (e.g. if the SO built the software themselves).  
  • Don’t tinker at the last minute.   If you change stuff after the CAPI interface is built, you are not only incurring additional costs, but adding a risk that something could go wrong.   And the SO will have to add extra training for the enumerators.  
  • Ben Watkins makes an interesting point on investing researcher time in building data validation checks:  “We find that too little time is invested in validation and skips. Some PIs still see the CAPI work as beneath their pay-grade and leave too much room for routine data capture errors in the field. PIs should work with the service provider to ensure that the scope for error is minimized in every module. Tamper-proof verification (e.g. through video and photographic cross validation) should be added where appropriate. Rigorous CAPI testing should be done for all possible use-cases by joint client-service provider teams. Ex post data cleaning should be avoided whenever possible: it is notoriously subjective and is poor research practice.” 
  • Avoid open ended questions on large scale surveys.   It’s not clear the enumerators are going to have the skill level to do them justice.   
  • Don’t leave the IRB until the last minute.   And when you do go, don’t make things super difficult, since your SO is going to have to deal with the IRB again, soon.   Also, you may want to make the timeline contingent on IRB approval from the get go.   You don’t want to have idle enumerators sitting around (being paid) while you all wait for that letter.  
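Two of the points above – timing each question during the pilot, and counting the time needed to find respondents – feed directly into the interviews-per-day estimate that drives the budget. A minimal sketch of that arithmetic (my own illustration; all parameter names and numbers are invented for the example):

```python
def interviews_per_day(question_times_min, finding_time_min,
                       workday_min=480, travel_overhead_min=60):
    """Rough interviews per enumerator-day from piloted question timings.

    question_times_min: per-question minutes recorded during the pilot.
    finding_time_min:   average minutes to locate each respondent.
    workday_min:        length of the working day in minutes.
    travel_overhead_min: daily travel to and from the field site.
    """
    per_interview = sum(question_times_min) + finding_time_min
    usable = workday_min - travel_overhead_min
    return usable // per_interview  # whole interviews only

# e.g. 40 questions averaging 1.5 minutes each, plus 20 minutes
# to track down each respondent:
print(interviews_per_day([1.5] * 40, finding_time_min=20))  # 5.0
```

The point is less the formula than the inputs: without piloted per-question timings and a realistic respondent-finding estimate from the SO, any interviews-per-day number in the budget is a guess.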

That’s it for part 1.    Stay tuned for part 2 where the survey hits the field.
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist’s Office
