Dialing for Data: Enterprise Edition

Surveys are expensive. And, in sub-Saharan Africa in particular, a big part of that cost is logistics – fuel, car-hire and the like. So, with increasing mobile phone coverage, more folks are thinking about, and actually using, phones in lieu of in-person interviews. The question is: what does that do to data quality?

A new paper from Rob Garlick, Kate Orkin, and Simon Quinn gives us some insight into (and raises questions about) what this might mean for data quality. Garlick and co. run an experiment on the mode of data collection for firm surveys – experimenting with both the frequency of interviews and with phone versus in-person administration. These are the first results I've seen that test this explicitly.

Let's start with the set-up. Garlick and co. are surveying microenterprises in Soweto, South Africa. They take a sample of small firms (two employees or fewer) which operate regularly and whose owner has a phone that uses prepaid airtime. This latter criterion is something to keep in mind with respect to the kind of sample they are capturing. While they note that there are no firms which meet their enterprise definition but fail the phone criterion, work by Croke and co-authors shows that, for households in Tanzania, wealth is correlated with participation in a phone-based survey. Croke and co. are working with households, not firms, so we don't know what Garlick and co. would find in other contexts. (And yes, you can always give respondents a phone (and maybe a charger) – as a number of surveys have done – but this will add to your costs, which we'll get to in a bit.)

Garlick and co. then take a sample of 895 firms with which they successfully complete an in-person baseline interview, stratify (on gender, number of employees, sector, and location), and randomize into three groups: 1) weekly phone surveys, 2) weekly in-person surveys, and 3) monthly in-person surveys. The last group they randomly split into four sub-groups, with each sub-group interviewed during a given week, so that the monthly data lines up evenly against the weekly data. In each of the three arms, interviewers are explicitly instructed to make only three interview attempts, in order to keep the effort level comparable. Respondents get about $1 in airtime for completing the baseline and every time they complete four follow-up surveys. Finally, after 12 weeks, Garlick and co. do an in-person endline with everyone, which lets them look at whether the mode of interview may have changed how people respond.
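
For readers who want to see the mechanics, here is a minimal sketch of this kind of stratified assignment in Python. The column names, strata values, and seed are all hypothetical – the paper does not publish its randomization code – but dealing firms across arms in rotation within each stratum is a standard way to implement the design described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2013)  # fixed seed for a reproducible assignment

# Hypothetical baseline sample: one row per firm, with the four stratifiers
# the paper uses (gender, number of employees, sector, location).
firms = pd.DataFrame({
    "firm_id": np.arange(895),
    "gender": rng.choice(["female", "male"], size=895),
    "employees": rng.choice([0, 1, 2], size=895),
    "sector": rng.choice(["retail", "services", "food"], size=895),
    "location": rng.choice(["zone_1", "zone_2", "zone_3"], size=895),
})

ARMS = ["phone_weekly", "in_person_weekly", "in_person_monthly"]

def assign(stratum: pd.DataFrame) -> pd.DataFrame:
    """Shuffle firms within a stratum, then deal them across the three arms
    in rotation, so each arm gets a near-equal share of every stratum."""
    out = stratum.sample(frac=1, random_state=rng).copy()
    out["arm"] = [ARMS[i % len(ARMS)] for i in range(len(out))]
    return out

assigned = (
    firms
    .groupby(["gender", "employees", "sector", "location"], group_keys=False)
    .apply(assign)
)

# Split the monthly arm into four rotating sub-groups, one interviewed in each
# week of the month, so monthly reports line up against the weekly calendar.
is_monthly = assigned["arm"] == "in_person_monthly"
assigned.loc[is_monthly, "subgroup_week"] = rng.permutation(is_monthly.sum()) % 4 + 1
```

Dealing in rotation within strata (rather than flipping an independent coin for each firm) keeps the arms balanced on the stratifiers even when individual strata are small.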

What do they find? Overall, their team completes 4,070 out of 8,058 attempted surveys – a completion rate of about 51 percent. To someone who does only in-person surveys, this seems really low. And, to be sure, there is a difference between missing an interview in a low-frequency, long-duration panel – which usually means the respondent is not coming back (and hence attriting) – and this case, where there is a chance of getting them back next week. Garlick and co. cite numbers from other studies that are in their ballpark for missed interviews (including Croke and co.), but there are others which get higher response rates. Garlick and co. argue that higher response rates are often a result of tighter sample selection, and they caution: “these differences highlight a potential trade-off between selected samples with low attrition versus representative samples with high attrition.” True for almost all surveys, but it looks like it may be more binding here (without using compensatory measures).

Who are the folks missing their interviews? To start with, individuals in the weekly arms are significantly more likely to miss an interview than those in the monthly arm. But when looking at the specific reasons for missing an interview, it's not frequency but rather medium that matters – those contacted by phone are more likely to miss an interview because they are too busy or because the contact information is wrong.

Now, the point of this exercise is to see if this is a good way to measure potentially volatile firm indicators such as sales, stocks, and inventories. Garlick and co. unpack the differences in three ways: 1) comparing CDFs, 2) mean regressions, and 3) testing standard deviations across the survey modes/frequencies. Using the CDFs, they don't find big differences across the modes/frequencies for measures of profits, sales, costs, and assets. There is a difference in stocks/inventories (with the weekly phone folks reporting more than their in-person counterparts), but this goes away with trimming. Medium does seem to matter for reports of the hours the business is open and the money taken out of the enterprise, with phone folks reporting fewer hours open and taking less out. These results are largely similar when they look at the regression results.
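
To make those three comparisons concrete, here is a rough sketch using standard off-the-shelf tests – a two-sample Kolmogorov-Smirnov test standing in for the CDF comparison, an OLS regression on arm dummies for the means, and Levene's test for dispersion. The data, column names, and test choices are my own illustration; the paper's exact procedures (including the trimming) may differ.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical panel: one row per completed interview, with the arm label
# and one reported outcome (say, weekly profits). All values are simulated.
rng = np.random.default_rng(7)
panel = pd.DataFrame({
    "arm": rng.choice(
        ["phone_weekly", "in_person_weekly", "in_person_monthly"], size=3000
    ),
    "profits": rng.lognormal(mean=6.0, sigma=1.0, size=3000),
})

phone = panel.loc[panel["arm"] == "phone_weekly", "profits"]
visit = panel.loc[panel["arm"] == "in_person_weekly", "profits"]

# 1) CDF comparison: the two-sample Kolmogorov-Smirnov test asks whether the
#    whole reported distribution differs by mode, not just its mean.
ks = stats.ks_2samp(phone, visit)

# 2) Mean regression: regressing the outcome on arm dummies gives each arm's
#    difference in means relative to the omitted (monthly in-person) arm.
ols = smf.ols(
    "profits ~ C(arm, Treatment('in_person_monthly'))", data=panel
).fit()

# 3) Dispersion: Levene's test compares variances across all three arms.
lev = stats.levene(
    *[g["profits"].to_numpy() for _, g in panel.groupby("arm")]
)

print(f"KS p-value: {ks.pvalue:.3f}   Levene p-value: {lev.pvalue:.3f}")
print(ols.params)
```

The virtue of looking at all three is that survey mode could leave means untouched while shifting the tails or the spread of what people report – which is exactly the pattern the dispersion results below pick up.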

Turning to the frequency of interviews, one interesting result from the CDFs is that, in both of the weekly arms, reported sales minus reported costs line up better with reported aggregate profits than in the monthly arm. In the regression results, it's the weekly in-person interviews that give a statistically different result from the monthly reports.

In terms of dispersion, all three methods surprisingly yield similar levels for sales and for aggregate reports of profits. However, the phone interviews yield more dispersion in the number of employees and in costs.

Garlick and co. then ask: can the interview mode or frequency change the behavior of the enterprise owners? To look at this, they use the in-person endline answers. The short answer, for these key microenterprise outcomes, is not really. There is a bit of action on keeping books, with phone respondents reporting that they are more likely to do this – reversing the pattern from when they were being interviewed by phone. But other than that, not much is going on.

So what does it cost? Remember, they are working in South Africa, and they have a short survey instrument (about 15 minutes). A successfully completed phone interview costs $4.76, a weekly in-person interview $6.12, and a monthly in-person interview $7.30. A couple of notes: the monthly in-person interviews cost more because the weekly schedule allows more flexibility, letting enumerators save travel costs and time. Indeed, travel costs are the big savings in the phone versus in-person comparison, even though Garlick and co. gave the phone enumerators a subsidy to get to the office. And Garlick and co. gave all enumerators the same daily rate and per diem for field-based work and for the phone bank, to equalize incentives. In a non-experimental setting, it may very well be cheaper to pay folks less to work in a phone bank than to trudge around the city looking for respondents.
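
A quick back-of-the-envelope helps put those per-interview costs in per-firm terms. The unit costs below are the paper's; the round counts (12 weekly rounds versus 3 monthly rounds over the 12-week study) are my own reading of the design, and the calculation ignores missed interviews, since the costs are per completed interview.

```python
# Per-completed-interview costs reported in the paper (USD).
COST = {
    "phone_weekly": 4.76,
    "in_person_weekly": 6.12,
    "in_person_monthly": 7.30,
}

# Assumed completed rounds per firm over the 12-week study: weekly arms are
# interviewed 12 times, the monthly arm 3 times (once every four weeks).
ROUNDS = {"phone_weekly": 12, "in_person_weekly": 12, "in_person_monthly": 3}

for arm in COST:
    total = COST[arm] * ROUNDS[arm]
    print(f"{arm:>18}: ${COST[arm]:.2f} x {ROUNDS[arm]:2d} rounds = ${total:6.2f} per firm")
```

On those assumptions, a weekly phone panel runs to roughly $57 per firm against $73 for weekly in-person visits – a saving of a bit over 20 percent per interview, which compounds with every additional round.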

So the phone surveys are substantially cheaper – is that the way to go? The answer, given what we have now, depends on what you are after. This was an admirable experiment – carefully designed to allow us to compare the three modes/frequencies. But there are lots of folks missing in a given round (including the endline, where they get 59 to 73 percent responding). And this was only a 12-week experiment. With those caveats in mind, we need to think about what kind of things this might be good for, particularly when book-ended by in-person surveys with more effort (and success) in respondent retention. One area where this might work well is understanding and characterizing the process of adjusting to a shock or intervention (as in Karlan et al.), or understanding things like labor movements. And, of course, this is early work in a young field – so further experiments on call-backs (here they're limited to three), on incentives for retention (for both respondents and enumerators), and on other dimensions of effort are needed.
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
