Published on Development Impact

Mobile Phone Surveys for Understanding COVID-19 Impacts: Part II Response, Quality, and Questions


Yesterday’s post discussed the different reasons for doing a phone survey in response to COVID-19, how to get a sample for a mobile phone survey, and the three modes of conducting these surveys (CATI, IVR, and SMS). Today we focus on a range of practical issues and steps you can take to improve the quality of the data you obtain from these surveys.

What are the expected response rates for mobile phone surveys?

Response rates vary widely depending on country context, whether you have a pre-existing relationship with the respondents (e.g. a phone follow-up to a face-to-face baseline versus a cold call), the quality of mobile phone coverage, the mode being used, etc. A key issue in defining response rates is knowing what the denominator should be: in impact evaluation follow-ups or samples with an existing baseline, this can be either the full sample or the sample that provided phone numbers at baseline; in contrast, with random digit dialing (RDD) or lists obtained from other organizations, it can sometimes be difficult to tell whether a number is valid or not (a short sketch after the examples below illustrates how much this choice matters). That said, here are some examples of phone survey response rates as a starting point for your planning:

Response rates during the Ebola crisis:

·       The World Bank Ebola Survey in Liberia had a response rate of 46% over five rounds of data collection among households that provided a phone number in the sample frame, and 26% of the overall sample frame. The World Bank Ebola panel surveys in Sierra Leone conducted three rounds of surveys with a sample first interviewed in a labor force survey. 66% of that sample had cellphones; of those with cellphones, 51% were surveyed in all three phone rounds and 79% were reached in at least one phone round.

·       Another Ebola survey in Liberia using random digit dialing (RDD) and interactive voice response (IVR) reports a response rate of 51.9%.

·       A face-to-face baseline survey in Liberia followed by two rounds of cellphone follow-up interviews three and six months later found that attrition at three months was 22% (170/774), driven by non-ownership of a phone at baseline (40/170) and inactive numbers (124/170), with only 6 refusals; attrition reached 34% at six months.

The main drivers of non-response here seem to be whether people still have valid numbers and whether they will answer the phone in the first place, with very little of the attrition coming from refusals conditional on answering. This finding is important for thinking about ways to improve response rates, since some of the methods we discuss below depend on being able to make initial contact with the person.

Response rates in non-crisis mobile phone surveys

·       The World Bank has fielded a number of mobile phone panel surveys under the "Listening to" banner, including Listening to Africa, Listening to LAC, and Listening to Tajikistan. These surveys typically start with a face-to-face baseline (in some cases giving cellphones to those without them), followed by frequent mobile phone follow-ups. In Tajikistan, 25% of households refused to join the phone panel, but conditional on joining, attrition rates were only 1 to 7% across 24 waves (each about 10 days apart). Attrition rates were somewhat higher in some African countries (in South Sudan, 31% in the first follow-up round and 49% by the fourth wave) and much higher in Latin America (in Peru, 67% in the first follow-up survey, rising by 1 to 3 percentage points each wave to reach 75% by the sixth wave; in Honduras, 41% in the first follow-up and 50% by the end).

·       The Peru Listening to LAC pilot randomized the mode and found that first-round attrition was highest for IVR (80%), followed by SMS (70%), compared to 49% for CATI. Item non-response was also higher for IVR and SMS (more don’t-knows and refusals). Over the course of the panel the attrition gap narrowed, as attrition increased in the CATI group. In Honduras, attrition was 60% for IVR, 55% for SMS, and 12% for CATI. While the response rate is a poor indicator of a survey’s representativeness, these are big differences in response rates by mode.

·       Comparative work on SMS surveys in Africa found response rates of 12% in Kenya and less than 1% in Ghana and Nigeria.

·       A microenterprise panel survey in South Africa obtains response rates of about 50% in each round, conditional on participation in the baseline wave.
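
To make the denominator point above concrete, here is a minimal sketch in Python, using entirely hypothetical counts, of how the same fieldwork outcome yields quite different response rates depending on which denominator is chosen:

```python
# Hypothetical counts for a phone follow-up of a face-to-face baseline;
# substitute your own sample frame figures.
baseline_households = 2000      # full baseline sample frame
with_phone_number = 1500        # households that provided a working number at baseline
completed_interviews = 780      # completed phone interviews

print(f"Rate vs. full sample frame:        {completed_interviews / baseline_households:.0%}")  # 39%
print(f"Rate vs. households with a number: {completed_interviews / with_phone_number:.0%}")    # 52%
```

Whichever denominator you choose, report it explicitly so readers can compare rates across surveys.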

What can be done to increase response rates?

There are three main sources of non-response: invalid or disconnected telephone numbers, non-contact, and refusal. There are several ways to reduce non-contact and refusals to improve the representativeness of the data, including:

·       Vary contact protocols: Call multiple times and at different times of the day. Recent work with a CATI survey in Turkey found that 7 attempts balanced cost and quality, but this finding is likely to be highly context specific. In the Sierra Leone Ebola surveys, respondents were called up to 25 times, but this approach is probably not replicable in other contexts. In her talk on an impact evaluation follow-up phone survey in Kenya, Tavneet Suri notes that their protocol is for 9 attempts to be made – 3 per day at different times, then a day off, then repeat (a short sketch of such a call schedule follows this list).

·       Send a prenotification SMS? An Australian CATI study found 5 percentage point higher response rates for cases that received SMS messages. IVR surveys in three African countries found completion rates were 1 to 2 percentage points higher when SMS messages were sent, although this difference was only statistically significant in one country.

·       Provide incentives? Several experiments have tested the impact of financial incentives (typically around $1 of phone credit). The impacts on survey response have typically not been large: in Peru, first-round response was 32% with no incentive and 34% with either $1 or $5 in incentives, although incentives did somewhat more to keep people in the panel, with a 7 percentage point difference in attrition after 6 waves; in Afghanistan and Ethiopia, no significant impact of airtime incentives was found, and there was a 1-4 percentage point increase in Mozambique that was only significant when a raffle was used. If incentives are not large enough, they could also backfire by crowding out pro-social motivations: this might explain the finding in South Sudan that incentives lowered response rates by 6 percentage points. Incentives may also affect data quality.

·       Use multiple rounds of data collection: The Listening to Tajikistan project had a 25% non-response rate in the first round, but never more than 7% additional attrition in later rounds of data collection. In addition, interviewing the same respondents over time boosts the power of the data to detect trends, which is generally the main objective in crisis response surveys.

·       Carefully craft introductions: Write introductions concisely, focusing on critical points. Experience in Zambia showed that removing even one short sentence from the introduction to an SMS survey increased response rates.

·       Offer multiple languages: In multi-lingual countries, offer an initial greeting in a lingua franca and then allow the respondent to select their preferred language. Keep the initial greeting short. Experiment with different formats for the greeting (see this example from an IVR survey in Ghana).

·       Be culturally conscious in designing the methodology: The call center used for the mobile phone surveys conducted in Liberia during Ebola was based in Nebraska in the United States. Following standard American conventions, interviewers let the phone ring for 10-15 seconds before disconnecting in the first round. In the second round, the protocol was changed to let the phone ring for up to 60 seconds, consistent with Liberian norms. During that round, the median time to answer was 23 seconds, and 10% of calls were answered after 36 seconds.

·       Mix modes (CATI, IVR, SMS): Offering respondents more ways to complete the survey can increase response rates and improve representativeness. A tool like Surveda can manage mixed-mode surveys across SMS, IVR, and the web.

·       Use tracing: In situations where you have an existing baseline survey, asking other respondents in the same village, village elders, teachers, or other contacts for whom you have information may help you obtain new phone numbers for people whose phones no longer work.
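
Picking up the contact protocol point above, here is a minimal sketch of a call-attempt schedule along the lines of the Kenya protocol (3 attempts per day at different times, a day off, then repeat, up to 9 attempts). The specific call windows and the exact day pattern are assumptions for illustration, not part of any cited protocol:

```python
from datetime import date, timedelta

# Hypothetical call windows; adjust to local norms and respondent availability.
CALL_WINDOWS = ["morning", "afternoon", "evening"]

def call_schedule(start: date, max_attempts: int = 9):
    """Generate (date, window) pairs: 3 attempts per calling day, then a rest day."""
    schedule, day = [], start
    while len(schedule) < max_attempts:
        for window in CALL_WINDOWS:
            if len(schedule) == max_attempts:
                break
            schedule.append((day, window))
        day += timedelta(days=2)  # one calling day, then one day off
    return schedule

for when, window in call_schedule(date(2020, 4, 20)):
    print(when.isoformat(), window)
```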

Questionnaire design

Some people approach CATI, SMS, and IVR questionnaires like short versions of regular household surveys. They are not. The type and complexity of the questions affect the quality of the data and the likelihood that the respondent will answer again in the next round.

Ask short, concise questions. Roster-style questions that ask about each household member can be confusing. Also avoid very detailed questions or questions with long lists of answer choices. Consider a modular questionnaire design, in which a long questionnaire is broken into smaller chunks sent to different sets of respondents (or to the same respondent in different rounds).
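
As one illustration of a modular design, here is a minimal sketch that gives each respondent a short core block plus a rotating subset of modules in each round. The module names and the two-modules-per-round choice are invented for illustration:

```python
import random

# Hypothetical modules; a real survey would map these to its own question blocks.
MODULES = ["income", "food_security", "health", "education", "coping"]
CORE = ["covid_symptoms"]        # a short core block asked of everyone, every round
MODULES_PER_ROUND = 2

def assign_modules(respondent_ids, round_number, seed=42):
    """Assign each respondent the core block plus a rotating subset of modules this round."""
    rng = random.Random(seed + round_number)   # reproducible, but rotates across rounds
    return {rid: CORE + rng.sample(MODULES, MODULES_PER_ROUND) for rid in respondent_ids}

print(assign_modules(["hh_001", "hh_002"], round_number=1))
```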

Questionnaire design for IVR and SMS requires special consideration. Many respondents will never have participated in an automated voice survey, so minimize the cognitive burden on respondents. SMS presents additional challenges because each question is typically limited to 160 characters, although more advanced SMS software allows multiple SMS messages per question. Experiments show the need to randomize the order of response options and to avoid “select all that apply” questions.
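
As a minimal sketch of two of these points for an SMS questionnaire (randomizing the order of response options and enforcing the 160-character limit), with invented question wording:

```python
import random

SMS_LIMIT = 160  # characters available in a single basic SMS message

def format_sms_question(stem, options, rng):
    """Shuffle response options and assemble a numbered SMS question, enforcing the length limit."""
    opts = options[:]                      # copy so the caller's list is untouched
    rng.shuffle(opts)
    text = stem + " " + " ".join(f"{i}={o}" for i, o in enumerate(opts, start=1))
    if len(text) > SMS_LIMIT:
        raise ValueError(f"Question is {len(text)} characters, over the {SMS_LIMIT}-character limit.")
    return text

rng = random.Random(7)
print(format_sms_question(
    "Main reason your household income fell last week? Reply with one number.",
    ["job loss", "fewer customers", "illness", "other"],
    rng,
))
```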

Monitoring the quality of responses

A key question is whether the information you receive over the phone will be reliable. There are two issues here: interviewer error or lack of effort (especially when interviews are conducted from home rather than in a call center with closer monitoring), and respondent reluctance to answer certain types of questions by phone. A few suggestions for collecting accurate responses:

·       For CATI surveys, audio monitoring can record at least one side of the interview, allowing supervisors to check quality remotely. But in some cases respondents may be very reluctant to have their responses recorded (just as many people prefer verbal consent to written consent, since they do not want to sign anything).

·       A nice suggestion from Tavneet Suri is to ask a couple of questions to which you already know the answers from the baseline or other information, which can then serve as a quality check. In an ongoing COVID-19 survey of high school students in Ecuador, for example, David and his co-authors re-ask the month and place of birth, which can be matched to baseline data to check that enumerators are reaching the right people (a short sketch of this check follows this list).

·       In some contexts, respondents may perceive a real or imagined benefit to answering questions in a certain way. For example, in the World Bank Liberia Ebola surveys, the final round asked whether respondents had been receiving the phone credit incentives, and nearly all respondents said no. The project team, though, had added a number of local team members’ phone numbers to the list sent to the telecom company for incentive payments, confirming that the credit had in fact been delivered. Other ways to identify false reporting include adding a fake program name when asking whether people have received COVID-19 help from different government programs, or, to see whether students are over-claiming use of educational resources, asking about one or two made-up websites or programs.
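
Returning to the suggestion of re-asking questions with known answers, here is a minimal sketch of such a consistency check, comparing re-asked month and place of birth against baseline records. The IDs, field names, and values are invented for illustration:

```python
# Hypothetical baseline records and phone-survey answers, keyed by respondent ID.
baseline = {
    "st_101": {"birth_month": "March", "birth_place": "Quito"},
    "st_102": {"birth_month": "July",  "birth_place": "Guayaquil"},
}
phone_survey = {
    "st_101": {"birth_month": "March", "birth_place": "Quito"},
    "st_102": {"birth_month": "May",   "birth_place": "Guayaquil"},
}

def flag_mismatches(baseline, phone_survey, fields=("birth_month", "birth_place")):
    """Return IDs of respondents whose re-asked answers do not match the baseline."""
    return [
        rid for rid, answers in phone_survey.items()
        if any(answers.get(f) != baseline.get(rid, {}).get(f) for f in fields)
    ]

print(flag_mismatches(baseline, phone_survey))   # ['st_102'] -> flag for review or recontact
```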

We hope these two posts provide some useful experience to help ongoing response efforts, and we welcome comments below with any advice others have, as well as questions we have not addressed that could be part of a possible third post.


Authors

Kristen Himelein

Senior Economist / Statistician

Charles Lau

Survey Methodologist

David McKenzie

Lead Economist, Development Research Group, World Bank
