
Do household surveys face a non-response crisis? Not necessarily, according to recent experience in Jordan

In a recently published article, Meyer et al. (2015) argue that household surveys are in crisis, in part because of rising rates of unit and item non-response, which could imply serious biases in inferences based on survey data. While that paper focuses on US census and survey data, these concerns are pervasive across middle-income countries in the developing world. In Jordan, for instance, unit non-response in the 2013-14 round of the Household Income and Expenditure Survey (HIES) was upwards of 35%, on top of significant item non-response. In Lebanon, the 2011 Household Budget Survey (HBS) had an overall non-response rate of 43%, and a rate close to 60% in rich urban clusters. More broadly, there is a concern that non-response will increase dramatically as countries get richer. To quote a May 2018 article in The Economist: “Response rates to surveys are plummeting across the rich world. Last year only around 43% of households contacted by the British government responded to the LFS, down from 70% in 2001. In America the share of households responding to the Current Population Survey (CPS) has fallen from 94% to 85% over the same period. The rest of Europe and Canada have seen similar trends”.

If this is indeed an enduring feature of household surveys, where do we go from here? The continued need to track and monitor key indicators of living standards and welfare raises some key questions: Are these non-response rates a fait accompli, an unavoidable consequence of the higher opportunity costs of survey participation? Or do they point to the need for a different model of survey design and implementation?

Changing the approach to the Jordan household survey and expanding coverage to all residents of Jordan, including Syrian refugees

These were the questions we had to grapple with in Jordan in 2016. Jordan is a particularly appropriate context because it is an upper middle-income country with a tradition of conducting household surveys at regular intervals through an independent statistical authority. At the time, there was also an urgent need for high quality survey data, following a 6-year period with no credible data on welfare and living standards. This same period had been one of tremendous change, with a significant refugee influx, policy debates about its impacts, and swirling questions about what types of programs were needed for refugees and hosts. This urgency created the political space for proposing and implementing widespread changes in the household survey system.

To respond to this challenge, we deployed a multi-pronged approach. We explored changes in sampling and survey design that would significantly reduce respondent burden and expand survey representation to both Jordanian and non-Jordanian nationals. The sampling design previously adopted in Jordan had a quarterly panel structure, which meant that each household in the sample was visited as many as 17 times over the survey year. We apportioned the new sample to be nationally representative in each of the 4 quarters and to yield governorate-level representation at the end of the survey year. This design drastically reduced the number of visits to 4 per sampled household, while maintaining sub-national representativeness. In other words, the sampling strategy was made more efficient. The questionnaire was redesigned to measure salient, policy-relevant data on access to services, labor market indicators, and social protection transfers, and to collect improved data on consumption expenditures. A significant change was the transition from diary to recall methods for measuring consumption expenditures, which was expected to reduce respondent fatigue.

Finally, there was a major overhaul of survey implementation in the field: beyond the standard practice of piloting and training, an integrated approach was used to monitor field progress and data quality concurrently, through a series of high-frequency reports on critical indicators. These reports were used systematically, each week, to identify bottlenecks and poor performance and to implement corrective measures immediately. To improve incentives for good performance in the field, a monthly system of performance-based awards, based on key metrics of effort and quality, was introduced for field enumerators and teams. Expanding surveying hours to include evenings and weekends, when urban households with employed members were more likely to be available, also helped reach more households in clusters that had traditionally had high rates of non-response.

Improved sampling and survey design resulted in increased response rates

This overhaul of survey design and implementation achieved its key objectives. It set a new standard for response rates in a middle-income country context: 96 percent over the survey year. Incentives appeared to work, as response rates grew steadily over the survey quarters, from 93 percent in quarter 1 to 97 percent in quarter 4. These high response rates were achieved across all governorates in Jordan, including in highly urbanized settings such as Amman (95 percent) and in southern governorates (such as Maan and Aqaba) where response rates have traditionally been far lower.

It also yielded high quality data, ready for analysis within 4 weeks of the completion of the whole survey. The purpose of such data is to provide timely evidence to inform policy and strategy. By virtue of its innovations in design and data management, this survey was able to provide key inputs for policy analysis even in intervening quarters, while fieldwork was still under way. Welfare estimates were available within 6 weeks of the completion of fieldwork. The survey has thus set a record for the shortest lag between data collection and analysis: across the developing world, this lag ranges from 6 months in the best cases to 2 or 3 years in many country contexts.

Implementation quality shouldn’t be taken for granted

So, for Jordan at least, the attention paid to data monitoring and implementation paid off! A key lesson we learned is that design features such as the sampling strategy and the questionnaire are important, but a large part of a survey’s success depends on the quality of its implementation. Implementation quality is largely invisible, difficult to verify, and often under-invested in. Key features that facilitated this process in Jordan included careful field piloting of key changes in survey design, and the use of performance-based awards to field enumerators to incentivize both field quality and the systematic use of data management systems. Ultimately, whether to respond to a survey is a decision made by the respondent, and little is known about the full range of reasons for refusals. But the work in Jordan suggests that much can be done to dramatically reduce non-response rates before we worry about their intrinsic causes.

 


Authors

Nethra Palaniswamy

Economist, Poverty Global Practice, World Bank
