Published on Development Impact

Weekly links May 3: reducing survey non-response or perhaps it is ok, s-values, an award for incentives, and more...

  • On the data blog, Nethra Palaniswamy and Tara Vishwanath ask whether survey response rates are doomed to fall as countries get richer – and explain, based on their work in Jordan, how this need not be the case if surveys adapt. They cut the non-response rate in Jordan’s household income and expenditure survey from around 43% in 2011 to only 5% in the 2017/18 round.
  • On the other hand, in the Evaluation Review, an article by Hendra and Hill (2018) provocatively argues that there can be little relationship between survey response rates and nonresponse bias (in the U.S.). Using a large survey of 12,000 respondents that had multiple stages of tracking, they simulate what results and treatment-versus-control balance would look like at response rates ranging from only 40% all the way up to their actual 80%+, and find only small changes. They conclude “Lower response rate targets may yield results that are as valid, more timely, and less expensive... Accepting lower response rates would also reduce the burden on research subjects who are often subjected to multiple phone calls and invasive home visits”. So next time I struggle to get high response rates, I can just claim that this is a strategic choice? Some interesting bits I learned from this:
    • An 80% survey response rate is the standard for federally funded public policy research, set by the U.S. Office of Management and Budget (OMB). Also, “A survey of editors at journals in the social and health sciences found that an 80% response rate is a de facto standard”.
    • Reaching this standard is expensive, with survey costs reaching or exceeding US$1,000 per completed survey in the U.S.!
  • Paul Johnson has a nice summary of some of the great work of Oriana Bandiera and Imran Rasul, who were both recently awarded the Yrjö Jahnsson Award for the best European economist under 45.
  • Paul Hünermund on why you shouldn’t pay too much attention to the coefficients on control variables in regressions when estimating causal effects. Basically, the controls are probably correlated with other determinants of the outcome that we aren’t conditioning on, so they serve mainly to soak up variation in the outcome, and their coefficients represent a weighted mix of multiple causal channels rather than the controls’ own effects (see the short simulation sketch after this list).
  • Perhaps s-values are easier to interpret than p-values: “Let’s say our study gives us a P-value of 0.005, which would indicate to many very low compatibility between the test model and the observed data; this would yield an s value of –log2(0.005) = 7.6 bits of information. k which is the closest integer to s would be 8. Thus, the data which yield a P-value of 0.005 are no more surprising than getting all heads on 8 fair coin tosses.” (h/t Cyrus Samii). A quick worked version of this calculation also appears after the list.
  • Over at the CGD blog, Dave Evans summarizes two recent papers showing that readability boosts citations in economics papers, and speculates about why the pattern might differ in the hard sciences.
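To make the control-variables point concrete, here is a minimal simulated sketch (the data-generating process, variable names, and coefficients are all hypothetical, not from Hünermund’s post): the control x is correlated with an unobserved determinant u of the outcome, so its estimated coefficient mixes its own effect with u’s, while the treatment coefficient is unaffected.

```python
import numpy as np

# Hypothetical data-generating process: the outcome y depends on a
# treatment d, a control x, and an unobserved factor u that is
# correlated with x but not with d.
rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(size=n)                # unobserved determinant of y
x = 0.8 * u + rng.normal(size=n)      # control variable, correlated with u
d = rng.normal(size=n)                # "treatment", independent of x and u
y = 1.0 * d + 0.5 * x + 1.0 * u + rng.normal(size=n)

# OLS of y on (constant, d, x), with u omitted from the regression
X = np.column_stack([np.ones(n), d, x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"coef on d: {b[1]:.2f} (true causal effect: 1.0)")
print(f"coef on x: {b[2]:.2f} (true causal effect: 0.5)")
# The treatment coefficient comes out fine, but the coefficient on x is
# roughly 0.5 + Cov(x, u)/Var(x) = 0.5 + 0.8/1.64 ≈ 0.99: x soaks up
# u's effect on the outcome as well as its own.
```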
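And here is the s-value arithmetic from the quote, s = –log2(p), as a couple of lines of Python (the function name is mine):

```python
import math

# Surprisal (s-value): s = -log2(p) bits of information against the test
# model; a p-value is then "as surprising as" getting all heads in
# round(s) tosses of a fair coin.
def s_value(p):
    return -math.log2(p)

for p in (0.05, 0.01, 0.005):
    s = s_value(p)
    print(f"p = {p}: s = {s:.1f} bits ≈ all heads in {round(s)} fair coin tosses")
# p = 0.005 gives s = 7.6 bits, i.e. about as surprising as 8 straight heads.
```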

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
