Weekly links January 18: an example of the problem of ex-post power calcs, new tools for measuring behavior change, planning your surveys better, and more...
 The Science of Behavior Change Repository offers measures of stress, personality, self-regulation, time preferences, etc. – with instruments for both children and adults, and information on how long the questions take to administer and where they have been validated.
 Andrew Gelman on post-hoc power calculations – “my problem is that their recommended calculations will give wrong answers because they are based on extremely noisy estimates of effect size... Suppose you have 200 patients: 100 treated and 100 control, and postoperative survival is 94% for the treated group and 90% for the controls. Then the raw estimated treatment effect is 0.04 with standard error sqrt(0.94*0.06/100 + 0.90*0.10/100) = 0.04. The estimate is just one s.e. away from zero, hence not statistically significant. And the crudely estimated post-hoc power, using the normal distribution, is approximately 16% (the probability of observing an estimate at least 2 standard errors away from zero, conditional on the true parameter value being 1 standard error away from zero). But that’s a noisy, noisy estimate! Consider that effect sizes consistent with these data could be anywhere from −0.04 to +0.12 (roughly), hence absolute effect sizes could be roughly between 0 and 3 standard errors away from zero, corresponding to power being somewhere between 5% (if the true population effect size happened to be zero) and 97.5% (if the true effect size were three standard errors from zero).”
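Gelman's arithmetic in the quote above can be reproduced in a few lines. This is a minimal sketch (not Gelman's own code), using the standard normal CDF built from Python's `math.erf` and taking the rejection threshold as 2 standard errors, as the quote does:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posthoc_power(z_effect, z_crit=2.0):
    """Two-sided power when the true effect sits z_effect standard
    errors from zero and we reject whenever |z| >= z_crit."""
    return (1.0 - norm_cdf(z_crit - z_effect)) + norm_cdf(-z_crit - z_effect)

# Gelman's example: 100 treated vs. 100 control, survival 94% vs. 90%.
est = 0.94 - 0.90
se = sqrt(0.94 * 0.06 / 100 + 0.90 * 0.10 / 100)  # ~0.038, i.e. ~0.04
z = est / se  # the point estimate is about 1 s.e. from zero

print(round(posthoc_power(1.0), 2))  # 0.16 — the "16%" post-hoc power
print(round(posthoc_power(0.0), 2))  # 0.05 — power if the true effect were zero
```

The point of the quote is visible in the last two lines: feeding the noisy point estimate (1 s.e.) into the power formula yields 16%, but values of the true effect well within the data's confidence interval give wildly different answers.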

The World Bank’s data blog uses metadata from hosting its Survey Solutions tool to ask how well people plan their surveys (and read the comments for good context in interpreting the data). Some key findings:
 Surveys usually take longer than you think they will: 47% of users underestimated the amount of time they needed for the fieldwork – and after requesting more server time, many then re-request this extension.
 Spend more time piloting questionnaires before launching: 80% of users revise their surveys at least once after surveying has started, and “a surprisingly high proportion of novice users made 10 or more revisions of their questionnaires during the fieldwork”.
 Another factoid of interest: “An average nationally representative survey in developing countries costs about US$2M”.
 On the EDI Global blog, Nkolo, Mallet, and Terenzi draw on the experiences of EDI and the recent literature to discuss how to deal with surveys on sensitive topics.