
Weekly links January 11: it’s not the experiment, it’s the policy; using evidence; clustering re-visited; and more...

  • “Experiments are not unpopular, unpopular policies are unpopular” – Mislavsky et al. on whether people object to companies running experiments. “Additionally, participants found experiments with deception (e.g., one shipping speed was promised, another was actually delivered), unequal outcomes (e.g., some participants get $5 for attending the gym, others get $10), and lack of consent, to be acceptable, as long as all conditions were themselves acceptable.” One caveat to note: the results are based on asking MTurk subjects (and one sample of university workers) whether they thought it was okay for companies to do this.
  • Doing power calculations via simulations in Stata – the Stata blog provides an introduction to doing this.
  • Marc Bellemare has a post on how to use Pearl’s front-door criterion for identifying causal effects – he references this more comprehensive post by Alex Chinco, which provides some examples of its use in economics.
  • From the Stanford Social Innovation Review – six ways to support small and growing businesses in emerging markets.
  • The Economist on how equal rights can boost economic growth.
  • The team at DeclareDesign re-examines the question of whether you should cluster standard errors above the level of treatment. They make the point that if you are interested in the sample average treatment effect (SATE), the advice to cluster at the level of randomization works fine; but if there is treatment heterogeneity, you only sample some of the clusters, and you are interested in the population average treatment effect, then you may need to cluster at a higher level. See also my sidenote 2 in this post about when to cluster – basically, in all the clustered experiments I have run, we are greedy/constrained and take as many clusters as possible, so there is never any first-stage cluster sampling, and all we can say something about is the SATE.
  • J-PAL has a report on its lessons from trying to get policymakers to use evidence in Latin America – including a case study discussing the set-up and functioning of MineduLAB, a laboratory for innovation within the Peruvian Ministry of Education. It was set up in 2014 and has to date identified 9 innovations to pilot; 6 of the resulting RCTs have been completed, and one intervention with positive impacts has already been scaled up.
  • IPA has a trilogy of posts on moving evidence to policy: first, IPA’s strategic ambition, including a set of conditions for research to be incorporated into policy; then “strategies for co-creating research” with decision-makers in order to maximize impact; and finally, a post on what to do with the research once you’ve done it.
  • Funding: Call for expressions of interest – the World Bank Africa Gender Innovation Lab is looking for teams and projects to work on impact evaluations of programs that aim to improve women’s land tenure security in sub-Saharan Africa.
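The simulation approach to power calculations that the Stata blog describes is language-agnostic. Here is a minimal sketch in Python (not the Stata blog's own code): draw data under an assumed effect size, test each simulated dataset, and take the rejection rate as the power estimate. The sample size, effect size, and two-sided test are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(n, effect, sd=1.0, reps=500):
    """Estimate power for a two-arm trial by simulation:
    simulate outcomes under the assumed effect, run a test on
    each simulated dataset, and count how often we reject."""
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, sd, n)
        treated = rng.normal(effect, sd, n)
        # Welch-style z-statistic for the difference in means
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
        if abs(diff / se) > 1.96:  # approximate 5% two-sided critical value
            rejections += 1
    return rejections / reps

# e.g. power to detect a 0.4 SD effect with 100 units per arm
power = simulate_power(n=100, effect=0.4)
```

The payoff of the simulation approach over analytical formulas is that the data-generating process can be made arbitrarily realistic (clustering, attrition, covariate adjustment) by editing the simulation step.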
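Pearl's front-door formula is also easy to check numerically. The sketch below (all probabilities invented for illustration, and unrelated to the examples in the linked posts) builds an observational joint distribution from a structural model matching the front-door graph X ← U → Y, X → M → Y, then confirms that the front-door adjustment applied to that joint recovers the true interventional distribution P(y | do(x)) even though the confounder U is never observed.

```python
import itertools

# Structural model for the front-door graph, all variables binary,
# with arbitrary illustrative parameters.
p_u = {0: 0.6, 1: 0.4}                                   # P(U=u)
p_x1_given_u = {0: 0.3, 1: 0.8}                          # P(X=1 | U=u)
p_m1_given_x = {0: 0.2, 1: 0.7}                          # P(M=1 | X=x)
p_y1_given_mu = {(0, 0): 0.1, (0, 1): 0.5,
                 (1, 0): 0.6, (1, 1): 0.9}               # P(Y=1 | M=m, U=u)

def bern(p1, v):                 # P(V=v) when P(V=1) = p1
    return p1 if v == 1 else 1 - p1

# Observational joint P(x, m, y), with the confounder U marginalized out.
joint = {}
for x, m, y, u in itertools.product([0, 1], repeat=4):
    pr = (p_u[u] * bern(p_x1_given_u[u], x)
          * bern(p_m1_given_x[x], m) * bern(p_y1_given_mu[(m, u)], y))
    joint[(x, m, y)] = joint.get((x, m, y), 0.0) + pr

def marg(**fixed):               # marginal probability from the joint
    return sum(p for (x, m, y), p in joint.items()
               if all({'x': x, 'm': m, 'y': y}[k] == v
                      for k, v in fixed.items()))

# Front-door adjustment, using only observational quantities:
# P(y | do(x)) = sum_m P(m|x) * sum_x' P(x') P(y | x', m)
def p_y_do_x(y, x):
    total = 0.0
    for m in (0, 1):
        p_m_x = marg(x=x, m=m) / marg(x=x)
        inner = sum(marg(x=xp) * marg(x=xp, m=m, y=y) / marg(x=xp, m=m)
                    for xp in (0, 1))
        total += p_m_x * inner
    return total

# Ground truth computed directly from the structural model.
def p_y_do_x_true(y, x):
    return sum(p_u[u] * bern(p_m1_given_x[x], m) * bern(p_y1_given_mu[(m, u)], y)
               for u in (0, 1) for m in (0, 1))
```

In this example the two quantities agree exactly, which is the content of the front-door theorem: X screens off U from M, and averaging P(y | x', m) over x' screens off U from Y.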
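A quick way to see the DeclareDesign point about clustering is to simulate it. In this hypothetical Python sketch (parameters invented for illustration), treatment is randomized at the individual level, but effects vary by cluster and only a sample of clusters is observed; the cluster-robust standard error, which is the relevant one for a population effect, then comes out noticeably larger than the heteroskedasticity-robust one, which targets the in-sample SATE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample of 50 clusters with 20 units each; treatment randomized at
# the INDIVIDUAL level, but treatment effects vary by cluster.
G, n_per = 50, 20
cluster = np.repeat(np.arange(G), n_per)
tau = rng.normal(1.0, 1.0, G)              # cluster-specific effects
T = rng.integers(0, 2, G * n_per)          # individual-level assignment
y = tau[cluster] * T + rng.normal(0.0, 1.0, G * n_per)

# OLS of y on a constant and T (difference in means).
X = np.column_stack([np.ones_like(y), T])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
Xe = X * resid[:, None]

# HC0 heteroskedasticity-robust variance (appropriate for the SATE).
V_hc0 = XtX_inv @ (Xe.T @ Xe) @ XtX_inv

# Cluster-robust variance: sum the score over each cluster first.
scores = np.array([Xe[cluster == g].sum(axis=0) for g in range(G)])
V_cl = XtX_inv @ (scores.T @ scores) @ XtX_inv

se_robust = np.sqrt(V_hc0[1, 1])
se_cluster = np.sqrt(V_cl[1, 1])
```

The gap between the two reflects the between-cluster variance in effects, which the robust SE ignores because it treats the sampled clusters as the whole population of interest.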
