Weekly links March 29: dynamic experimentation, making data accessible and transparent, summaries of a gazillion conference papers, assessing economic significance, and more...



  • Max Kasy blogs about his new work on designing multiple experiments for policy choice – “Trying to identify the best policy is different from estimating the precise impact of every individual policy: as long as we can identify the best policy, we do not care about the precise impacts of inferior policies. Yet, despite this, most experiments follow protocols that are designed to figure out the impact of every policy, even the obviously inferior ones.... The key to our proposal is staging: rather than running the experiment all at once, we propose that researchers start by running a first round of the experiment with a smaller number of participants. Based on this first round, you will be able to identify which treatments are clearly not likely to be the best. You can then go on to run another round of the experiment where you focus attention on those treatments that performed well in the first round. This way you will end up with a lot more observations to distinguish between the best performing treatments.” Sounds very cool, but it does depend on short-term outcomes being your main objects of interest.
  • Why researchers should publish their data – the J-PAL blog provides some stats on the increase in data sharing requirements and practices, and the intriguing claim that “papers in top economics and political science journals with public data and code are cited between 30-45 percent more often than papers without public data and code” – which is based on preliminary work that uses changes in journal data availability requirements to attempt to make this a causal statement.
  • Related, on the Data blog, Ben Daniels and co-authors discuss how to make analytics reusable – and how, in addition to learning new technical tools, it requires conceptually rethinking how to organize workflows. The irony to me is that they point out the usefulness of this for large research teams, but the larger the research team, the more coordination becomes an issue (e.g. one person prefers Dropbox, another Git, another OneDrive, another Google Drive, and another Box; one person likes LaTeX, others Word; some like Stata, others R; some want to use Slack and others prefer emails and files; etc.) – and so teams tend to converge on the simplest, most widely used tools. The point is that the lab-style model of a dictator PI telling everyone else "this is how we will do this" contrasts with the collaborative nature of many projects – so the challenge is also to make the entry points into these different tools as easy as possible, or to make the tools modular enough that people can use what they prefer.
  • Last week was the annual conference of the Centre for the Study of African Economies, and Dave Evans has micro-summaries of 275+ papers up at the CGD blog.
  • Jeff Bloem on how to assess economic significance.
  • Penny Goldberg on how a recently published paper in the AER provides insight into the effects of fast internet connections on employment in Africa.
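The staging idea in the Kasy item above can be sketched in a few lines of code. This is a minimal, hypothetical simulation (the effect sizes, sample sizes, and binary-outcome setup are illustrative assumptions, not the authors' actual design): run a small first round on every treatment arm, drop the clearly inferior arms, then spend the remaining sample on the survivors.

```python
import random

def two_stage_experiment(true_means, n1=100, n2=400, keep=2, seed=0):
    """Illustrative two-stage design: stage 1 screens all arms with small
    samples; stage 2 concentrates the remaining budget on the top performers.
    true_means are hypothetical success probabilities for each treatment arm."""
    rng = random.Random(seed)

    def run_arm(p, n):
        # Each of n participants has a binary outcome with success probability p;
        # return the observed success rate for this arm.
        return sum(rng.random() < p for _ in range(n)) / n

    # Stage 1: a small pilot on every treatment arm.
    stage1 = [run_arm(p, n1) for p in true_means]

    # Keep only the `keep` best-performing arms from stage 1.
    survivors = sorted(range(len(true_means)),
                       key=lambda i: stage1[i], reverse=True)[:keep]

    # Stage 2: larger samples on the surviving arms only, so the final
    # comparison between the best candidates has many more observations.
    stage2 = {i: run_arm(true_means[i], n2) for i in survivors}

    # Recommend the arm with the highest stage-2 estimate.
    return max(stage2, key=stage2.get)

# Four hypothetical policies; the last has the largest true effect.
best = two_stage_experiment([0.10, 0.12, 0.15, 0.30], seed=42)
```

The payoff of the design is visible in the arithmetic: with four arms and 900 observations spent equally, each arm gets 225; with staging (4 × 100 in round one, then 2 × 400 in round two, 1,200 total here for simplicity), the two leading candidates each get 500, sharpening exactly the comparison that matters for policy choice.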


David McKenzie

Lead Economist, Development Research Group, World Bank
