· A VoxDev piece summarizing empirical work on how the variety of products available for sale is much smaller in remote communities, reducing consumer welfare: the authors estimate that “the average welfare loss over space due to the loss in variety is 16% of spending on manufactured goods”.
· Webcasts and slides from the AEA continuing education lectures. Of particular interest to many this year might be the session on Climate Change Economics.
· J-PAL has put out two new methods guides to de-identifying and publishing research data.
· Gabrielle Kruks-Wisner’s syllabus for a graduate course on field methods and research design – designed for political science, but lots of the readings are useful for social scientists more generally.
· Stata trick: You can recover the underlying data needed to draw a particular graph from a .gph file by loading the graph and then typing serset dir followed by serset use (via @SethGershenson).
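A minimal sketch of how this works in practice (the filename myfig.gph is hypothetical; sersets are the data series Stata stores inside a saved graph):

```stata
* Load a saved graph; this also loads its sersets into memory
graph use myfig.gph

* List the sersets (stored data series) attached to the graph
serset dir

* Load the current serset as the dataset in memory, replacing existing data
serset use, clear

* The recovered plotting variables are now ordinary variables
describe
```

If the graph contains more than one serset, serset dir shows their numbers, and you can select one (e.g. serset 1) before serset use, clear.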
· Andrew Gelman doubles down on his concerns about (some of) the use of regression discontinuities in economics: “researchers push that big button on their computer labeled REGRESSION DISCONTINUITY ANALYSIS, which does two bad things: First, it points them toward an analysis that focuses obsessively on adjusting for just one pre-treatment variable, often a relatively unimportant variable, while insufficiently adjusting for other differences between treatment and control groups. Second, it leads to an overconfidence borne from the slogan, “causal identification,” which leads researchers, reviewers, and outsiders to think that the analysis has some special truth value. What we typically have is a noisy, untrustworthy estimate of a causal effect, presented with little to no sense of the statistical challenges of observational research. And, for the usual “garden of forking paths” reason, the result will typically be “statistically significant,” and, for the usual “statistical significance filter” reason, the resulting estimate will be large and newsworthy.”