I noted these links over summer break, but somehow forgot to post them – look for the regular links tomorrow, so you will have lots of weekend reading to browse.
· Noah Smith has several posts on the development processes and current challenges of different countries, the most recent of which is on the Mexican development puzzle.
· JPAL overview of the literature on improving tax compliance via reminders to taxpayers.
· Quentin André on whether lots of p-values just under 0.05 indicate a “goldilocks” just-right powering of an experiment – the answer is no: “Even at 50% power (a very low target power, that I’ve never seen used in any power analysis), more than 50% of the significant p-values (<0.05) will be lower than 0.01.” This post was perhaps prompted by discussion of this Brodeur et al. working paper, which shows that online experiments using Mechanical Turk seem heavily p-hacked and subject to severe publication bias, especially those in marketing, with lots of studies having pretty small samples despite the low cost ($1.30 per subject or less): “The Marketing panel reveals a pattern of statistical significance quite different in character to the other fields. There is a very low mass of published non-statistically significant results and a very sharp peak of z-statistics at values immediately above 1.96.”
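A quick way to see André’s point is with a small simulation – this is my own illustration rather than anything from his post, and the design (a two-sample t-test with 50 observations per arm and a 0.4 SD effect, numbers chosen purely so that power is roughly 50%) is hypothetical:

```python
# Simulate many two-sample experiments powered at roughly 50%, then look at
# how the significant p-values are distributed (illustrative sketch only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, reps = 50, 0.4, 10_000   # 50 obs/arm, 0.4 SD effect -> ~50% power

pvals = np.array([
    stats.ttest_ind(rng.normal(effect, 1.0, n), rng.normal(0.0, 1.0, n)).pvalue
    for _ in range(reps)
])

sig = pvals[pvals < 0.05]
print(f"share of runs significant at 5%: {len(sig) / reps:.2f}")
print(f"share of significant p-values below 0.01: {(sig < 0.01).mean():.2f}")
```

In this setup roughly half the runs come out significant at the 5% level, and more than half of those significant p-values also fall below 0.01 – in line with the quoted claim, so a pile-up of p-values just under 0.05 is not what well-powered (or even modestly powered) studies produce.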
· Scott Cunningham continues to host really interesting interviews on his Mixtape podcast. Two recent ones that I enjoyed were with people whose work I was not familiar with before. Anna Aizer talks about her work on youth incarceration, and her path is a great example of how taking a roundabout route to economics and getting real-life work experience gave her deep institutional knowledge that helped both in generating research questions and in the research design (the judge leniency design here). Ronny Kohavi talks about the early days of machine learning and A/B testing in tech: while he uses different terminology (“sample ratio mismatch”), he gives an interesting example of how what we would call attrition bias can really mess up online A/B experiments. The issue is that there can be lots of bots and fake email addresses, which add a lot of noise to online tests, so analysts typically remove these observations after randomization but before analyzing the results (since they can only detect that these were bots or fake emails after some outcome occurs). But you may be more likely to drop such observations in, say, the treatment group than the control group, so that although the initial allocation of 1,000,000 users was 50:50 to treatment and control, you end up with a ratio of 50.2/49.8 after removing these observations – and this mismatch can be enough to give misleading results. And since the summer, I enjoyed his interview with Noam Angrist, which provides another great example of crafting your own career path to make an impact in development economics.
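To make the sample ratio mismatch point concrete, here is a rough sketch (mine, not Kohavi’s; the post-cleaning counts are hypothetical, chosen only to match the 50.2/49.8 split in the example above) of the kind of check that flags the problem – testing whether the realized split is compatible with the planned 50:50 randomization:

```python
# Hypothetical post-cleaning counts: suppose 990,000 users remain after
# dropping suspected bots/fake accounts, 497,000 of them in treatment
# (about 50.2%, as in the example above).
from scipy import stats

n_treatment, n_remaining = 497_000, 990_000
result = stats.binomtest(n_treatment, n_remaining, p=0.5)

print(f"realized treatment share: {n_treatment / n_remaining:.3f}")
print(f"two-sided p-value against a true 50:50 split: {result.pvalue:.1e}")
```

A p-value that small says the cleaning step did not remove observations symmetrically across arms, which is exactly the warning sign that the remaining treatment–control comparison may be biased.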
· Useful Stata command I hadn’t seen before, via Todd Jones – use mdesc to quickly see how many observations of each variable are missing.
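For anyone working in Python rather than Stata, a rough pandas analogue of what mdesc reports (my own sketch, with a made-up data frame) would be:

```python
# Per-variable missing counts and shares, similar in spirit to Stata's mdesc.
import numpy as np
import pandas as pd

# Toy data frame with some missing values (made up for illustration).
df = pd.DataFrame({
    "income": [1200, np.nan, 950, np.nan, 1500],
    "age": [34, 29, np.nan, 41, 38],
    "region": ["north", "south", "south", None, "east"],
})

summary = pd.DataFrame({
    "missing": df.isna().sum(),
    "total": len(df),
    "percent_missing": 100 * df.isna().mean(),
})
print(summary)
```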
· On VoxDev, Bazzi, Cameron, Schaner and Witoelar summarize an experiment that provided potential migrant women in Indonesia with information about the quality of migrant placement agencies – doing so led fewer women to migrate, but those who did migrate chose better agencies and had better non-monetary outcomes.
· “It is useful to think about data-then-model papers as tracing out a frontier that trades off the strength of the assumptions for more economically relevant results…. At each stage in the paper, you are offering the reader a deal: if you accept some additional assumptions, then I will provide you with additional results. If the reader is willing to accept assumptions about the validity of the empirical approach, you can offer causal estimates. If the reader is willing to accept additional assumptions about the economic environment, you can deliver additional results in terms of economic parameters, counterfactuals, or welfare… This type of structure allows the reader to situate themselves at the point on this frontier that best matches their preferences—and allows the reader to “get off the train” at the point where they are no longer comfortable with the trade-off being offered” – Neale Mahoney in the JEP on principles for combining descriptive and model-based analysis in applied micro work.
· Dave Evans has collated several Twitter threads providing advice on how to write tenure letters in this Google Doc.
· New data on suitability for artisanal and small-scale mining in Africa: Victoire Girard, Teresa Molina-Millán and Guillaume Vic propose a new measure of artisanal and small-scale gold mining (ASgM) in Africa. The measure combines geological information on where gold can be found with variation in the international gold price. "Spatially, we exploit the fact that ASgM requires geologically suitable locations, i.e. locations that host gold." They retrieve these locations from recent research in geology. "The temporal variation comes from changes in the international price of gold. The potential revenues from ASgM directly depend on the international price of gold, and miners are price takers on this market which they closely follow." They use this data to show that ASgM increases both deforestation and nighttime light emissions. They are happy to share the data: get in touch with Victoire if you are interested in using it.
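A stylized sketch of the kind of variation described – this is just my illustration of a suitability-times-price interaction, with hypothetical names and toy numbers, not the authors’ data or code:

```python
# Toy panel of locations x years: only geologically suitable locations are
# exposed to movements in the world gold price (illustrative sketch only).
import numpy as np
import pandas as pd

panel = pd.DataFrame({
    "location": np.repeat(["A", "B", "C", "D"], 3),
    "year": [2015, 2016, 2017] * 4,
    "gold_suitable": np.repeat([1, 0, 1, 0], 3),   # geology: could the location host gold?
    "gold_price": [1160, 1250, 1257] * 4,          # world price, same for all locations (toy values)
})

# Exposure measure: suitability interacted with the (log) international gold price.
panel["asgm_exposure"] = panel["gold_suitable"] * np.log(panel["gold_price"])
print(panel)
```

Only the suitable locations see their exposure move with the world price, which is the cross-sectional-times-temporal variation the quotes describe.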
· Journal special issue: call for papers for a special issue of the Journal of Development Effectiveness, ‘Trends in Research Transparency, Reproducibility, and Ethics for Development Effectiveness’. They are interested in projects, programs, policies, and practices that aim to improve the transparency, reproducibility, and ethical conduct of impact evaluations, and/or that test whether improvements in these areas improve development effectiveness. The submission deadline is January 31.