· The latest VoxDevLit is now out, on climate adaptation, edited by Namrata Kala, Clare Balboni, and Shweta Bhogale. The review focuses on three areas: 1) evidence on the impacts of weather and climate shocks on different economic outcomes; 2) literature on adaptation mechanisms such as technology adoption, migration, and job-switching; and 3) how spatial linkages and general equilibrium effects can transmit climate shocks across regions.
· Also on climate, the next edition of the BREAD-IGC Virtual PhD course will be on Environmental Economics and Development, with online lectures from September to November, and you can sign up now. Looks like a fantastic line-up of topics and speakers.
· On VoxDev, Edward Asiedu, Monica Lambon-Quayefio, Francesca Truffa and Ashley Wong summarize their field experiment in Ghana that assigned female entrepreneurs to virtual networking groups: “Each week, one member was assigned to meet virtually with another group member. In addition, a directory consisting of all entrepreneurs in the treatment group with their contact information was made available to facilitate the networking process. This treatment aims to expand the business networks of participants and increase their opportunities for business collaborations…one year after the intervention, the treatment groups increased business innovation by 25 to 31%, as measured by the likelihood of introducing changes to their businesses, such as new products or new ways of marketing. Second, we also document an improvement in business practices, driven by a positive effect on marketing and financial planning practices….the treatment groups also experience a 21% increase in business profits”.
· Scott Cunningham interviews David Card, and we hear about why he thinks “reduced form versus structural” is not a great descriptor and why “model-based versus design-based” may better capture the different approaches; the history of his minimum wage paper(s) and their reception; disruptive papers; why we need to be vampires as we get older; and more.
· Andrew Gelman offers his thoughts on the main things to do to make your study more likely to be replicable: not pre-registration and bigger sample sizes, but focusing on a more powerful treatment that you describe in enough detail that someone else can implement it, targeting it on a sample where impacts are likely to be large, and investing in better measurement of the outcome and of pre-treatment covariates. One can definitely take issue with the suggestion to focus only on a very powerful treatment and on samples where impacts are likely to be largest, since this might not be cost-effective or tell you much about a feasible intensity for scale-up. The comments discuss this, with some readers coming down on the side of “see if you can find evidence that it works somewhere for some group” and then running more experiments to test scope and intensity effects. The point is certainly debatable, but there is a real concern that we place too much weight on studies done with extreme implementation oversight by highly motivated researchers and graduate students, on a specially selected sample, in optimal conditions, which will often be hard to reproduce elsewhere (see the sketch below for the power-calculation logic behind Gelman's advice).
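The emphasis on stronger treatments and better measurement maps directly onto standard power calculations. As a minimal, purely illustrative sketch (mine, not from Gelman's post; the numbers and the `n_per_arm` helper are hypothetical), the textbook two-arm sample-size formula shows how a larger effect and a less noisy outcome both shrink the sample needed to detect an effect:

```python
from scipy.stats import norm

def n_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Sample size per arm to detect a difference in means of `effect`,
    with residual outcome SD `sd` (standard normal-approximation formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) * sd / effect) ** 2

# Illustrative numbers only:
print(round(n_per_arm(effect=0.1, sd=1.0)))  # weak treatment, noisy outcome: ~1570 per arm
print(round(n_per_arm(effect=0.3, sd=1.0)))  # stronger treatment: ~174 per arm
print(round(n_per_arm(effect=0.3, sd=0.7)))  # plus better outcome measurement: ~85 per arm
```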
· Borusyak, Hull and Jaravel have a short review paper on shift-share and other “formula-based” instruments: “Identification in the general case, for arbitrary formulas and designs, follows from simple adjustments based on the expected instrument: the average value of the formula across counterfactual sets of shocks, drawn from the specified assignment process. Specifically, OVB is avoided by either adding the expected instrument as a control or by using a recentered instrument which subtracts the expected instrument from the original formula. Controlling for or recentering by the expected instrument is generally necessary for identification with formula instruments, absent auxiliary assumptions on the exogeneity of shock exposure.”
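To make the recentering idea concrete, here is a minimal numerical sketch (my own illustration, not code from the paper). It builds a shift-share instrument from hypothetical shares and shocks, approximates the expected instrument by averaging the formula over counterfactual shock draws (here, random permutations of the observed shocks, standing in for whatever assignment process one is willing to assume), and then recenters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n regions exposed to K industry-level shocks.
n, K = 500, 30
shares = rng.dirichlet(np.ones(K), size=n)   # exposure shares s_ik (each row sums to 1)
shocks = rng.normal(size=K)                  # observed shocks g_k

# Formula (shift-share) instrument: Z_i = sum_k s_ik * g_k
z = shares @ shocks

# Expected instrument: average of the formula over counterfactual shock draws
# from the assumed assignment process (illustrated here with permutations).
n_draws = 2000
expected_z = np.zeros(n)
for _ in range(n_draws):
    expected_z += shares @ rng.permutation(shocks)
expected_z /= n_draws

# Recentered instrument: subtract the expected instrument from the formula.
z_recentered = z - expected_z
# Equivalently, keep z as the instrument and include expected_z as a control
# in the IV regression.
```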
· On Let’s Talk Development, Siddharth Dixit has an explainer on India’s digital transformation, discussing the Aadhaar digital identity system, a mass roll-out of zero-balance bank accounts and then direct government transfers, the Unified Payments Interface (UPI) and rise of digital payments, and a new consent manager (account aggregator) system to manage data-sharing across different institutions and what this could mean for increasing access to credit.
· On the CGD blog, Anand and co-authors summarize a new systematic review and meta-analysis they have conducted on how to improve school management: “we found 20 experimental or quasi-experimental evaluations of school leader training which reported student learning outcomes. The key advantage of the meta-analysis is that we improve our ability to measure small effect sizes by pooling studies. Across 20 studies, we find an overall small effect size of 0.03 - 0.04 standard deviations. This effect is so small that many of the individual studies didn’t have a big enough sample size to be able to detect them…Whilst the effect is small per child, it applies to every child in the school. And you’re only paying to train one person: the school leader. This means that these programs can still be cost-effective compared to more expensive programs.”
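The “pooling lets us measure small effects” logic is just inverse-variance weighting: the pooled standard error is much smaller than any single study's. A hypothetical sketch (illustrative effect sizes, not the actual 20 studies in the review):

```python
import numpy as np

# Hypothetical study-level effect sizes (in SD of student learning) and
# standard errors; purely illustrative, not the studies in the meta-analysis.
effects = np.array([0.03, 0.05, 0.01, 0.04, 0.06, 0.02, 0.03, 0.05])
ses     = np.array([0.04, 0.05, 0.03, 0.06, 0.05, 0.03, 0.04, 0.06])

# Fixed-effect (inverse-variance) pooling: weight each study by its precision.
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect = {pooled:.3f} SD, SE = {pooled_se:.3f}")
# The pooled SE is far smaller than any individual study's SE, which is why a
# meta-analysis can detect an effect of roughly 0.03-0.04 SD that each
# underpowered study on its own could not distinguish from zero.
```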