Published on Development Impact

Weekly links October 15: Nobel prizes for causal inference, hc3 vs robust, CORE economics, learning from doing experiments but not from publication, and more…


Like so many applied researchers, I was delighted to see the Nobel prize in economics this year going to David Card, Josh Angrist, and Guido Imbens for their work on answering causal questions using observational data. I like this summary from the scientific background for the award, which makes clear that the prize is not just for a particular paper or technique, but for the whole way we think about causal estimation: “This change was not primarily about using new empirical methods, but rather about how to approach a causal question. The natural experiment agenda required researchers to understand the process determining which units receive which treatments. The new approach thus required an understanding of the source of identifying information, i.e., it required institutional knowledge about the natural experiment.” This was noticeable in a lot of the media summaries, where the ideas behind comparing minimum wage changes in two adjoining states, or income changes when Miami had a migration influx from the Mariel boatlift and other cities did not, or outcomes for kids with birthdays on either side of schooling cutoffs, were appealing and intuitive for non-economists to understand. The importance of making a rhetorical case for identification, and not just a case based on model or econometric assumptions, is one I think is crucial and still not fully appreciated. There is of course no shortage of summaries, so I just wanted to add a few reflections:

·       I love that this is a prize that makes clear the challenges and continued improvements in estimating causal relationships. Several of those classic studies have spurred debate about inference with small numbers of treated units (and have been revisited with synthetic control approaches); the quarter-of-birth and schooling study famously led to a whole set of studies about weak instruments; and so on. So it is a reminder that you don’t have to get it perfect to win a Nobel prize: you just have to try to think hard about how to identify effects better than we did before.

·       This may also reflect me getting older, but my impression is that for a long time the economics Nobel prize was playing catch-up, trying to award theorists in their 70s or 80s for one or two seminal contributions made in their 20s or 30s, with the result that the laureates seemed almost like historical figures to me. In contrast, as with the prize to Banerjee, Duflo, and Kremer, this prize goes to economists who are all still very active researchers. It was then wonderful to read lots of people’s comments about how all three laureates had helped and supported them at various points, and fun stories – who doesn’t love seeing Josh Angrist taking a sword to the question of whether econometrics is boring; or Liran Einav on Guido Imbens: “He’s just a normal dude with a lot of humility who is always trying to help, which translates to his work”. This video that Stanford put out of Guido’s kids interviewing him about his work is just delightful as well. This older interview of David Card is also well worth a read – I loved this quote: “People tend not to understand that most of what we have to do as researchers is just crap work. Yesterday, for example, I spent the whole day in the library going through historical volumes. It’s very hard to get graduate students to do something like that, or collect data, code it carefully, and then put it in a spreadsheet. They don’t pay attention, or put in the hours, unless they’re a co-author on the project. But an undergraduate thinks it’s fun.”

·       Guido has also been a supporter of this blog, and has been especially helpful on questions about clustering of standard errors – e.g., we did an Ask Guido about dealing with individual-level outcomes when a program takes place in just one state and not others; he offered helpful advice for this post on power calculations with unequally sized clusters; I also summarized his paper on when to cluster standard errors, and reviewed his book with Rubin on causal inference. We hope his long-awaited volume II does not get completely crowded out by all the new demands he will now face on his time.

In non-Nobel things I read this week:

·       reg y x, vce(hc3) for the win: The Data Colada blog revisits Alwyn Young’s Channeling Fisher paper and argues that i) his finding that randomization inference leads to fewer significant results than regression standard errors is driven by Stata’s robust option using HC1 standard errors, when HC3 errors should be used with smaller samples (i.e., regression is still fine; robust is the problem); and ii) the sensitivity to outliers he notes, where 35% of results that are significant at the 1% level are no longer significant at that level after removing just one observation, is actually pretty similar to what you would expect if there were no big outliers and the data were just drawn from a normal distribution (see the short Stata sketch at the end of this list).

·       The New Yorker covers the CORE introductory economics curriculum: “Shifts in the economics curriculum can affect who takes economics. Max Kasy, an economics professor at Oxford, described the phenomenon. “Once, I had this really stark experience teaching advanced econometrics, which was, like, almost a hundred-per-cent white and Asian men taking it, and then teaching a class on economic inequality that was at a similar technical level, and it being almost a hundred-per-cent minority students and women,” he told me.”

·       The NEUDC 2021 program is now up, and registration for the conference on Nov 5-6 is free. This conference both inspired and intimidated me when I was fresh out of grad school – as one of the few people doing development at my university, it was amazing to see this whole community of researchers working on such a wide variety of development topics, but it was also somewhat overwhelming and made me wonder how I would ever hope to publish my papers given the volume of other work out there. Especially since we have another year of it being online, I encourage everyone to check out the wide variety of topics, and especially to give encouragement and feedback to all the nervous grad students and young faculty presenting new work.

·       A nice Lant Pritchett post on the RISE blog about what he has learned from Rukmini Banerji (the CEO of Pratham and winner of this year’s Yidan Prize for Education). “the collaboration of Pratham with the Nobelists Banerjee and Duflo (and thereafter JPAL) has been a long and fruitful partnership. Rukmini points out that while they have learned a great deal from doing the experiments, they have never changed what they were doing due to the published results of the experiment. That is, Pratham has learned from doing experiments in three ways. One, the interaction with the geniuses in the design of the treatments was useful as it helped them be precise about what the “treatment” was, what the hypothesized causal mechanisms of effect were, and helped them design the implementation. Two, the implementation of the experiment served as a disciplined pilot, and much was learned of the teething troubles about what would be encountered at scale. Third, often the impact of the program was so obvious (either positive or in the lack of an impact) during implementation it passed the FO test and hence one did not need to wait for the completion and computations needed for publication to learn the right lessons.” This is interesting to me, since in many of my studies it is really hard to just see visually whether the treatment has worked, which is why we need big samples to get enough power; and we also do sometimes learn more from going through the publication process, refining the analysis, and thinking through comments from thoughtful reviewers.

·       A guidebook to agricultural survey design from the LSMS survey team.
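As a rough illustration of the HC1 vs. HC3 point above, here is a minimal Stata sketch (not the code used by Young or Data Colada; y and treat are placeholder variable names), with a randomization-inference check via the built-in permute command added for comparison:

reg y treat, vce(robust)                           // Stata's default "robust" is HC1, which can be anti-conservative in small samples
reg y treat, vce(hc3)                              // HC3 applies a stronger leverage correction and gives better small-sample coverage
permute treat _b[treat], reps(1000): reg y treat   // randomization inference on the treatment coefficient, for comparison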


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
