On May 25, I attended a workshop organized by the Harvard School of Public Health, titled “Causal Inference with Highly Dependent Data in Communicable Diseases Research.” I got to meet many of the “who’s who” of this literature from the fields of biostatistics, public health, and political science, among them Elizabeth Halloran, who co-authored with Michael Hudgens this paper – one of the more influential papers in the field.
- Rachel Glennerster on how the “small questions” that RCTs are often said to be limited to have in fact helped provide a big answer in public health
- Planet Money put together a nice podcast about the Nigerian business plan competition I evaluated
- On VoxEU, Stefano DellaVigna and Devin Pope summarize work comparing the relative effectiveness of different types of monetary and non-monetary incentives in inducing effort, along with an effort to compare the results with what experts predict… all with the most menial task imaginable: “M-Turk participants… task for the subjects is to alternately press the "a" and "b" buttons on their keyboards as quickly as possible for ten minutes.”
Angela Duckworth’s new book Grit: The Power of Passion and Perseverance has been launched with great fanfare, reaching number two on the NY Times Nonfiction bestseller list. She recently gave a very polished and smooth book launch talk to a packed audience at the World Bank, and is working with World Bank colleagues on improving grit in classrooms in Macedonia. Billed as giving “the secret to outstanding achievement,” the book interested me as both a researcher and a parent, so I thought I’d continue my book review series with some thoughts on it.
- On selecting what variables to gather data for in your impact evaluation: Carneiro et al. have a new paper out – “Optimal Data Collection for Randomized Control Trials” – which argues that if you have a household survey or census in advance, you can use an algorithm to select the right covariates, potentially reducing data collection costs or improving precision substantially.
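To make the idea concrete, here is a minimal sketch of covariate selection under a collection budget. This is not the Carneiro et al. algorithm – the variable names, costs, and greedy rule are all hypothetical – but it illustrates how prior survey data could rank candidate covariates by predictive power per unit of collection cost:

```python
# Illustrative sketch only (NOT the Carneiro et al. algorithm): given a
# prior survey, greedily pick the covariates that explain the most
# outcome variance per unit of data-collection cost, within a budget.
import random
import statistics

random.seed(0)
n = 500
# Hypothetical prior survey: three candidate covariates with different
# predictive power for the outcome, and different collection costs.
x1 = [random.gauss(0, 1) for _ in range(n)]   # strong predictor
x2 = [random.gauss(0, 1) for _ in range(n)]   # weak predictor
x3 = [random.gauss(0, 1) for _ in range(n)]   # pure noise
y = [2.0 * a + 0.3 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

def r2(x, y):
    """Share of outcome variance one covariate explains (simple OLS)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov ** 2 / (statistics.variance(x) * statistics.variance(y))

candidates = {"x1": (x1, 2.0), "x2": (x2, 1.0), "x3": (x3, 1.0)}  # (data, cost)
budget = 3.0
chosen, spent = [], 0.0
# Greedy: take covariates in order of explained variance per unit cost.
for name, (x, cost) in sorted(candidates.items(),
                              key=lambda kv: -r2(kv[1][0], y) / kv[1][1]):
    if spent + cost <= budget:
        chosen.append(name)
        spent += cost
print(chosen)  # the strong predictor is picked first
```

The paper's point is that this kind of pre-selection, done before fieldwork, can either cut the survey short (lower cost, same precision) or keep its length and buy extra precision.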
This post was co-authored by Sacha Dray, Felipe Dunsch, and Marcus Holmlund.
Impact evaluation needs data, and often research teams collect this from scratch. Raw data fresh from the field is a bit like dirty laundry: it needs cleaning. Some stains are unavoidable – we all spill wine/sauce/coffee on ourselves from time to time, which is mildly frustrating but easily dismissed as a fact of life, a random occurrence. But as these occurrences become regular, we might begin to ask ourselves whether something is systematically wrong.
- In the Richard T. Ely lecture, John Campbell discusses the challenge of consumer financial regulation – he distinguishes 5 dimensions of financial ignorance many households exhibit: 1) ignorance of even the most basic financial concepts (financial illiteracy); 2) ignorance of contract terms (such as not knowing about the fees built into credit cards or when mortgage interest rates can change); 3) ignorance of financial history – relying too much on one’s own experiences and the recent past; 4) ignorance of self – many financially illiterate people are overconfident about their abilities; and 5) ignorance of incentives, strategy, and equilibrium – failure to take account of the incentives faced by other parties to transactions. Given these problems, and the limits of financial education and disclosure requirements in fixing them, he discusses what financial regulation is needed: “consumer financial regulation must confront the trade-off between the benefits of intervention to behavioral agents, and the costs to rational agents….the task for economists is to confront this trade-off explicitly”
Last week I attended a workshop on Subjective Expectations at the New York Fed. There were 24 new papers on using subjective probabilities and subjective expectations in both developed and developing country settings. I thought I’d summarize some of the things I learned, or that seemed of most interest to me and, potentially, our readers:
Subjective Expectations don’t provide a substitute for impact evaluation
I presented a new paper of mine based on the large business plan competition I conducted an impact evaluation of in Nigeria. Three years after applying for the program, I elicited expectations from the treatment group (competition winners) of what their businesses would be like had they not won, and from the control group of what their businesses would have been like had they won. The key question of interest is whether these individuals can form accurate counterfactuals. If they could, this would give us a way to measure the impacts of programs without control groups (just ask the treated for counterfactuals), and to derive individual-level treatment effects. Unfortunately, the results show that neither the treatment nor the control group can form accurate counterfactuals. Both overestimate how important the program was for their businesses: the treatment group thinks it would be doing worse had it lost than the control group actually is doing, while the control group thinks it would be doing much better had it won than the treatment group actually is doing. In a dynamic environment, where businesses are changing rapidly, it doesn’t seem that subjective expectations can offer a substitute for impact evaluation counterfactuals.
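The accuracy check behind that conclusion can be sketched with made-up numbers (the real figures are in the paper): if elicited counterfactuals were accurate, winners’ “had I lost” answers should line up with losers’ actual outcomes, and losers’ “had I won” answers with winners’ actual outcomes.

```python
# Hypothetical illustration of the counterfactual-expectations check.
# All numbers are invented for exposition; the actual data are in the paper.
treat_actual = [120, 150, 110, 140]         # winners' actual outcomes
treat_said_if_lost = [40, 55, 35, 50]       # winners' stated counterfactual
control_actual = [70, 85, 60, 80]           # losers' actual outcomes
control_said_if_won = [160, 190, 150, 180]  # losers' stated counterfactual

def mean(xs):
    return sum(xs) / len(xs)

# Accurate counterfactuals would imply these pairs are roughly equal.
print(mean(treat_said_if_lost), "vs", mean(control_actual))
print(mean(control_said_if_won), "vs", mean(treat_actual))
# Here both groups overestimate the program's impact: winners guess worse
# outcomes than losers actually have, and losers guess better outcomes
# than winners actually have - the pattern the paper finds.
```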