The primary goal of an impact evaluation is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inferences in a relatively assumption-free manner. When randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. Fortunately, though, in our “community of practice” new statistical tools are developed at a rapid rate.
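To illustrate why randomization does so much work here, a minimal simulated sketch (assuming numpy; all numbers are made up for illustration): even when an unobserved confounder drives the outcome, random assignment makes a simple difference in means an unbiased estimate of the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated data: unobserved "ability" affects the outcome, but because
# treatment is randomly assigned, it is independent of ability, and a
# simple difference in means recovers the true effect (set to 2.0 here).
ability = rng.normal(size=n)
treated = rng.integers(0, 2, size=n)               # random assignment
outcome = 1.0 + 2.0 * treated + ability + rng.normal(size=n)

ate_hat = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"estimated effect: {ate_hat:.2f}")
```

With observational (non-random) assignment, the same difference in means would absorb the ability gap between groups, which is exactly what the quasi-experimental methods above try to undo with extra assumptions.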
Tools of the Trade
Attrition is a bugbear for most impact evaluations and can bias even the best-designed experiments. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand, and Thomas Le Barbanchon describe a clever new way to deal with this problem, using information on the number of attempts it takes to get someone to respond to a survey.
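A stylized sketch of the intuition (not the authors' exact estimator, and on simulated data with made-up response rates): when the treatment group responds more readily, you can use the recorded number of contact attempts to trim treated respondents down to the "easiest to reach," until the effective response rates of the two groups match.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
treated = rng.integers(0, 2, size=n)
y = 2.0 * treated + rng.normal(size=n)             # true effect = 2.0

# Differential attrition: treated units respond more often; surveyors
# record how many attempts it took before each respondent answered.
respond_prob = np.where(treated == 1, 0.8, 0.6)
responded = rng.random(n) < respond_prob
attempts = rng.integers(1, 6, size=n)              # 1..5 attempts

control_rate = responded[treated == 0].mean()

# Keep only treated respondents who answered within k attempts, with k
# chosen so the treated effective response rate does not exceed the
# control response rate.
for k in range(1, 6):
    keep = responded & ((treated == 0) | (attempts <= k))
    if keep[treated == 1].mean() > control_rate:
        k -= 1
        break
keep = responded & ((treated == 0) | (attempts <= k))

diff = y[keep & (treated == 1)].mean() - y[keep & (treated == 0)].mean()
print(f"trimmed difference in means: {diff:.2f}")
```

In this simulation attrition is random, so trimming changes little; the payoff comes when attrition is selective, where attempt-based trimming can tighten the bounds relative to standard worst-case (Lee-type) bounds.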
Last week David linked to a virtual discussion involving Dave Giles and Steffen Pischke on the merits or demerits of the Linear Probability Model (LPM).
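One issue that comes up repeatedly in that debate is that LPM fitted values are not constrained to lie in [0,1]. A minimal simulated demonstration (assuming numpy; the data-generating process is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Binary outcome from a logistic model with a wide-ranging regressor.
x = rng.normal(scale=3.0, size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.random(n) < p).astype(float)

# Linear probability model: just OLS on the 0/1 outcome.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

outside = ((fitted < 0) | (fitted > 1)).mean()
print(f"LPM slope: {beta[1]:.3f}")
print(f"share of fitted 'probabilities' outside [0,1]: {outside:.1%}")
```

Whether this matters in practice is precisely what the discussion is about: the LPM's defenders note that average marginal effects are often well approximated anyway, especially with covariate values near their means.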
Alan Gerber and Don Green, political scientists at Yale and Columbia respectively, and authors of a large number of voting experiments, have a new textbook out titled Field Experiments: Design, Analysis, and Interpretation. This is noteworthy because, despite the massive growth in field experiments, to date there hasn’t been an accessible and modern textbook for social scientists looking to work in, or better understand, this area. The new book is very good, and I definitely recommend that anyone working in this area read at least the key chapters.
Suppose you were investigating the observed wage gap in urban China, where men are paid approximately 30% more than women. The first thing you would like to know is whether the higher wages paid to men are a result of men's greater average years of schooling and years in the labor force, or whether, instead, men are paid more even after accounting for education and experience. If the latter is the case, then the difference in wages may, at least in part, be due to labor market discrimination.
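The distinction can be made concrete by comparing the raw gap with the gap conditional on controls. A simulated sketch (assuming numpy; the coefficients are illustrative numbers, not estimates for China):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Simulated labor market: men average two more years of schooling and of
# experience, and also receive a direct 10% premium conditional on them.
male = rng.integers(0, 2, size=n)
school = 9 + 2 * male + rng.normal(size=n)
exper = 10 + 2 * male + rng.normal(scale=3, size=n)
log_wage = (0.10 * male + 0.08 * school + 0.02 * exper
            + rng.normal(scale=0.3, size=n))

# Raw gap: regress log wage on the male dummy alone.
X_raw = np.column_stack([np.ones(n), male])
raw_gap = np.linalg.lstsq(X_raw, log_wage, rcond=None)[0][1]

# Conditional gap: add schooling and experience as controls.
X_adj = np.column_stack([np.ones(n), male, school, exper])
adj_gap = np.linalg.lstsq(X_adj, log_wage, rcond=None)[0][1]

print(f"raw log-wage gap:    {raw_gap:.3f}")   # ~0.30 by construction
print(f"gap net of controls: {adj_gap:.3f}")   # ~0.10 by construction
```

Here the raw gap of roughly 0.30 log points decomposes into 0.20 explained by schooling and experience and an unexplained residual of about 0.10, the piece a Oaxaca–Blinder-style analysis would flag as potentially reflecting discrimination.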
For many years, researchers have recognized the need to correct standard-error estimates for dependence among observations within clusters. An earlier post contrasted the typical approach, the cluster-robust standard error (CRSE), with various methods to cluster-bootstrap the standard error.
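For concreteness, a minimal sketch of the cluster (pairs) bootstrap on simulated data (assuming numpy; cluster sizes and shock variances are invented): resample whole clusters with replacement, re-estimate, and take the standard deviation of the estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
G, m = 50, 40                        # 50 clusters of 40 observations each
cluster = np.repeat(np.arange(G), m)

# Cluster-level shocks in both the regressor and the error induce
# within-cluster dependence that naive OLS standard errors ignore.
w = rng.normal(size=G)[cluster]
u = rng.normal(size=G)[cluster]
x = rng.normal(size=G * m) + w
y = 1.0 + 0.5 * x + u + rng.normal(size=G * m)

def ols_slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Cluster bootstrap: draw G clusters with replacement each replication.
B = 500
boot = np.empty(B)
for b in range(B):
    draw = rng.integers(0, G, size=G)
    # Clusters are contiguous blocks of m rows, so index arithmetic works.
    idx = (draw[:, None] * m + np.arange(m)).ravel()
    boot[b] = ols_slope(x[idx], y[idx])

print(f"slope: {ols_slope(x, y):.3f}, "
      f"cluster-bootstrap SE: {boot.std(ddof=1):.3f}")
```

Because entire clusters are resampled, the within-cluster dependence structure is preserved in each bootstrap sample, which is the property that makes this a valid alternative to the CRSE.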
Random lotteries to allocate scarce slots in an oversubscribed program provide a useful tool for estimating the program's impacts. An issue that can arise in practice, however, is that there may be multiple lotteries an individual can apply for. For example,