
Tools of the Trade

Tools of the trade: recent tests of matching estimators through the evaluation of job-training programs

Jed Friedman
Of all the impact evaluation methods, the one that consistently (and justifiably) comes last in the methods courses we teach is matching. We de-emphasize this method because it requires the strongest assumptions to yield a valid estimate of causal impact. Most important is the assumption of unconfoundedness, namely that selection into treatment can be accurately captured solely as a function of observable covariates in the data.
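For readers who want to see the mechanics, here is a minimal Python sketch of propensity-score matching under unconfoundedness. It is not code from the post or from the papers it reviews: the data are simulated, all variable names are made up for illustration, and the one-to-one nearest-neighbour match is only one of many matching estimators.

```python
# Minimal sketch: nearest-neighbour matching on an estimated propensity score.
# Valid only under unconfoundedness, i.e. treatment depends on observed X alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                        # observed covariates
p_true = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
d = rng.binomial(1, p_true)                        # selection into treatment
y = 2.0 * d + x[:, 0] + rng.normal(size=n)         # true effect = 2.0

# Step 1: estimate the propensity score P(D = 1 | X)
pscore = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Step 2: match each treated unit to the control with the closest propensity score
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
matches = control[np.abs(pscore[control][None, :] - pscore[treated][:, None]).argmin(axis=1)]

# Step 3: average treated-minus-matched-control differences (the ATT)
att = np.mean(y[treated] - y[matches])
print(f"Matched ATT estimate: {att:.2f} (true effect 2.0)")
```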

Tools of the trade: when to use those sample weights

Jed Friedman

In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion that I share at times). A recent working paper by Gary Solon, Steven Haider, and Jeffrey Wooldridge aims at the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.
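To make the question concrete, here is a small sketch of the same regression run with and without sample weights. It is purely illustrative, not taken from the Solon, Haider, and Wooldridge paper: the data and weights are simulated, and passing sampling weights to WLS here simply reproduces the weighted point estimates so the two sets of coefficients can be compared.

```python
# Minimal sketch: one regression, estimated unweighted and with sample weights.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
weights = rng.uniform(0.5, 3.0, size=n)            # e.g. inverse sampling probabilities
y = 1.0 + 2.0 * x + rng.normal(scale=1 + 0.5 * np.abs(x), size=n)

X = sm.add_constant(x)
unweighted = sm.OLS(y, X).fit(cov_type="HC1")      # robust standard errors
weighted = sm.WLS(y, X, weights=weights).fit(cov_type="HC1")

print("Unweighted slope:", round(unweighted.params[1], 3))
print("Weighted slope:  ", round(weighted.params[1], 3))
```

Whether the weighted or unweighted estimate is the right one to report depends on the estimand and the sampling design, which is exactly the question the paper works through.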

“Oops! Did I just ruin this impact evaluation?” Top 5 mistakes and how the new Impact Evaluation Toolkit can help.

Christel Vermeersch

On October 3rd, I sent out a survey asking people what was the biggest, most embarrassing, dramatic, funny, or other oops mistake they made in an impact evaluation. Within a few hours, a former manager came into my office to warn me: “Christel, I tried this 10 years ago, and I got exactly two responses.” 

Tools of the Trade: Intra-cluster correlations

David McKenzie

In clustered randomized experiments, random assignment occurs at the group level, with multiple units observed within each group. For example, education interventions might be assigned at the school level, with outcomes measured at the student level, or microfinance interventions might be assigned at the savings group level, with outcomes measured for individual clients.
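As a quick illustration of what the intra-cluster correlation measures, here is a small Python sketch that estimates it from one-way ANOVA mean squares on simulated clustered data and reports the implied design effect. The data, cluster sizes, and names are all made up; the post itself works with real intra-cluster correlations from field data.

```python
# Minimal sketch: ANOVA-style estimate of the intra-cluster correlation (ICC)
# and the implied design effect, on simulated clustered data.
import numpy as np

rng = np.random.default_rng(2)
n_clusters, m = 50, 20                             # 50 clusters of 20 units each
cluster_effect = rng.normal(scale=0.5, size=n_clusters)
y = (cluster_effect[:, None] + rng.normal(scale=1.0, size=(n_clusters, m))).ravel()
cluster = np.repeat(np.arange(n_clusters), m)

grand_mean = y.mean()
cluster_means = np.array([y[cluster == c].mean() for c in range(n_clusters)])

# One-way ANOVA mean squares: between-cluster and within-cluster
msb = m * np.sum((cluster_means - grand_mean) ** 2) / (n_clusters - 1)
msw = np.sum((y - cluster_means[cluster]) ** 2) / (n_clusters * (m - 1))

icc = (msb - msw) / (msb + (m - 1) * msw)
design_effect = 1 + (m - 1) * icc                  # variance inflation from clustering
print(f"ICC ≈ {icc:.3f}, design effect ≈ {design_effect:.2f}")
```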

Tools of the Trade: A quick adjustment for multiple hypothesis testing

David McKenzie

As our impact evaluations broaden to consider more and more possible outcomes of economic interventions (an extreme example being the 334 unique outcome variables considered by Casey et al. in their CDD evaluation) and increasingly investigate the channels of impact through subgroup heterogeneity analysis, the issue of multiple hypothesis testing is gaining prominence.
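For context, here is a minimal sketch of standard family-wise and false-discovery-rate adjustments applied to a set of p-values. It is illustrative only, using hypothetical p-values, and is not necessarily the specific quick adjustment the post itself describes.

```python
# Minimal sketch: standard multiple-testing adjustments for a set of p-values.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.034, 0.049, 0.21, 0.45]   # hypothetical raw p-values

for method in ("bonferroni", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], "reject:", list(reject))
```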

Tools of the trade: The covariate balanced propensity score

Jed Friedman

The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inference in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our “community of practice,” new statistical tools are developed at a rapid rate.
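To see what the covariate balanced propensity score is trying to achieve, here is a rough Python sketch of the balance diagnostic it targets: standardized covariate differences before and after inverse-propensity weighting. The weighting here uses an ordinary logit on simulated data, not the CBPS estimator itself, and all names are made up for illustration; CBPS builds the balance conditions directly into the estimation of the score.

```python
# Minimal sketch: covariate balance before and after inverse-propensity weighting.
# The score here is an ordinary logit, not the CBPS estimator; CBPS targets
# exactly this kind of balance as part of estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=(n, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.8, -0.5, 0.3]))))

pscore = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
w = np.where(d == 1, 1 / pscore, 1 / (1 - pscore))     # IPW weights

def std_diff(col, weights):
    """Weighted standardized mean difference between treated and control."""
    mt = np.average(col[d == 1], weights=weights[d == 1])
    mc = np.average(col[d == 0], weights=weights[d == 0])
    pooled_sd = np.sqrt((col[d == 1].var() + col[d == 0].var()) / 2)
    return (mt - mc) / pooled_sd

for j in range(x.shape[1]):
    raw = std_diff(x[:, j], np.ones(n))
    weighted = std_diff(x[:, j], w)
    print(f"covariate {j}: raw diff {raw:+.3f}, weighted diff {weighted:+.3f}")
```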

Help for attrition is just a phone call away – a new bounding approach to help deal with non-response

David McKenzie

Attrition is a bugbear for most impact evaluations, and can cause even the best-designed experiments to be subject to potential bias. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
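To convey the intuition only, here is a heavily simplified Python sketch: in the arm with the higher response rate, keep the respondents reached most quickly until its effective response rate matches the other arm's, then compare means. This is an illustration of the idea of attempt-based trimming on simulated data with made-up names, not the paper's actual bounding estimator, which delivers formal bounds rather than a single trimmed estimate.

```python
# Highly simplified sketch of attempt-based trimming (illustration only,
# not the Behaghel et al. bounding estimator).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 4000
df = pd.DataFrame({
    "treat": rng.binomial(1, 0.5, n),
    "attempts": rng.integers(1, 6, n),              # number of calls needed to reach
})
df["responded"] = rng.binomial(1, np.where(df.treat == 1, 0.8, 0.7))
df["y"] = 1.0 + 0.5 * df.treat + rng.normal(size=n)
df.loc[df.responded == 0, "y"] = np.nan            # outcome unobserved for non-respondents

rates = df.groupby("treat")["responded"].mean()
low_arm, target = rates.idxmin(), rates.min()

# Trim the high-response arm: keep its quickest-reached respondents until its
# effective response rate matches the low-response arm's.
trimmed = []
for arm, g in df.groupby("treat"):
    resp = g[g.responded == 1].sort_values("attempts")
    keep = len(resp) if arm == low_arm else int(round(target * len(g)))
    trimmed.append(resp.head(keep))
trimmed = pd.concat(trimmed)

diff = trimmed.groupby("treat")["y"].mean().diff().iloc[-1]
print(f"Treatment-control difference after attempt-based trimming: {diff:.2f}")
```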

Gerber and Green’s new textbook on Field Experiments – should you read it, and what should they add for version 2.0?

David McKenzie

Alan Gerber and Don Green, political scientists at Yale and Columbia respectively, and authors of a large number of voting experiments, have a new textbook out titled Field Experiments: Design, Analysis, and Interpretation. This is noteworthy because, despite the massive growth in field experiments, to date there hasn’t been an accessible and modern textbook for social scientists looking to work in, or better understand, this area. The new book is very good, and I definitely recommend that anyone working in this area read at least the key chapters.
