
Tools of the Trade

Tools of the Trade: Intra-cluster correlations

By David McKenzie

In clustered randomized experiments, random assignment occurs at the group level, with multiple units observed within each group. For example, education interventions might be assigned at the school level, with outcomes measured at the student level, or microfinance interventions might be assigned at the savings group level, with outcomes measured for individual clients.
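The degree to which outcomes are correlated within such groups is summarized by the intra-cluster correlation. As an illustration (this is not code from the post), the classic one-way ANOVA estimator for balanced clusters can be sketched as:

```python
import numpy as np

def intra_cluster_correlation(y, cluster):
    """One-way ANOVA estimate of the intra-cluster correlation.

    Assumes balanced clusters (same number of units per cluster).
    """
    y = np.asarray(y, dtype=float)
    cluster = np.asarray(cluster)
    groups = [y[cluster == g] for g in np.unique(cluster)]
    k = len(groups)        # number of clusters
    m = len(groups[0])     # units per cluster (assumed equal)
    grand = y.mean()
    # Between-cluster and within-cluster mean squares
    msb = m * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```

For data where all variation is between clusters (units within a cluster share the same outcome), the estimator returns 1; when cluster means are identical, it returns a value at or below zero.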

Tools of the Trade: A quick adjustment for multiple hypothesis testing

By David McKenzie

As our impact evaluations broaden to consider more and more possible outcomes of economic interventions (an extreme example being the 334 unique outcome variables considered by Casey et al. in their CDD evaluation), and as we increasingly investigate the channels of impact through subgroup heterogeneity analysis, the issue of multiple hypothesis testing is gaining prominence.
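As an illustration of one quick, standard adjustment, here is a minimal sketch of Holm's step-down procedure, which controls the family-wise error rate across a set of tests. (This is offered for flavor only; it is not necessarily the adjustment the post itself discusses.)

```python
def holm_adjust(pvalues):
    """Return Holm step-down adjusted p-values.

    Sort p-values ascending; the j-th smallest is multiplied by
    (m - j + 1), with a running maximum to keep the adjusted
    values monotone, capped at 1.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

With 334 outcomes, a raw p-value would need to be roughly 0.05/334 to survive this kind of family-wise correction, which is why less conservative alternatives (such as false discovery rate control) are often preferred in practice.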

Tools of the trade: The covariate balanced propensity score

By Jed Friedman

The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inferences in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our “community of practice”, new statistical tools are developed at a rapid rate.

Help for attrition is just a phone call away – a new bounding approach to help deal with non-response

By David McKenzie

Attrition is a bugbear for most impact evaluations, and can cause even the best designed experiments to be subject to potential bias. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
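To give a flavor of the idea, here is a stylized sketch (not the paper's full bounding procedure, and all names are illustrative): in the arm with the higher response rate, drop the hardest-to-reach respondents, those who took the most call attempts, until the effective response rates match, and then compare means among those retained.

```python
import numpy as np

def attempt_trimmed_difference(y_treat, attempts_treat, n_treat, y_ctrl, n_ctrl):
    """Stylized sketch: equalize response rates across arms using call attempts.

    y_treat, attempts_treat : outcomes and number of call attempts for
                              treatment-arm respondents
    n_treat, n_ctrl         : total assigned to each arm
    y_ctrl                  : outcomes for control-arm respondents
    Assumes the treatment arm has the higher response rate.
    """
    y_treat = np.asarray(y_treat, dtype=float)
    attempts_treat = np.asarray(attempts_treat)
    rate_ctrl = len(y_ctrl) / n_ctrl
    keep = int(round(rate_ctrl * n_treat))          # match control response rate
    order = np.argsort(attempts_treat, kind="stable")  # easiest-to-reach first
    trimmed = y_treat[order[:keep]]                 # drop hardest-to-reach
    return trimmed.mean() - np.mean(y_ctrl)
```

The actual Behaghel et al. approach delivers bounds rather than a single point estimate (trimming within the marginal attempt group from both tails of the outcome distribution); this sketch conveys only the core trimming logic.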

Gerber and Green’s new textbook on Field Experiments – should you read it, and what should they add for version 2.0?

By David McKenzie

Alan Gerber and Don Green, political scientists at Yale and Columbia respectively, and authors of a large number of voting experiments, have a new textbook out titled Field Experiments: Design, Analysis, and Interpretation. This is noteworthy because, despite the massive growth in field experiments, to date there hasn’t been an accessible and modern textbook for social scientists looking to work in, or better understand, this area. The new book is very good, and I definitely recommend that anyone working in this area read at least the key chapters.

Tools of the Trade: Beyond mean decompositions (with an application to the gender wage gap in China)

By Jed Friedman

Suppose you were investigating the observed wage gap in urban China, where men are paid approximately 30% more than women. The first thing you would like to know is whether the higher wages paid to men are a result of the greater average years of schooling and years in the labor force that men have, or whether, instead, men are paid more even after accounting for education and experience. If the latter is the case, then the difference in wages may, at least in part, be due to labor market discrimination.
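That reasoning is the logic of the classic mean (Oaxaca-Blinder) decomposition, which this post then moves beyond. A minimal sketch of the two-fold mean decomposition, assuming design matrices that include a constant column, with one group's coefficients as the reference wage structure:

```python
import numpy as np

def oaxaca_blinder(X_m, y_m, X_f, y_f):
    """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap.

    Uses the first group's coefficients as the reference structure.
    X matrices must include a constant column, so the explained and
    unexplained parts sum exactly to the raw mean gap.
    """
    X_m = np.asarray(X_m, float); y_m = np.asarray(y_m, float)
    X_f = np.asarray(X_f, float); y_f = np.asarray(y_f, float)
    beta_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)
    beta_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)
    xbar_m, xbar_f = X_m.mean(axis=0), X_f.mean(axis=0)
    explained = (xbar_m - xbar_f) @ beta_m    # gap due to covariate differences
    unexplained = xbar_f @ (beta_m - beta_f)  # gap due to coefficient differences
    return explained, unexplained
```

In the wage-gap application, a large unexplained component is the part of the gap that survives after accounting for education and experience, which is the portion potentially attributable to discrimination.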

Tools of the Trade: Getting those standard errors correct in small sample cluster studies

By Jed Friedman

Some of the earliest posts on this blog concerned the inferential challenges of cluster randomized trials when clusters are few in number (see here and here for two examples of discussion). Today’s post continues this theme with a focus on better practice in the treatment of standard errors.
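For concreteness, here is a hedged sketch (not code from the post) of the standard cluster-robust (CR1) variance estimator for OLS, including the small-sample correction G/(G-1) × (N-1)/(N-K), which matters precisely when clusters are few. This is the usual baseline that the better practices discussed here seek to improve on.

```python
import numpy as np

def cluster_robust_ols(X, y, cluster):
    """OLS coefficients with cluster-robust (CR1) standard errors.

    Sandwich estimator: (X'X)^-1 (sum_g X_g' u_g u_g' X_g) (X'X)^-1,
    scaled by the small-sample correction G/(G-1) * (N-1)/(N-K).
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    cluster = np.asarray(cluster)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                     # residuals
    groups = np.unique(cluster)
    G = len(groups)
    meat = np.zeros((k, k))
    for g in groups:
        idx = cluster == g
        score = X[idx].T @ u[idx]        # cluster-level score
        meat += np.outer(score, score)
    c = G / (G - 1) * (n - 1) / (n - k)  # CR1 small-sample correction
    V = c * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))
```

With very few clusters, even this correction can leave tests badly sized, which is exactly the situation where the refinements discussed in this post (and alternatives such as the wild cluster bootstrap) come into play.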