- Guido Imbens has a new paper on how to do matching in practice, offering recommendations for empirical practice based on his reading of the literature. Some key takeaways: use normalized differences rather than t-tests to assess overlap in the covariate distributions; trimming the sample to drop observations with propensity scores close to 1 or 0 also improves robustness, whether you use a logit or a probit to estimate the propensity score; he proposes a step-wise procedure for choosing what goes into the propensity score (especially which second-order terms to include); and he suggests placebo tests to assess the unconfoundedness assumption. He works through three empirical examples to illustrate doing this in practice.
- Courtesy of Chris Blattman, I see Guido also has a guide to instrumental variables out.
- Rachel Glennerster on comparing cost-effectiveness across contexts.
- The causal impact of happiness on productivity? Watching a comedy or eating chocolate made people more productive on a subsequent task.
- Michael Trucano summarizes several new IDB evaluations on the use of technology in schools in Latin America.
- Markus’ Gender Innovation Lab and the One Campaign have a new (glossy, easy to read) report out on the Gender Productivity Gap in Agriculture in Africa and what can be done about it. This blog post summarizes some key findings: “Yet after accounting for regional differences and the fact that women tend to farm smaller plots than men, one arrives at a much starker conclusion: significant gender gaps range from 23% in Tanzania to a strikingly large 66% in Niger.” Their discussion of recommended policy actions at the end draws on a body of impact evaluations, and notes the state of evidence for each recommendation.
- We are pleased by the discussion this week of our series of posts on the ethics of randomization. A comment I found particularly useful was Heather Lanthorn's, on the ethics of pipeline designs: it questions whether you should promise the control group that they will get the program later, since what happens if your evaluation reveals the program isn't very good (or is even harmful)?
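As an aside on the first link, two of the recommendations there are easy to operationalize. Below is a minimal sketch (not code from the paper) of the normalized-difference balance diagnostic and a propensity-score trimming rule; the 0.1/0.9 cutoffs are an illustrative rule of thumb, not a prescription from Imbens.

```python
import numpy as np

def normalized_difference(x_treated, x_control):
    """Normalized difference in covariate means: (mean1 - mean0) divided by
    the square root of the average of the two sample variances. Unlike a
    t-statistic, it does not mechanically grow with the sample size, which
    is why it is the preferred diagnostic for assessing overlap."""
    m1, m0 = np.mean(x_treated), np.mean(x_control)
    s1, s0 = np.var(x_treated, ddof=1), np.var(x_control, ddof=1)
    return (m1 - m0) / np.sqrt((s1 + s0) / 2.0)

def trim_by_propensity(pscores, lo=0.1, hi=0.9):
    """Boolean mask keeping observations whose estimated propensity score
    is bounded away from 0 and 1. The default cutoffs here are a common
    rule of thumb chosen for illustration."""
    pscores = np.asarray(pscores)
    return (pscores > lo) & (pscores < hi)
```

For example, `normalized_difference` applied to two identical samples returns 0 (perfect balance), and `trim_by_propensity([0.05, 0.5, 0.95])` keeps only the middle observation.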