Attrition is a bugbear for most impact evaluations and can bias even the best-designed experiments. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
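To give a flavor of the idea, here is a minimal simulation sketch (not the authors' actual estimator) of trimming on survey effort: when one arm has a higher response rate, keep only its respondents reached with the fewest call attempts until the retained shares match, under the assumption that easier-to-reach respondents in the high-response arm are comparable to all respondents in the low-response arm. All variable names and the data-generating process below are illustrative.

```python
import numpy as np

def attempts_trimmed_effect(y, treat, responded, attempts):
    """Difference in means after trimming the higher-response arm by survey effort.

    Identifying assumption (hypothetical, for illustration): respondents reached
    with fewer attempts in the higher-response arm are comparable to the full
    set of respondents in the lower-response arm.
    """
    p_t = responded[treat].mean()       # response rate, treatment arm
    p_c = responded[~treat].mean()      # response rate, control arm
    hi = treat if p_t >= p_c else ~treat  # arm to be trimmed
    lo = ~hi
    # number of hi-arm respondents to keep so retained shares match across arms
    k = int(round(responded[lo].mean() * hi.sum()))
    idx = np.flatnonzero(hi & responded)
    # keep the k respondents who were easiest to reach (fewest attempts)
    keep = idx[np.argsort(attempts[idx], kind="stable")[:k]]
    return y[keep].mean() - y[lo & responded].mean()

# Simulated experiment with differential attrition (purely random here,
# so the trimmed estimate should recover the true effect of 0.5)
rng = np.random.default_rng(0)
n = 50_000
treat = rng.random(n) < 0.5
y = rng.normal(loc=np.where(treat, 1.5, 1.0))           # true effect = 0.5
attempts = rng.integers(1, 6, n)                        # 1-5 contact attempts
responded = rng.random(n) < np.where(treat, 0.8, 0.6)   # 80% vs 60% response
est = attempts_trimmed_effect(y, treat, responded, attempts)
```

Because attrition is random in this simulation, the trimmed estimate lands close to the true 0.5; the point of the effort-based trimming is that it remains informative when attrition is selective.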
· In Science this week (gated), Katz and Kling add some co-authors and follow up on their famous Econometrica paper on the Moving to Opportunity program to examine impacts 10-15 years after moving from a high-poverty to a low-poverty neighborhood.
Reporting findings from studies in economics is changing, and likely for the better. It is hard not to credit at least some of this improvement to the proliferation of RCTs in the field. As issues of publication bias, internal and external validity, ex-ante registration of protocols and primary data analysis plans, open data, etc. are being debated, the way we report research findings is evolving with them.
(with contributions from Will Martin)
Last week I blogged about a paper that David wrote with Chris Woodruff which takes stock of the existing evidence on the impact of business training. The bottom line was that we still don’t know much. Part of the reason is that these types of evaluations are not straightforward to do – they have some pitfalls that you don’t always find in your garden-variety impact evaluation. So to
· Essential reading this week: The Boston Review has an excellent feature on early interventions to promote social mobility, with the lead article by Jim Heckman. I never realized quite how small the samples of the famous early childhood studies are – treatment group of 58 kids in the Perry Preschool program and 65 in the control group.
With the increasing use of randomized and natural experiments in economics to identify causal program effects, the layperson can easily be confused about the population for which a parameter is being estimated. Just this morning, giving a presentation to a non-technical crowd, I could not help but go over the distinction between the average treatment effect (ATE) and the local average treatment effect (LATE). The questions these two estimands address are related yet quite different, in a way that matters not only to academics but equally to policymakers.
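For readers who like to see the distinction in symbols: in standard potential-outcomes notation, with Y(1) and Y(0) the outcomes with and without treatment and D(z) treatment take-up as a function of the instrument or offer z, the two estimands can be written as:

```latex
\text{ATE}  = \mathbb{E}\left[\,Y(1) - Y(0)\,\right]
\qquad
\text{LATE} = \mathbb{E}\left[\,Y(1) - Y(0) \mid D(1) > D(0)\,\right]
```

The ATE averages over the whole population of interest, while the LATE averages only over "compliers" – those whose treatment status is actually moved by the instrument – which is why the two can give quite different answers to a policymaker's question.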
When Development Impact shut down for August, I had ambitious goals. Unfortunately I didn’t meet them all (why does that always happen?). I did, however, manage to madly review almost 60 proposals for the funding of prospective impact evaluations financed by various organizations and donors. Many of these proposals were excellent (though not all could be funded), and it was surprisingly informative to read so many in such a condensed time.
What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers comparable, which helps us add up the totality of the lessons. What’s more, as David and Chris go through the evidence, they come up with a lot of interesting