I was recently talking with one of my younger colleagues, and she was lamenting something that was going wrong in an impact evaluation she was working on. She was thinking of throwing in the towel and shutting down the work. This reminded me of the horrible feeling in the pit of my stomach whenever something went wrong in my early days of doing impact evaluation (and research more generally). Now, of course, I am bald…
When done well, randomized experiments at least provide internal validity: they tell us the average impact of a particular intervention in a particular location, with a particular sample, at a particular point in time. Of course, we would then like to use these results to predict how the same intervention would work in other locations, with other groups, or in other time periods.
- external validity
To deceive or not to deceive? David’s last post about Beaman and Magruder’s experiment, in which there is a small and seemingly harmless deception, got me thinking about this perennially uncomfortable issue. David claims that this is now an increasingly popular strategy, which makes me somewhat worried about the future of experiments in economics.
One of the frustrations facing job seekers worldwide, but especially in many developing countries, is how much finding a job depends on who you know rather than what you know. For example, in work I’ve done with small enterprises in Sri Lanka, less than 2 percent of employers openly advertised the position for which they last hired – the most common ways of finding a worker being to ask friends, neighbors, or family members for suggestions. Clearly, networks matter for finding jobs.
An interesting, recently revised working paper by Duflo, Dupas and Kremer looks at the effects of providing school uniforms, of teacher training on HIV education, and of the two combined. The paper is useful along a number of dimensions: it gives us some sense of the longer-term effects of these programs, its methodology is interesting (and informative), and, finally, the results are pretty intriguing and definitely food for thought.
Survey responses to questions on incomes (and other potentially sensitive topics) are likely to contain errors, which could go in either direction and be found at all levels of income. There is probably also non-random selection in who agrees to be interviewed, implying that the weights used to "gross up" the sample estimates to the population are wrong too.
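A minimal simulation (with purely hypothetical numbers, not drawn from any actual survey) can illustrate the weights problem: if richer households are less likely to agree to be interviewed, a design weight that assumes equal response probabilities grosses the sample up to a biased estimate of population income.

```python
import random

random.seed(1)

# Hypothetical population of 100,000 households with a skewed income distribution.
N = 100_000
population = [random.expovariate(1 / 20_000) for _ in range(N)]

# Non-random selection: the probability of responding falls with income
# (the functional form is an illustrative assumption, not an estimate).
def responds(income):
    return random.random() < max(0.1, 0.9 - income / 100_000)

sample = [y for y in population if responds(y)]

# A naive design weight assumes every household responded with equal
# probability, so each respondent "represents" N / len(sample) households.
naive_total = sum(sample) * (N / len(sample))
true_total = sum(population)

print(f"true mean income:         {true_total / N:,.0f}")
print(f"naive grossed-up mean:    {naive_total / N:,.0f}")
```

Because the well-off respond less often, the weighted sample mean understates the population mean; correcting this requires modeling response probabilities, which is exactly where getting the weights wrong creeps in.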
So I come back from vacation to find out that I was part of a randomized experiment in my absence. No, this had nothing to do with the wonders of airline travel in Europe (where, unlike their American brethren, the airlines don’t add that frisson of excitement through random cancellations), but rather with two of our co-bloggers trying to figure out whether the blog actually makes people recognize me and Jed more (here are links to parts
One of the interesting discussions I had this past week was with a World Bank consultant trying to think about how to evaluate the impact of large-scale infrastructure projects. Forming a counterfactual is very difficult in many of these cases, so the question is what one can do instead. Since I get asked similar questions reasonably regularly, I thought I’d share my thoughts on this issue and see whether anyone has good examples to share.
- evaluation methods