Survey responses to questions on incomes (and other potentially sensitive topics) are likely to contain errors, which could go in either direction and be found at all levels of income. There is probably also non-random selection in terms of who agrees to be interviewed, implying that we get the weights wrong too (as used to "gross-up" the sample estimates to the population).
So I come back from vacation to find out that I was part of a randomized experiment in my absence. No, this had nothing to do with the wonders of airline travel in Europe (which, unlike their American brethren, don't add that frisson of excitement through random cancellations), but rather two of our co-bloggers trying to figure out whether the blog actually makes people recognize me and Jed more (here are links to parts
One of the interesting discussions I had this last week was with a World Bank consultant trying to think about how to evaluate the impact of large-scale infrastructure projects. Forming a counterfactual is very difficult in many of these cases, and so the question is what one could think of doing. Since I get asked similar types of questions reasonably regularly, I thought I’d share my thoughts on this issue, and see whether anyone has good examples to share.
- Evaluation methods
The past couple of weeks have been unusually busy for August, but also fun. While Markus has been on vacation, Jed has produced a lot of interesting (and highly read) posts, and David and I ran a three-part series on the "Impact of Economics Blogs." The latter has been instructive. In particular, we realized -- mainly through feedback from readers -- that blogging about a paper in parts over time may be more effective in disseminating its messages and findings than the traditional one-post, one-link approach.
The types of data available to development economists are proliferating – multi-topic household surveys are almost passé today, but 25 years ago it was a rare privilege to be able to correlate economic measures of the household with other indicators such as health or community infrastructure. Not only are surveys more sophisticated, and arguably contain less error thanks to the use of field-based computers, but the digital revolution has multiplied the types of data at our beck and call.
The latest issue of the Journal of Economic Perspectives (all content openly available online), has a symposium on the use of field experiments in economics. We’ve discussed or linked to posts on three of the four papers in previous blog posts: A paper on mechanism experiments by Ludwig, Kling and Mullainathan; a paper on the
- Research ethics
The New York Times political blog has just posted an interview between David Leonhardt and Sasha Issenberg about Issenberg’s forthcoming book on Presidential candidate Rick Perry’s campaign method. Notable is the use of randomized experiments in campaigning:
The World Bank Group provided $4.2 billion in support to the ICT (information and communications technology) sector over 2003-2010, including 410 non-lending activities for ICT sector reform and capacity building in 91 countries. The World Bank’s Independent Evaluation Group (IEG) had the unenviable task of trying to answer whether all this activity has been relevant and effective.
One of the more common requests I receive from colleagues in the World Bank’s operational units is support on evaluating the impact of a large cash transfer program, usually carried out by the national government. Although our government counterparts are much more willing these days to consider a randomized promotion impact evaluation (IE) design, this is still often not possible. This could be, for example, because it has already been announced that the program is going to be implemented in certain areas starting on a certain date.
The increased use of randomized experiments in development economics has its enthusiastic champions and its vociferous critics. However, much of the argument seems to be battling against straw men, with at times an unwillingness to concede that the other side has a point. During our surveys of assistant professors and Ph.D.