
evaluation methods

Evaluate before you leap -- volunteers needed!

Markus Goldstein

It’s been a year since we started the Development Impact blog, and I thought I would use the one-year anniversary to focus on one of the classic papers in impact evaluation. This paper (gated version here, ungated version here) is by Gordon Smith and Jill Pell and appeared in the BMJ back in 2003.

Can we trust shoestring evaluations?

Martin Ravallion

There is much demand from practitioners for “shoestring methods” of impact evaluation—sometimes called “quick and dirty methods.” These methods try to bypass some costly element in the typical impact evaluation. Probably the thing that practitioners would most like to avoid is the need for baseline data collected prior to the intervention. Imagine how much more we could learn about development impact if we did not need baseline data!
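To see what is at stake in skipping the baseline, here is a minimal simulated sketch (all numbers are hypothetical, chosen only for illustration). It compares a "shoestring" post-intervention difference in means against a difference-in-differences estimate that uses the baseline we were hoping to avoid. When the treated group starts out better off than the comparison group, the post-only comparison absorbs that baseline gap into the estimated impact:

```python
import random

random.seed(0)
TRUE_EFFECT = 2.0  # the impact we would like to recover
N = 10_000

# Hypothetical setup: treated villages start out better off at baseline
# (mean outcome 5 vs 3), and both groups share a common trend of +1.
treated_pre  = [random.gauss(5, 1) for _ in range(N)]
control_pre  = [random.gauss(3, 1) for _ in range(N)]
treated_post = [y + 1 + TRUE_EFFECT + random.gauss(0, 1) for y in treated_pre]
control_post = [y + 1 + random.gauss(0, 1) for y in control_pre]

def mean(xs):
    return sum(xs) / len(xs)

# "Shoestring" estimate: post-period difference only, no baseline needed.
# It conflates the program effect with the pre-existing gap between groups.
single_diff = mean(treated_post) - mean(control_post)

# Difference-in-differences: requires exactly the baseline data we skipped,
# but nets out the pre-existing gap under the common-trend assumption.
did = (mean(treated_post) - mean(treated_pre)) - (
    mean(control_post) - mean(control_pre)
)

print(f"single difference: {single_diff:.2f}")  # true effect + baseline gap
print(f"diff-in-diff:      {did:.2f}")          # close to the true effect
```

The point is not that double differencing is always right (its common-trend assumption can fail too), but that the post-only comparison has no way even in principle to separate impact from initial differences between the groups.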

Strategies for Evaluating the Impact of Big Infrastructure Projects: How can we tell if one big thing works?

David McKenzie

One of the interesting discussions I had this past week was with a World Bank consultant trying to think about how to evaluate the impact of large-scale infrastructure projects. Forming a counterfactual is very difficult in many of these cases, so the question is what one could do instead. Since I get asked similar questions reasonably regularly, I thought I’d share my thoughts on this issue and see whether anyone has good examples to share.