
Designing impact evaluations

Are we over-investing in baselines?

By Alaka Holla

 
When I was in second grade at a Catholic school, we had to buy the pencils and pens we used from a supply closet. One day I felt like getting new pencils, so I stood in line when the supply closet was open and asked for two. Before reaching for the pencils, the person who operated the supply closet, Sister Evangelista, told me a story about her time volunteering in Haiti, how the children she taught there used to scramble through garbage heaps looking for discarded pieces of wood, charcoal, and wire so that they could make their own pencils. I left the closet that day without any pencils and with a permanent sense of guilt when buying new school supplies.
 
I now feel the same way about baseline data. Most of the variables I have ever collected – maybe even 80 percent – sit unused, while only a small minority make it to any tables or graphs. Given the length of most surveys in low- and middle-income countries, I suspect that I am not alone in this. I know that baselines can be useful for evaluations and beyond (see this blog by David McKenzie on whether balance tests are necessary for evaluations and this one by Dave Evans for suggestions and examples of how baseline data can be better used). But do we really need to spend so much time and resources on them?  
 

Informing policy with research that is more than the sum of the parts

By Markus Goldstein
Coauthored with Doug Parkerson

A couple of years ago, an influential paper in Science by Banerjee and coauthors looked at the impact of poverty graduation programs across six countries. At the time (and probably since), this was the largest effort to look at the same(ish) intervention in multiple contexts at once, arguably solving the replication problem and proving external validity in one fell swoop.
 

Seeking nimble plumbers

By Alaka Holla
Sometimes (maybe too many times), I come across an evaluation with middling or null results accompanied by a disclaimer that implementation didn’t go as planned and that results should be interpreted in that light. What can we learn from these evaluations? Would results have been better had implementation gone well? Or even if implementation had gone just fine, was the intervention the right solution for the problem? It’s hard to say if we think of program success as the product of both implementation and a program that is right for the problem.
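To make that interpretation problem concrete, here is a minimal, hypothetical simulation (not from the post) in which the observed intent-to-treat effect is roughly implementation fidelity times the program's efficacy under full implementation. The fidelity and efficacy values are made up purely for illustration; the point is only that two very different stories can produce the same middling estimate.

```python
# Hypothetical sketch: observed (intent-to-treat) effect ~ fidelity * efficacy,
# so a middling result cannot by itself tell us whether implementation or
# program design fell short. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated individuals per arm

def simulated_itt(fidelity: float, efficacy: float) -> float:
    """Difference in mean outcomes between treatment and control arms."""
    control = rng.normal(0.0, 1.0, n)
    # Only a `fidelity` share of the treatment arm actually receives the program,
    # and those who do gain `efficacy` on average.
    treated = rng.normal(0.0, 1.0, n) + efficacy * rng.binomial(1, fidelity, n)
    return treated.mean() - control.mean()

# Two very different stories yield nearly identical estimates (~0.18):
print(simulated_itt(fidelity=0.9, efficacy=0.2))  # well implemented, weak program
print(simulated_itt(fidelity=0.3, efficacy=0.6))  # poorly implemented, strong program
```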

The program costs of impact evaluation

By Markus Goldstein
I was at a workshop last week where I was moderating a discussion of the practicalities of doing impact evaluations in conflict and post-conflict settings. One of the program-implementation folks made clear that working with the impact evaluation was a strain; as she put it, this "was pulling our field staff through a keyhole". That got me thinking about the costs that we, as impact evaluators, can impose on a program.
 

Tips for writing impact evaluation grant proposals

By David McKenzie

Recently I’ve done more than my usual amount of reviewing of grant proposals for impact evaluation work, both for World Bank research funds and for several outside funders. Many of these have been very good, but several common issues keep cropping up, so I thought I’d share some pet peeves/tips/suggestions for people preparing these types of proposals.