Is in danger of being messed up. Here is why: there are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: the program continues or it doesn't. Learning is more amorphous – we do impact evaluation to see if a project works, but we try to build in as many ways to understand the results as possible, perhaps with a couple of treatment arms so we can see which approach works better. In learning evaluations, the real failure is a lack of statistical power, more so than the program working or
Markus Goldstein's blog
Last week I blogged about a paper that David wrote with Chris Woodruff, which takes stock of the existing evidence on the impact of business training. The bottom line was that we still don't know much. Part of the reason is that these types of evaluations are not straightforward to do – they have pitfalls that you don't always find in your garden-variety impact evaluation. So to
What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers somewhat comparable, and this helps us add up the totality of the lessons. What's more, as David and Chris go through the evidence, they come up with a lot of interesting
In honor of Labor Day here in the US, I want to talk about a recent nutrition paper by Emla Fitzsimons, Bansi Malde, Alice Mesnard and Marcos Vera-Hernandez. This paper, “Household Responses to Information on Child Nutrition,” is one with a twist – they look not only at nutrition outcomes, but they also try to figure out where these might be coming from – and hence also look at labor supply.
Coauthored with Raka Banerjee and Talip Kilic
So if you missed it, Part I of this two-part blog post outlines the main reasons you should consider incorporating Computer-Assisted Personal Interviewing (CAPI) into your survey efforts. We'll now try to even things out by going over the many pitfalls to watch out for when switching to CAPI.
Recently I was spending some time with a survey firm in Tanzania, pre-testing a survey. I got to talking with one of the folks working at the firm about how they compensated their enumerators. He made it clear that they follow a fixed efficiency wage (i.e.
As part of a new series looking at how institutions approach impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head of the Research and Evidence Division. For Part I of this series, see yesterday's post. Today we focus on DFID's funding for research and impact evaluation.
As part of a new series looking at how institutions approach impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head of the Research and Evidence Division.
I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these have really brought home the potential and pitfalls of working with programs' monitoring data.
In many evaluations, the promise of monitoring data is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program's clients). In most cases, however, a combination of monitoring and survey data is required.
Imagine you are running the recruitment process for a government agency and you are trying to attract high-quality, public-service-oriented staff to work in difficult agencies. How should you do this? If you offer higher wages, maybe you will get higher-quality folks, but will you lose public service motivation? And how do you get these high-quality folks to go to remote and dangerous areas?