Markus Goldstein's blog

The Tao of Impact Evaluation

Is in danger of being messed up. Here is why: there are two fundamental reasons for doing impact evaluation, learning and judgment. Judgment is simple – thumbs up, thumbs down: the program continues or it doesn't. Learning is more amorphous – we do impact evaluation to see if a project works, but we try to build in as many ways to understand the results as possible, maybe do a couple of treatment arms so we can see what works better than what. In learning evaluations, real failure is a lack of statistical power, more so than the program working or…
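Since the point about learning evaluations turns on statistical power, here is a minimal sketch of a standard ex-ante power calculation for a two-arm design: the smallest true effect the sample can reliably detect. The sample sizes and outcome standard deviation below are hypothetical, not taken from any study mentioned in the post.

    from scipy.stats import norm

    def minimum_detectable_effect(n_treat, n_control, sd, alpha=0.05, power=0.80):
        """Smallest true difference in means detectable with a two-sided
        test at significance level alpha and the given power."""
        z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the test
        z_power = norm.ppf(power)          # quantile for the target power
        se = sd * (1 / n_treat + 1 / n_control) ** 0.5  # SE of the difference
        return (z_alpha + z_power) * se

    # Example: 400 units per arm, outcome standardized to SD = 1
    print(minimum_detectable_effect(400, 400, sd=1.0))  # about 0.20 SD

If the design cannot detect effects smaller than roughly 0.2 standard deviations, a null result tells you little about whether the program worked, which is the failure mode described above.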

How can we do better business training evaluations?

Last week I blogged about a paper that David wrote with Chris Woodruff, which takes stock of the existing evidence on the impact of business training. The bottom line was that we still don't know much. Part of the reason is that these types of evaluations are not straightforward to do – they have some pitfalls that you don't always find in your garden-variety impact evaluation. So to…

Does Business Training Work?

What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers somewhat comparable, and this helps us add up the totality of the lessons. What's more, as David and Chris go through the evidence, they come up with a lot of interesting…

Better Nutrition Through Information

In honor of Labor Day here in the US, I want to talk about a recent nutrition paper by Emla Fitzsimons, Bansi Malde, Alice Mesnard and Marcos Vera-Hernandez. This paper, "Household Responses to Information on Child Nutrition," is one with a twist: they look not only at nutrition outcomes, but they also try to figure out where these might be coming from – and hence also look at labor supply.

Paper or Plastic? Part II: Approaching the survey revolution with caution

Coauthored with Raka Banerjee and Talip Kilic

So if you missed it, Part I of this two-part blog post outlines all of the main reasons that you should consider incorporating Computer Assisted Personal Interviewing (CAPI) into your survey efforts. We'll now try to even things out by going over the many pitfalls to watch out for when switching to CAPI.

DFID's Approach to Impact Evaluation - Part II

As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head of the Research and Evidence Division. For Part I of this series, see yesterday's post. Today we focus on DFID's funding for research and impact evaluation.

Notes from the field: Making the most of monitoring data

I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these have really brought home the potential and pitfalls of working with programs' monitoring data.

In many evaluations, the promise is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program's clients). However, in most cases a combination of monitoring and survey data is required.
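As a concrete illustration of that combination, here is a minimal sketch, with hypothetical file and column names, of linking a program's monitoring records (who was treated) to follow-up survey data (outcomes) and computing a simple treated-versus-comparison difference in means.

    import pandas as pd

    # Hypothetical inputs: the program's records say who received the
    # intervention; the survey measures the outcome of interest.
    monitoring = pd.read_csv("program_records.csv")  # client_id, treated
    survey = pd.read_csv("followup_survey.csv")      # client_id, outcome

    # Link the two sources on the client identifier; insisting on a
    # one-to-one match makes duplicated monitoring rows surface as errors.
    merged = survey.merge(monitoring, on="client_id",
                          how="inner", validate="one_to_one")

    # Simplest possible impact estimate: difference in mean outcomes
    # between treated and comparison clients.
    diff = (merged.loc[merged.treated == 1, "outcome"].mean()
            - merged.loc[merged.treated == 0, "outcome"].mean())
    print(f"Treated-comparison difference in means: {diff:.3f}")

In practice the merge step is where monitoring data bites: ID mismatches and duplicates between the two sources are common, which is why the match is validated explicitly here.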

Getting good civil servants for tough jobs

Imagine you are running the recruitment process for a government agency and you are trying to attract high-quality, public-service-oriented staff to work in difficult agencies. How should you do this? If you offer higher wages, maybe it will get you higher-quality folks, but will you lose public service motivation? And how do you get these high-quality folks to go to remote and dangerous areas?
