Last week I blogged about a paper that David wrote with Chris Woodruff which takes stock of the existing evidence on the impact of business trainings. The bottom line was that we still don't know much. Part of the reason is that these types of evaluations are not straightforward to do – they have pitfalls that you don't always find in your garden-variety impact evaluation. So to…
What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers comparable, which helps us add up the lessons across studies. What's more, as David and Chris go through the evidence, they come up with a lot of interesting…
In honor of Labor Day here in the US, I want to talk about a recent nutrition paper by Emla Fitzsimons, Bansi Malde, Alice Mesnard and Marcos Vera-Hernandez. This paper, "Household Responses to Information on Child Nutrition," is one with a twist – they look not only at nutrition outcomes, but they also try to figure out where these might be coming from – and hence also look at labor supply.
Coauthored with Raka Banerjee and Talip Kilic
So if you missed it, Part I of this two-part blog post outlines the main reasons you should consider incorporating Computer Assisted Personal Interviewing (CAPI) into your survey efforts. We'll now try to even things out by going over the many pitfalls to watch out for when switching to CAPI.
Recently I was spending some time with a survey firm in Tanzania, pre-testing a survey. I got to talking with one of the folks working at the firm about how they compensate their enumerators. He made it clear that they follow a fixed efficiency wage (i.e. …
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division. For Part I of this series, see yesterday's post. Today we focus on DFID's funding for research and impact evaluation.
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division.
I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these have really brought home the potential and pitfalls of working with programs' monitoring data.
In many evaluations, the promise of monitoring data is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program's clients). In most cases, however, a combination of monitoring and survey data is required.
Imagine you are running the recruitment process for a government agency and you are trying to attract high-quality, public-service-oriented staff to work in difficult agencies. How should you do this? If you offer higher wages, maybe it will get you higher-quality folks, but will you lose public service motivation? And how do you get these high-quality folks to go to remote and dangerous areas?
Yesterday, David argued that we should make sure "the important work on trying to raise the incomes and status of women around the world doesn't continue to come in part by neglecting the important role you [dads] play." While I don't think the world of development programs is in any remote danger of ignoring men in favor of women, I do think we aren't paying enough attention to how men and women interact, and what that means for how programs work (e.g., to increase the welfare of all).