Impact evaluation is in danger of being messed up. Here is why: there are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: the program continues or not. Learning is more amorphous – we do impact evaluation to see if a project works, but we try to build in as many ways to understand the results as possible, maybe running a couple of treatment arms so we can see what works better than what. In learning evaluations, the real failure is a lack of statistical power, more so than the program working or not.
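To make the statistical power point concrete, here is a minimal sketch of the standard normal-approximation sample-size calculation for a two-arm trial. The function name and the illustrative numbers are my own assumptions, not from the post; real evaluations would also account for clustering, take-up, and attrition.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect, sd, alpha=0.05, power=0.8):
    """Observations needed per arm to detect `effect` (in outcome units)
    with a two-sided test, using n = 2 * (z_{1-a/2} + z_{1-b})^2 * (sd/effect)^2.
    Illustrative sketch only -- ignores clustering, take-up, and attrition."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile giving desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2)

# An underpowered "learning" design fails silently: a 0.2 SD effect needs
# roughly 400 observations per arm, not the 50 a small pilot might have.
print(sample_size_per_arm(effect=0.2, sd=1.0))
```

The practical upshot is that adding treatment arms to learn "what works better than what" splits the sample, so each pairwise comparison needs its own power calculation.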
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division. For Part I of this series, see yesterday’s post. Today we focus on DFID’s funding for research and impact evaluation.
I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these have really brought home the potential and pitfalls of working with programs’ monitoring data.
In many evaluations, the promise is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example when a specific intervention is tried out with a subset of a program’s clients). However, in most cases a combination of monitoring and survey data is required.
In a New York Times column last Friday, David Brooks discussed a book by Jim Manzi and extolled the idea of randomized field trials as a way for the US to make better policies.
While it’s nice to welcome Citizen Brooks into the fold, there are a couple of points in his article worth exploring a bit.
So this past week I was in Ghana following up on some of the projects I am working on there with one of my colleagues. We were designing an agricultural impact evaluation with some of our counterparts, following up on the analysis of the second round of a land tenure impact evaluation and a financial literacy intervention, and exploring the possibility of some work in the rural financial sector. In no particular order, here are some of the things I learned and some things I am still wondering about:
One of the things I learned from other folks at the Bank I work with is the usefulness of holding a workshop early in the design of an impact evaluation to bring the project team and the impact evaluation team together to hammer out the design. With one of my colleagues, I did one of these during my recent trip to Ethiopia, and a bunch of things stuck out.
I was in a meeting the other week where we were wrestling with the issue of how to better capture labor supply in agricultural surveys. This is tough – farms are often far from the house, and tasks are often dispersed across time, with some of them amounting to only a few hours, either in total or on a given day. Families can have more than one farm, weakening what household members know about how the others spend their time. One of the interesting papers that came up was a study by Elena Bardasi, Kathleen Beegle, Andrew Dillon, and Pieter Serneels. Before turning to their results, it’s worth spending a bit more time discussing what could be going on.
Two things would seem to matter (among others). First, who you ask can shape the information you get. We’ve had multiple posts in the past about imperfections in within-household information. Those posts talked about income and consumption; while labor would arguably be easier to observe, it may suffer from the same strategic motives for concealment and thus be underreported when the enumerator asks someone other than the actual worker to report on it.
co-authored with Alaka Holla
Everyone always says that great things happen when you give money to women. Children start going to school, everyone gets better health care, and husbands stop drinking as much. And we know from impact evaluations of conditional cash transfer programs that a lot of these things are true (see for example this review of the evidence by colleagues at the World Bank). But, aside from just giving them cash with conditions, how do we get money into the hands of women? Do the programs we use to increase earnings work the same for men and women? And do the same dimensions of well-being respond to these programs for men and women?
The answer is that we don’t know much. And we really should know more. If we don’t know what works to address gender inequalities in the economic realm, we can’t choose the right intervention (at least not on purpose). This makes it impossible to economically empower women in a sustainable, meaningful way. We also don’t know what this earned income means for household welfare. While the evidence from CCTs, for example, might suggest that women spend transfers differently, we don’t know whether more farm or firm profits for a woman versus a man means more clothes for the kids and regular doctor visits. We also don’t know much about the spillover effects in non-economic realms generated by interventions in the productive sectors, or whether these also differ between men and women. Quasi-experimental evidence from the US, for example, suggests that decreases in the gender wage gap reduce violence against women (see this paper by Anna Aizer), but some experimental evidence by Fernald and coauthors from South Africa suggests that extending credit to poor borrowers decreases depressive symptoms for men but not for women.