Back in the tail end of last year, I did a post on using workshops with project teams to build impact evaluation designs. My friend Anonymous requested copies of the presentations. Since I am in the midst of doing another one of these workshops here in Ghana, I thought it would be worth posting them now.
Markus Goldstein's blog
After talking about domestic violence measurement and the need for some kind of model when you think about things like domestic violence with Toan last week, this week I look at a new paper from Jonas Hort and Espen Villanger which both asks the question carefully and definitely makes me think hard about what the ri
Coauthored with Quy-Toan Do
In response to my blog post last week, one of my colleagues stopped me in the hall and pointed out that I missed the point. So in response, I invited him to join this week for a discussion. Our discussion follows:
Toan: A survey without an underlying research question is like salt without pepper. What you need to do is talk about what questions the survey is designed to answer.
coauthored with Sabrina Roshan
Imagine you are out on a pretest of a survey. Part of the goal is to measure the rights women have over property. The enumerator is trying out a question: "can you keep farming this land if you are to be divorced?" The woman responds: "it depends on whose fault it is." Welcome to yet another land where no one has heard of no-fault divorce.
So, if you are like (some of) us, you’ve left the holiday shopping till the last minute. In that vein, we thought we would share some of what we find essential as we do field (and other) research.
One of the things I learned from other folks at the Bank I work with is the usefulness of doing a workshop early in the design of an impact evaluation to bring the project and the impact evaluation teams together to hammer out the design. With one of my colleagues, I did one of these during my recent trip to Ethiopia and a bunch of things stuck out.
I just spent the last week in Ethiopia and part of what I was doing was presenting some results from an impact evaluation baseline, as well as the final results-in-progress of another impact evaluation. In all, I ended up giving four talks of varying length to people working on these programs, but also to groups of agencies working on similar projects that started after the ones we were analyzing.
At least not in Benin. This week, I take a look at an interesting paper by Leonard Wantchekon documenting an experiment he did in Benin with this year’s presidential election. In this paper, Leonard compares the results from a deliberative sharing of a candidate’s platform in a local town hall against a one-way communication by the candidate (or his broker) at a big rally.
If the data and related metadata collected for impact evaluations were more readily discoverable, searchable, and available, the world would be a better place. Well, at least the research would be better. It would be easier to replicate studies and, in the process, to expand them by, for example: trying other outcome indicators; checking robustness; and looking for heterogeneity effects (e.g.
Two weeks ago, David flagged an interesting paper by Bendavid, Avila and Miller in the Bulletin of the WHO which reminded me of a paper I had been following by Kelly Jones, a revised version of which has just been posted. Both of these papers look at the effect of the U.S. Mexico City Policy (a.k.a.