I just spent the last week in Ethiopia, and part of what I was doing was presenting some results from an impact evaluation baseline, as well as the in-progress final results of another impact evaluation. In all, I ended up giving four talks of varying length, both to people working on these programs and to groups from agencies working on similar projects that started after the ones we were analyzing.
Markus Goldstein's blog
At least not in Benin. This week, I take a look at an interesting paper by Leonard Wantchekon documenting an experiment he ran in Benin during this year’s presidential election. In the paper, Leonard compares the results of a deliberative sharing of a candidate’s platform in a local town hall against one-way communication from the candidate (or his broker) at a big rally.
If the data and related metadata collected for impact evaluations were more readily discoverable, searchable, and available, the world would be a better place. Well, at least the research would be better. It would be easier to replicate studies and, in the process, to expand them by, for example, trying other outcome indicators, checking robustness, and looking for heterogeneous effects (e.g.
Two weeks ago, David flagged an interesting paper by Bendavid, Avila and Miller in the Bulletin of the WHO which reminded me of a paper I had been following by Kelly Jones, a revised version of which has just been posted. Both of these papers look at the effect of the U.S. Mexico City Policy (a.k.a.
I was in a meeting the other week where we were wrestling with how to better capture labor supply in agricultural surveys. This is tough: farms are often far from the house, and tasks are often dispersed across time, with some of them amounting to only a few hours, either in total or on a given day. Families can have more than one farm, which weakens what household members know about how the others spend their time. One of the interesting papers that came up was a study by Elena Bardasi, Kathleen Beegle, Andrew Dillon, and Pieter Serneels. Before turning to their results, it’s worth spending a bit more time on what could be going on.
Two things would seem to matter (among others). First, who you ask can shape the information you get. We’ve had multiple posts in the past about imperfections in within-household information. Those posts focused on income and consumption, and while labor would arguably be easier to observe, it may suffer from the same strategic motives for concealment and thus be underreported when the enumerator asks someone other than the actual worker to respond.
I wanted to follow up on David’s post from yesterday on sharing results with respondents. My initial reaction was that we owe this to respondents, not least because they spent a lot of time answering our tedious questionnaires. But as David points out, it’s not quite that simple in cases where we expect to have ongoing work.
- dissemination of results
coauthored with Jishnu Das
Women perform 66 percent of the world’s work, and produce 50 percent of the food, yet earn only 10 percent of the income….
--Former President Bill Clinton addressing the annual meeting of the Clinton Global Initiative (September 2009)
Impressive, heart-wrenching, charity-inducing, get off your sofa and go do something heartbreaking.
I was recently talking with one of my younger colleagues, and she was lamenting something that was going wrong in an impact evaluation she was working on. She was thinking of throwing in the towel and shutting down the work. This reminded me of the horrible feeling in the pit of my stomach whenever something went wrong back when I started doing impact evaluation (and research more generally). Now, of course, I am bald…