Markus Goldstein's blog
I was recently talking with one of my younger colleagues and she was lamenting something that was going wrong in an impact evaluation she was working on. She was thinking of throwing in the towel and shutting down the work. This reminded me of the horrible feeling in the pit of my stomach when something went wrong in my early days of doing impact evaluation (and research more generally). Now, of course, I am bald…
An interesting, recently revised working paper by Duflo, Dupas and Kremer looks at the effects of providing school uniforms, teacher training on HIV education, and the two combined. This paper is useful in a number of dimensions – it gives us some sense of the longer term effects of these programs, the methodology is interesting (and informative), and finally, of course, the results are pretty intriguing and definitely food for thought.
So I come back from vacation to find out that I was part of a randomized experiment in my absence. No, this had nothing to do with the wonders of airline travel in Europe (which don’t add that frisson of excitement through random cancellations like their American brethren), but rather two of our co-bloggers trying to figure out if the blog actually makes people recognize me and Jed more (here are links to parts
coauthored with Alaka Holla
So two weeks ago we talked about how we don’t know enough about economically empowering women, and last week we talked about power issues when measuring this in “gender-blind” interventions. This week we’d like to make some suggestions about how, with a small amount of effort, we could make serious progress in learning meaningful things about how to increase the earning capacity of women.
coauthored with Alaka Holla
As we argued last week, we need more results that tell us what works and what does not for economically empowering women. And a first step would be for people who are running evaluations out there to run a regression that interacts gender with treatment. Now some of these will show no significant differences by sex. Does that mean that the program did not affect men and women differently? No. Alas, all zeroes are not created equal.
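The interacted regression described above can be sketched in a few lines. This is a minimal illustration on simulated data, not anything from an actual evaluation: all variable names, sample sizes, and effect sizes here are hypothetical, chosen so that the program effect is larger for women.

```python
import numpy as np

# Simulated evaluation data (all names and magnitudes hypothetical):
# a randomized treatment, a gender indicator, and an outcome whose
# treatment effect is larger for women than for men.
rng = np.random.default_rng(0)
n = 5000
treat = rng.integers(0, 2, n)     # 1 = assigned to the program
female = rng.integers(0, 2, n)    # 1 = woman
outcome = (
    1.0
    + 0.5 * treat                 # treatment effect for men
    + 0.3 * female                # baseline gender gap
    + 0.4 * treat * female        # extra treatment effect for women
    + rng.normal(0.0, 1.0, n)     # noise
)

# OLS with a treatment-by-gender interaction:
#   outcome = b0 + b1*treat + b2*female + b3*(treat*female) + e
X = np.column_stack([np.ones(n), treat, female, treat * female])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# beta[3] estimates how the treatment effect differs by gender.
print("treatment effect (men):", beta[1])
print("differential effect (women):", beta[3])
```

The point about unequal zeroes lives in the standard error of `beta[3]` (not computed here): with a small sample, an insignificant interaction may simply mean the design was underpowered to detect a differential effect, not that none exists.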
coauthored with Alaka Holla
Everyone always says that great things happen when you give money to women. Children start going to school, everyone gets better health care, and husbands stop drinking as much. And we know from impact evaluations of conditional cash transfer programs that a lot of these things are true (see for example this review of the evidence by colleagues at the World Bank). But, aside from just giving them cash with conditions, how do we get money into the hands of women? Do the programs we use to increase earnings work the same for men and women? And do the same dimensions of well-being respond to these programs for men and women?
The answer is we don’t know much. And we really should know more. If we don’t know what works to address gender inequalities in the economic realm, we can’t do the right intervention (at least not on purpose). This makes it impossible to economically empower women in a sustainable, meaningful way. We also don’t know what this earned income means for household welfare. While the evidence from CCTs, for example, suggests that women may spend transfers differently, we don’t know whether more farm or firm profits for a woman versus a man means more clothes for the kids and regular doctor visits. We also don’t know much about the spillover effects in non-economic realms generated by interventions in the productive sectors, or whether these too differ between men and women. Quasi-experimental evidence from the US, for example, suggests that decreases in the gender wage gap reduce violence against women (see this paper by Anna Aizer), while some experimental evidence by Fernald and coauthors from South Africa suggests that extending credit to poor borrowers decreases depressive symptoms for men but not for women.
I want to thank Catherine, David and some anonymous readers for their responses to last week’s post on who pays for evaluations. Their thoughtful responses led me to think more about objectivity and engagement with project teams, so here goes: