An interesting, recently revised working paper by Duflo, Dupas and Kremer looks at the effects of providing school uniforms, teacher training on HIV education, and the two combined. This paper is useful along a number of dimensions – it gives us some sense of the longer-term effects of these programs, the methodology is interesting (and informative), and finally, of course, the results are pretty intriguing and definitely food for thought.
Markus Goldstein's blog
So I come back from vacation to find out that I was part of a randomized experiment in my absence. No, this had nothing to do with the wonders of airline travel in Europe (which doesn’t add that frisson of excitement through random cancellations like its American brethren), but rather two of our co-bloggers trying to figure out whether the blog actually makes people recognize Jed and me more (here are links to parts
coauthored with Alaka Holla
So two weeks ago we talked about how we don’t know enough about economically empowering women, and last week we talked about power issues when measuring this in “gender-blind” interventions. This week we’d like to make some suggestions about how, with a little effort, we could make serious progress in learning meaningful things about how to increase the earning capacity of women.
coauthored with Alaka Holla
As we argued last week, we need more results that tell us what works and what does not for economically empowering women. And a first step would be for people who are running evaluations out there to run a regression that interacts gender with treatment. Now some of these will show no significant differences by sex. Does that mean that the program did not affect men and women differently? No. Alas, all zeroes are not created equal.
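To make the first step concrete: interacting gender with treatment just means adding a treatment-by-gender product term to the outcome regression, so its coefficient captures any differential effect for women. Below is a minimal sketch with simulated data (the variable names and effect sizes are illustrative assumptions, not from any of the evaluations discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, n)    # randomized treatment assignment
female = rng.integers(0, 2, n)   # gender indicator

# Simulated outcome: treatment raises earnings by 1.0 for men and by
# 3.0 for women, so the true differential effect (interaction) is 2.0.
y = 1.0 * treat + 0.5 * female + 2.0 * treat * female + rng.normal(0, 1, n)

# OLS with a treatment-by-gender interaction:
#   y = b0 + b1*treat + b2*female + b3*(treat x female) + e
X = np.column_stack([np.ones(n), treat, female, treat * female])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# b3 (the last coefficient) estimates how differently the program
# affected women relative to men.
print(beta[3])
```

Note that an insignificant b3 in a small sample is exactly the uninformative zero described above: the interaction term is estimated off the sub-sample in each treatment-gender cell, so its standard error is larger than that of the main treatment effect, and only an adequately powered design can distinguish "no differential effect" from "can't tell."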
coauthored with Alaka Holla
Everyone always says that great things happen when you give money to women. Children start going to school, everyone gets better health care, and husbands stop drinking as much. And we know from impact evaluations of conditional cash transfer programs that a lot of these things are true (see for example this review of the evidence by colleagues at the World Bank). But, aside from just giving them cash with conditions, how do we get money into the hands of women? Do the programs we use to increase earnings work the same for men and women? And do the same dimensions of well-being respond to these programs for men and women?
The answer is that we don’t know much. And we really should know more. If we don’t know what works to address gender inequalities in the economic realm, we can’t do the right intervention (at least on purpose). This makes it impossible to economically empower women in a sustainable, meaningful way. We also don’t know what this earned income means for household welfare. While the evidence from CCTs, for example, might suggest that women spend transfers differently, we don’t know whether more farm or firm profits for a woman versus a man mean more clothes for the kids and regular doctor visits. We also don’t know much about the spillover effects in non-economic realms generated by interventions in the productive sectors, and whether these also differ across men and women. Quasi-experimental evidence from the US, for example, suggests that decreases in the gender wage gap reduce violence against women (see this paper by Anna Aizer), but some experimental evidence by Fernald and coauthors from South Africa suggests that extending credit to poor borrowers decreases depressive symptoms for men but not for women.
I want to thank Catherine, David and some anonymous readers for their responses to last week’s post on who pays for evaluations. Their thoughtful responses led me to think more about objectivity and engagement with project teams, so here it goes:
The 6 foot 6 inch man looked me in the eye.
“And if we don’t like the results, I’ll break your kneecaps,” he said, without smiling.
This encounter, on my first impact evaluation, made me wonder about the impartiality of the whole exercise…and I am still wondering.
So recently one of the government agencies I am working with was telling me that they were getting a lot of pressure from communities who had been randomized out of the first phase of a program. The second phase – when these communities will get the funding for their part of the project – is indeed coming, but the second round of the survey has been delayed, as was implementation of the first phase of the program. That doesn’t make the pressure any less understandable.