This was a question posed by one of our readers in a comment on an earlier post I did on how to calculate the intra-class correlation in Stata.
I received a question this week from Kristen Himelein, a Bank colleague who is working on an impact evaluation that will use propensity score matching.
coauthored with Alaka Holla
As we argued last week, we need more results that tell us what works and what does not for economically empowering women. And a first step would be for people who are running evaluations out there to run a regression that interacts gender with treatment. Now some of these will show no significant differences by sex. Does that mean that the program did not affect men and women differently? No. Alas, all zeroes are not created equal.
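As a sketch of the regression being suggested (my illustration, not from the post): with a binary treatment and a binary gender indicator, the saturated OLS interaction model is Y = b0 + b1·T + b2·Female + b3·(T×Female), and the interaction coefficient b3 is numerically identical to the difference-in-differences of the four group means. The function and toy data below are hypothetical, using only the standard library:

```python
# For a saturated model with binary treatment T and binary Female,
#   Y = b0 + b1*T + b2*Female + b3*(T*Female) + e
# the OLS interaction coefficient b3 equals
#   [E(Y|T=1,F=1) - E(Y|T=0,F=1)] - [E(Y|T=1,F=0) - E(Y|T=0,F=0)]
from statistics import mean

def interaction_effect(rows):
    """rows: list of (y, treat, female) tuples with treat, female in {0, 1}.
    Returns the gender-treatment interaction coefficient b3."""
    def cell(t, f):
        return mean(y for y, ti, fi in rows if ti == t and fi == f)
    return (cell(1, 1) - cell(0, 1)) - (cell(1, 0) - cell(0, 0))

# Toy data: the program raises women's outcomes by 2 and men's by 5,
# so the extra effect for women relative to men (b3) is -3.
data = [
    (10, 0, 1), (12, 1, 1),   # women: control mean 10, treated mean 12
    (10, 0, 0), (15, 1, 0),   # men:   control mean 10, treated mean 15
]
print(interaction_effect(data))  # -3
```

In Stata the equivalent one-liner would interact the two indicators directly in `regress`; the point is simply that reporting b3 (and its standard error) is what tells you whether the program affected men and women differently.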
A key issue in any impact evaluation is take-up (i.e., the proportion of people offered a program who actually use it). This is particularly an issue in many finance and private sector development (FPD) programs. In many health and education programs, such as vaccination campaigns or programs to get children into school, the goal is for all eligible individuals to participate. In contrast, universal take-up is not the goal of most FPD programs, and, even when it is a goal, it is seldom the reality.
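To see why take-up matters so much, here is a sketch (my illustration, not from the post): if the program has no effect on people who never use it, the average effect across everyone offered the program (the intention-to-treat effect) is the effect on users diluted by the take-up rate. The function name and numbers below are hypothetical:

```python
# Under the assumption that non-users are unaffected by the offer,
#   ITT = take_up * effect_on_users
def itt_effect(take_up: float, effect_on_users: float) -> float:
    """Intention-to-treat effect implied by partial take-up."""
    return take_up * effect_on_users

# A program that raises users' profits by 20 percent but is taken up by
# only a quarter of those offered it shows an average effect of just 5
# percent across the full offered group.
print(itt_effect(0.25, 0.20))  # 0.05
```

Low take-up therefore shrinks the effect an evaluation can hope to detect, which feeds directly into the power calculations discussed next.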
Since a fair number of the impact evaluations I work on involve programs designed by governments or NGOs, I often have to start with a tricky discussion when it comes time to do the power calculations for the impact evaluation. The subject of this conversation is the anticipated effect size. This is a key parameter: if it’s too optimistic, you run the risk of an evaluation that detects no effect even when the program worked to some (lesser) degree; if it’s too pessimistic, you are wasting money and people’s time on a larger survey than you need.
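The stakes of this conversation can be made concrete with the standard two-sample formula (a textbook approximation, not from the post): the sample needed per arm to detect a standardized effect of `delta` standard deviations, at significance level `alpha` with power `power`, is roughly n = 2(z₁₋α/₂ + z_power)²/δ². A minimal sketch using only the standard library:

```python
# Standard approximation for a two-arm comparison of means:
#   n per arm = 2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
# where delta is the anticipated effect size in standard-deviation units.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

# Halving the assumed effect size quadruples the required sample,
# which is why an over-optimistic effect size is so costly:
print(n_per_arm(0.4))  # 99
print(n_per_arm(0.2))  # 393
```

The quadratic dependence on `delta` is the whole point of the tricky discussion: an effect size assumed at twice its true value leaves you with roughly a quarter of the sample you actually needed.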