An interesting new paper by Ben Olken, Junko Onishi, and Susan Wong gives us some evidence on how incentives can make aid more effective. They look at a community block grant program in Indonesia and compare the effects of these grants with and without incentives. Incentives make a difference.
From a methodological point of view, there is also a lot to like about this paper. First, the scale is massive – the program they are looking at covered 1.8 million beneficiaries. Second, as per earlier blog posts on the idea of a registry, they registered the analysis plan before they did the evaluation and, as the paper makes clear, they stick to the plan. Finally, they provide some cost-effectiveness estimates – something Alaka recently blogged about.
So what’s it all about? The program they are looking at is the Generasi program (which, when spelled out and translated, is the National Community Empowerment Program – Health and Smart Generation), which targets 12 indicators of maternal and child health and education (e.g. number of children immunized, prenatal and postnatal care, and the number of children enrolled and attending school). Every year, the Indonesian government gives participating villages a block grant which they can use to improve any of the 12 core indicators. The grants average about $2.40 to $4.30 per person living in the villages. Village management committees, with the help of facilitators, figure out how to spend this money.
They compare two variants of this program – an incentivized version and a non-incentivized version. In the incentivized version, the size of the grant the village receives depends on its performance on the 12 core indicators relative to other villages in the sub-district. This performance payment accounts for 20% of the grant. The other 80% of the grant (and 100% in the non-incentivized areas) is determined by the number of target beneficiaries, i.e. expectant mothers and kids. To identify program effects, Olken and co. randomize 264 sub-districts into the incentivized version, the non-incentivized version, or a control group (no grant). The incentivized version and the non-incentivized version differ only in how the payment is calculated – everything else (facilitation, monitoring, etc.) is the same.
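The split described above can be sketched in a few lines of Python. To be clear, the function name, data layout, and the proportional rule for dividing the performance pool are my own simplifications for illustration, not the paper's exact allocation formula:

```python
def allocate_grants(villages, per_capita, incentivized):
    """Sketch of the Generasi grant split described above.

    villages: list of dicts with 'name', 'beneficiaries' (count of target
    beneficiaries) and 'score' (performance on the 12 indicators, higher
    is better) for every village in one sub-district.
    Returns a dict mapping village name to grant amount.
    """
    # The beneficiary-driven base: this is 100% of the grant in
    # non-incentivized areas.
    base = {v['name']: v['beneficiaries'] * per_capita for v in villages}
    if not incentivized:
        return base

    # In incentivized areas, 80% is guaranteed by beneficiary counts and
    # the remaining 20% is pooled at the sub-district level and divided
    # in proportion to performance relative to the other villages.
    pool = 0.2 * sum(base.values())
    total_score = sum(v['score'] for v in villages)
    return {
        v['name']: 0.8 * base[v['name']] + pool * v['score'] / total_score
        for v in villages
    }
```

The key design feature is that the 20% pool is divided by performance relative to neighbours in the same sub-district, so total spending is fixed and one village's bonus comes at another's expense.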
What do they find? Over the two years they observe, the incentivized grants had a significant impact on health: an average increase of 0.03 standard deviations across the 8 maternal and child health indicators (relative to non-incentivized program areas). They have annual data – and this shows that the effect was stronger in the first year; the results in the second year were positive but not statistically significant. For education, there is no significant effect in either year.
While the incentives account for only 20% of the payment, they have a disproportionate effect on outcomes: using comparisons to the non-program control, the authors show that overall some 50-77% of the program impacts on health are driven by the incentives. In addition, the health effects of the incentives (versus non-incentivized areas) were larger in areas with low initial levels of the indicators – the effect on communities in the bottom 10% of the distribution at baseline was roughly double the average program impact.
Now, incentives can make service providers do undesirable things. Olken and co. take a look at three potential manifestations of this: 1) a shift in service provision away from non-incentivized outcomes, 2) manipulation of records, and 3) shifting money towards less needy areas. They rule out the first by looking at a range of indicators and finding no evidence of negative program spillovers.
To take on the second, Olken and co. combine the performance reported by villages with household survey data that they collected. This checking bears a little more explanation since it shows some neat ways to go about it. In one measure, they check kids for the scar that would be left by the BCG vaccine. It turns out that there is no systematic over-reporting of this vaccine in incentivized areas. In another measure, they visit schools, do a random spot-check of classroom attendance, and compare this with attendance records from the same classroom for a specific day 1-2 months earlier. Why go back in time? Well, they are worried that if they pull the attendance records for the day of their visit, the school administration will have time to fudge the data; going back still lets them look for over-reporting on average. It turns out that school administrators do seem to be overstating attendance – administrative records show 95% while the spot checks show attendance of only 88%. However, this over-reporting is not related to incentives.
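As a rough illustration of this manipulation check (all data below are invented, and the paper's actual test is a regression rather than a simple difference in means):

```python
def mean(xs):
    return sum(xs) / len(xs)

def over_reporting_gap(schools):
    """Compare attendance over-reporting across treatment arms.

    schools: list of dicts with 'admin_rate' (attendance in the
    administrative record for a day 1-2 months before the visit),
    'spot_rate' (attendance the enumerator observed in the same
    classroom), and 'incentivized' (treatment arm flag).
    Returns (mean gap in incentivized arm, mean gap in non-incentivized
    arm, difference between the two).
    """
    gaps_inc = [s['admin_rate'] - s['spot_rate']
                for s in schools if s['incentivized']]
    gaps_non = [s['admin_rate'] - s['spot_rate']
                for s in schools if not s['incentivized']]
    # Both arms may over-report on average; the manipulation test turns
    # on whether the gap is *larger* where incentives are in play.
    return mean(gaps_inc), mean(gaps_non), mean(gaps_inc) - mean(gaps_non)
```

In the paper's data the gap is real (roughly 95% recorded vs. 88% observed) but statistically indistinguishable across arms, which is why the authors conclude the incentives did not drive the padding.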
Finally, to take on potential fund diversion (#3 above), they show that the allocation to villages is positively correlated with remoteness, but not significantly correlated with average village consumption or poverty levels. Moreover, they do an interesting exercise where they construct a counterfactual of what would have happened if grants had been allocated based on absolute progress on the indicators (rather than progress relative to other villages in the sub-district) and show that the relative benchmarking structure of the program was actually successful in preventing extra funds from going to richer villages.
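A hedged sketch of that counterfactual comparison – the data structures and the equal per-sub-district split are illustrative assumptions of mine, not the paper's exact rule:

```python
from collections import defaultdict

def absolute_allocation(villages, pool):
    """Counterfactual rule: split the pool across ALL villages in
    proportion to absolute progress on the indicators."""
    total = sum(v['progress'] for v in villages)
    return {v['name']: pool * v['progress'] / total for v in villages}

def relative_allocation(villages, pool):
    """Program-style rule: give each sub-district a fixed slice of the
    pool, then split that slice among its villages by progress relative
    to their neighbours."""
    by_sd = defaultdict(list)
    for v in villages:
        by_sd[v['sub_district']].append(v)
    slice_ = pool / len(by_sd)
    out = {}
    for vs in by_sd.values():
        total = sum(v['progress'] for v in vs)
        for v in vs:
            out[v['name']] = slice_ * v['progress'] / total
    return out
```

Under the absolute rule, a better-off sub-district where every village improves quickly captures most of the pool; under the relative rule each sub-district's take is fixed, which is roughly why benchmarking within sub-districts keeps extra funds from flowing to richer areas.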
In addition, Olken and co. do a neat job of tracing out the channels through which the program impacts happen. Two main mechanisms seem to be at work. First, the program induced villages to change their spending patterns – in incentivized villages, spending on education supplies drops by about 15 percent and health spending goes up by about 7 percent. Interestingly, it doesn’t look like households are getting less from the state in terms of education supplies; rather, the efficiency of education spending increases (and they appear to have asked households about the value of the school uniforms they got, so they can measure this fairly precisely). The second mechanism is the effort of health workers: in particular, midwives in incentivized areas work about 6 percent more hours than their counterparts in non-incentivized areas.
Finally, Olken and co. show some indicative figures for the cost-effectiveness of the program (and they even include an estimate of the deadweight loss of taxation in their cost-effectiveness estimates). In the health realm, they show, for example, that an additional child weight check cost $16-22 and preventing one case of child malnutrition cost $384-528, while enrolling one more child in school cost $200-275. How do these costs stack up? Relative to a conditional cash transfer program tried in Indonesia at the same time, the costs (including the spillovers of the CCT) are roughly the same.
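To make the deadweight-loss adjustment concrete, here is a toy version of such a calculation – the 20% markup and all other numbers are illustrative placeholders, not the paper's figures:

```python
def cost_per_outcome(program_cost, extra_outcomes, deadweight_loss=0.2):
    """Cost per additional outcome (e.g. one more child weight check),
    grossing program costs up for the efficiency cost of raising the
    money through taxation."""
    return program_cost * (1 + deadweight_loss) / extra_outcomes
```

So a hypothetical $1,000 of program spending that produced 100 extra weight checks would come out at $12 per check once costs are marked up by an assumed 20% deadweight loss.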
So this is an interesting paper, with a particular structure of incentives – not too wide, but not too narrow, and benchmarked on sub-district comparisons rather than absolute progress. Does anyone know of other papers which show the effects of similar mechanisms but with different structures?