When we want to target a poor population for an anti-poverty program, we first need to figure out who is actually poor. This isn’t straightforward – there is a range of potential targeting criteria and options. In countries where poverty is less dense and data is decent, two of the more common options are self-targeting and proxy means tests. A nice recent paper by Vivi Alatas, Abhijit Banerjee, Rema Hanna, Benjamin Olken, Ririn Purnamasari, and Matthew Wai-Poi sheds some light on the relative merits of these two approaches, as well as their relative cost.
Let’s start with the basics. Self-targeting relies on some kind of inferior good or participation cost to get the rich to separate themselves from the poor. Think of providing subsidized lower-quality food, imposing a labor requirement, or making people travel some distance to enroll in the program. Proxy means tests use a set of variables correlated (ideally highly) with the variable on which the program would ideally like to target (e.g. consumption).
Now let’s turn to some theory. As Alatas and co. point out, the prevailing theory until now has been that two things guide the efficacy of self-targeting: first, the ordeal that self-targeting imposes is more costly in utility terms for the rich; and second, this gap in utilities is increasing in the length of the ordeal. This leaves you with a straight tradeoff between the deadweight loss of the ordeal and the precision of targeting. But then they add some features to the standard model that make this less clear: credit/savings constraints, non-linear utility functions, and non-linearities in transport costs can all make self-targeting perform less well. And they aren’t even going off the behavioral deep end here. So how this plays out is now an empirical question.
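The standard screening logic can be sketched with a toy calculation (all numbers here are hypothetical, not from the paper): a household applies only when the transfer is worth more than the time the ordeal costs it, so lengthening the ordeal screens out higher-wage (richer) households first.

```python
# Toy sketch of ordeal-based self-selection (all numbers hypothetical):
# a household applies only when the benefit exceeds its ordeal cost,
# and the ordeal cost scales with the household's opportunity cost of time.
benefit = 100.0                      # value of the transfer to a household
poor_wage, rich_wage = 2.0, 20.0     # opportunity cost of time per hour

def applies(wage: float, ordeal_hours: float) -> bool:
    """A household applies when the benefit exceeds its ordeal cost."""
    return benefit > wage * ordeal_hours

# A short ordeal lets everyone in; a long one screens out only the rich.
short_ordeal = [applies(w, 1) for w in (poor_wage, rich_wage)]   # both apply
long_ordeal = [applies(w, 10) for w in (poor_wage, rich_wage)]   # only the poor apply
```

This is the clean case; the complications Alatas and co. add (credit constraints, non-linear transport costs) amount to the poor sometimes facing a higher *effective* cost than this linear wage story implies.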
Alatas and co. use a conditional cash transfer program in Indonesia (the PKH), which targets the poorest 5 percent, to examine self-targeting against proxy means tests in a randomized experiment. The setup of the two forms of targeting is as follows. The proxy means test is the current approach used by the government. In this method, the national statistics office, working with local leaders, comes up with a list of potential beneficiaries (e.g. folks getting other programs, folks who might have been left off the list) and then sends enumerators to the village to screen folks (an initial short set of screening questions and then a longer screen of observable assets and household demographics). These are then combined with location-based indicators to generate predicted household income, and everyone below the threshold gets the program.
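As a rough illustration of how a proxy means test works – not the government’s actual model, just a sketch with simulated data and made-up asset variables – one regresses consumption on observable proxies in a survey subsample, scores every household, and enrolls those predicted to fall below the cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration of a proxy means test: true (log) consumption
# is too expensive to measure at scale, but correlates with assets that
# an enumerator can verify quickly (roof type, TV, floor area, etc.).
n = 1000
assets = rng.normal(size=(n, 3))    # three standardized asset indicators
true_log_cons = assets @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.6, size=n)

# Fit the proxy model on a survey subsample, then score everyone.
train = rng.choice(n, size=300, replace=False)
X_train = np.column_stack([np.ones(len(train)), assets[train]])
beta, *_ = np.linalg.lstsq(X_train, true_log_cons[train], rcond=None)
predicted = np.column_stack([np.ones(n), assets]) @ beta

# Everyone whose predicted consumption falls below the cutoff is enrolled.
cutoff = np.quantile(predicted, 0.05)   # program targets the bottom 5%
eligible = predicted < cutoff
```

The targeting error then comes entirely from the gap between `predicted` and `true_log_cons` – the noise term the proxies can’t see.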
The self-targeting approach is introduced experimentally. It consists of an NGO doing publicity in the village to make people aware of the program (including the eligibility criteria); people who would like to participate then have to register on certain days at a nearby registration center. At the registration center, there is some waiting time, and then an enumerator walks the potential beneficiaries through the same questions administered in the proxy means test. For those who pass the screen but hadn’t been classified as poor earlier (around 63 percent), these answers are verified by enumerators visiting their village (and 68 percent of these – in line with PROGRESA according to Alatas and co. – make it into the program). To test the effects of increasing the cost of the ordeal, Alatas and co. also vary the distance to the registration center and randomly apply a restriction that both spouses, rather than just one, have to show up for registration.
So what do they find? Let’s start with who selects into self-targeting. Alatas and co. have actual consumption data from a baseline before the program. This allows them to conclude that the probability of self-selecting into the program under the self-targeting arm is monotonically decreasing in consumption. But remember that the counterfactual here is predicted consumption. And Alatas and co. take a harder look at this and show that the folks selecting in are poorer both in dimensions that the government can observe and those that the government can’t (well they could, but it would be much, much more expensive – more on costs in a minute). But, not all of the poor opt for the self-targeting approach – only 60 percent of them apply.
OK, but is this good or bad? Omniscience is not a policy option (I will resist the obvious reference to national security policy) so some errors in targeting are always going to be present. Alatas and co. use the obvious and policy relevant counterfactual of the government’s current proxy means test – but it is important to keep in mind that other targeting approaches could give different results and so future research in this area would be pretty interesting.
Using the government’s proxy means test as the comparison, they find that real per capita consumption is 21 percent lower for beneficiaries in the self-targeting villages than for those in the proxy means test villages – and that self-targeting gets poorer folks along the entire distribution. So self-targeting does a better job of including the poor and excluding the rich. Alatas and co. go on to look at increasing the costs of the self-targeting ordeal and find little to suggest that this works – an increase of 1.7 km in the distance to register drops applications by 17 percent, but with no difference between the rich and the poor. And making the spouse come along doesn’t make things work better either.
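The inclusion and exclusion errors that drive this kind of comparison are easy to compute once you fix who counts as "truly poor." A sketch on simulated data (all numbers hypothetical; the noisy score stands in for whatever targeting rule is being evaluated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration of the two error rates used to compare
# targeting methods: exclusion error (truly poor households the program
# misses) and inclusion error (non-poor households it enrolls).
n = 1000
true_cons = rng.lognormal(mean=0.0, sigma=0.5, size=n)
poverty_line = np.quantile(true_cons, 0.05)     # bottom 5% are "truly poor"
truly_poor = true_cons <= poverty_line

# A noisy targeting rule (stand-in for a proxy means test score).
score = np.log(true_cons) + rng.normal(scale=0.4, size=n)
enrolled = score <= np.quantile(score, 0.05)

exclusion_error = (truly_poor & ~enrolled).sum() / truly_poor.sum()
inclusion_error = (~truly_poor & enrolled).sum() / enrolled.sum()
```

Note that when a rule enrolls exactly as many households as there are truly poor, every wrongly excluded poor household is matched by a wrongly included non-poor one, so the two rates move together; the interesting comparisons (like the one in the paper) are across rules with different costs.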
So it sounds like self-targeting might be a good option here. But wait – there are the costs. Alatas and co. do a nice job of looking at costs both to beneficiaries and to the state. The bottom line, as they put it: “self-targeting and the status quo automatic enrollment proxy means test lie on very different parts of the frontier: the status quo costs as much as 40 percent less than self-targeting (though this difference could be muted if self-targeting enjoyed the same nationwide economies of scale as the status quo), but has substantially higher rates of both inclusion and exclusion error.” So the call falls to policymakers and their voters – but the paper gives us food for thought on how we think about targeting our programs and what kind of research we might do to learn more about targeting in the future.