
Thoughts from the BREAD Development Conference – should our prior be no effect, and issues with learning from encouragement


I spent Friday and Saturday at the BREAD development conference at Yale (program here). It differs from most conferences – which feature many papers, each presented for a short amount of time – by instead having only 7 papers, each presented for 1 hour and 15 minutes with plenty of spirited discussion. The conference featured some interesting papers, including one on third-party auditing for pollution in India and one on how labor immobility hampered development in the American South, which I will likely discuss in future posts. But I thought I’d instead focus today on a couple of more general issues that came up in the course of the discussions.

Should our prior be that our interventions have no effect?

Christoph Saenger presented a paper which looked at an impact evaluation of a new intervention in Vietnam. The context is one where many small farmers sell milk to a large buyer, who pays them in part based on quality (milk fat content and milk solids). Since the large buyer is the one who tests for milk quality, if it is opportunistic it may underreport quality levels back to farmers in order to reduce the amount it has to pay them. Farmers suspecting that this may occur might then underinvest in production. The intervention was to give a random subset of farmers vouchers that would entitle them to independently verify milk testing results. They find that this resulted in farmers using more inputs and producing more output, and that the buyer was also better off, because in reality it had not been taking advantage of the farmers, and this intervention allowed it to credibly signal its type as fair, leading to a Pareto improvement.

The question which came up immediately when the Pareto improvement was mentioned was why, if this improved welfare for both the buyer and the sellers, the buyer had not already done something like this by itself. This type of argument doesn’t seem to come up so often in health and education studies, where everyone seems to take for granted that there are widespread government failures in these sectors that can potentially be mitigated. However, it is an argument I have encountered recently in presenting several of my own studies on firms and labor market interventions, and I certainly agree that when designing interventions we should be thinking about which market failures have prevented market participants from already doing these things themselves if they were beneficial.

But on the other hand, we think market failures are pervasive in many developing country settings, and I’m not sure a reasonable prior should be that we can do nothing to overcome them. Certainly a lot of the work the World Bank does is based on a belief that we can do something in such contexts – perhaps acting as facilitators of technology adoption, or as intellectual arbitragers taking things that have been tried in one country or context and applying them to places where these ideas haven’t been tried; but also using economic theory and data that the private sector may not have to potentially design new things. Many of these new interventions will fail, which is why it is important to evaluate them, but I do worry that rightful attention to the underlying market failures an intervention is trying to address sometimes hardens into an inherent skepticism that anything can ever work.

Learning from Encouragement Designs

Gianmarco León presented his job market paper, which looks at the effect of fines for people who don’t vote on voter turnout in Peru. Peru, like many Latin American countries, has compulsory voting, with a fine for those who don’t vote. This fine was about $50, but in August 2006 the government changed the law so that the fine was lowered and now varies by the poverty level of the district, ranging from $6 to $25. A big nationwide law change like this is often given as an example of something that can’t be evaluated through randomized evaluations. But Gianmarco does do a randomized experiment to try and evaluate the impact of these fines – in a baseline survey he documents that most people are unaware of the law change, so several weeks before an election he randomly provides people with information about this change and the new fine levels.

He then runs a regression which looks at the impact of the perceived fine on voter turnout, using the randomized information as an instrument for perceptions of the fine level. That is, he runs:

Vote or not = a + b*Perceived change in fine + controls

where whether or not a person randomly received the information is used as an instrument for the perceived change in the fine. He finds a voting elasticity of -0.21 – so that a change in the perceived fine of the magnitude which occurred in Peru led to an almost 10 percentage point reduction in voting rates.
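To make the mechanics concrete, here is a minimal sketch of this kind of two-stage least squares (2SLS) setup on simulated data, using the linearmodels package. All of the variable names (info, perceived_change, vote) and the numbers generating the data are illustrative assumptions on my part, not Gianmarco’s actual data or code.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 2000

# Randomized encouragement: whether the person received the information.
info = rng.integers(0, 2, n)

# Perceived change in the fine (in dollars): informed people update towards
# the true, lower fine; the uninformed mostly do not update.
perceived_change = -20.0 * info + rng.normal(0, 5, n)

# Voting decision, simulated so that a larger perceived cut in the fine
# lowers turnout (a positive coefficient on the perceived change).
vote = (0.9 + 0.004 * perceived_change + rng.normal(0, 0.3, n) > 0.5).astype(int)

df = pd.DataFrame({"vote": vote, "perceived_change": perceived_change, "info": info})

# Second stage: vote = a + b*perceived_change, with perceived_change
# instrumented by the randomized information treatment.
iv = IV2SLS.from_formula("vote ~ 1 + [perceived_change ~ info]", data=df).fit()
print(iv.summary)
```

The estimated coefficient on perceived_change is the IV analogue of b in the equation above: the intention-to-treat difference in turnout scaled by how strongly the information moved perceptions (the first stage).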

This type of encouragement design, in which a nationwide law is passed but most people don’t know or understand the new law and so information is randomly given to some of them, has been used in other contexts, such as in evaluating the introduction of a credit bureau in Guatemala. The important thing is that this approach only estimates a local average treatment effect (LATE) – the impact for those people whose perceptions of the fine change in response to the information but wouldn’t have changed otherwise. The identification assumption is that the information campaign only affected voting behavior through its effect on the perceived change in the level of the fine, and not through any other channel.

But this is where linear IV is potentially problematic – it seems plausible that not just what you think the fine is, but also how certain you are about what that fine is, might matter. If the randomized information changes the distribution of your beliefs as well as the level (e.g. you now know for sure the level of the fine is $25, whereas before you thought it was somewhere between $40 and $70), then this might have an independent effect on voting behavior. I think this possibility that information interventions affect not only the levels but also the distributions of beliefs is a potential issue for encouragement designs that hasn’t been explored or discussed much in the literature, and something that should cause some caution in using this approach. (Note that Gianmarco’s study is still informative about the intention-to-treat effect of giving information to people; it is just that using this to tell us the elasticity with respect to the fine requires these additional assumptions.)
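To see why changing the whole distribution of beliefs matters, here is a toy simulation (again with made-up numbers rather than the paper’s data) in which the information treatment shifts both the level and the dispersion of beliefs about the fine, and uncertainty itself affects turnout. The exclusion restriction then fails, and linear IV attributes all of the turnout change to the level of the fine:

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(1)
n = 5000

# Randomized information treatment.
info = rng.integers(0, 2, n)

# Treated individuals learn the fine is exactly $25; untreated hold noisy
# beliefs centred on the old $50 fine.
belief_mean = np.where(info == 1, 25.0, 50.0 + rng.normal(0, 5, n))
belief_sd = np.where(info == 1, 0.0, 15.0)  # treatment also removes uncertainty

# True model: turnout responds to the believed fine level AND to uncertainty
# about it (e.g. people unsure of the fine vote "just in case").
b_level, b_uncertainty = 0.004, 0.005
vote_prob = 0.3 + b_level * belief_mean + b_uncertainty * belief_sd
vote = (rng.uniform(size=n) < vote_prob).astype(int)

df = pd.DataFrame({"vote": vote, "belief": belief_mean, "info": info})
iv = IV2SLS.from_formula("vote ~ 1 + [belief ~ info]", data=df).fit()

# Because the instrument also moves uncertainty, the IV coefficient bundles
# both channels rather than isolating the pure response to the fine level.
print("true level effect:", b_level)
print("IV estimate      :", iv.params["belief"])
```

In this simulated example the IV estimate comes out larger in absolute terms than the true level effect precisely because the information also sharpens beliefs – the kind of bias a level-only interpretation of the instrument would miss.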

All in all, a very enjoyable conference. Submissions for a special BREAD conference on Finance and Development and the next general BREAD conference (which will be at Michigan) are now open, and close on June 15th.


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
