Some highlights from the IPA Impact and Policy Conference + is proof of concept policy relevant?

At the end of August I gave several presentations at the IPA Impact and Policy Conference in Bangkok, which had days on SME Development, Governance and Post-Conflict Recovery, and Financial Inclusion. The agenda is here. There was a good mix of new results from studies that don’t yet have papers, along with more polished work on the conference topics. One innovative feature was the matchmaking sessions that tried to put policymakers and NGOs together with researchers to test ideas of mutual interest – I’ll look forward to seeing whether any of these studies make it to completion.

A few highlights in terms of work that was new to me:

·         Daniel Paravisini presented joint work with Antoinette Schoar in which they conducted an experiment with a bank in Colombia that lends to small enterprises. Before the experiment, only 89 percent of loan decisions were made by a lower-level committee, with the rest sent up the hierarchy or out for more information, both of which are costly for the bank. They randomly introduced credit scoring across branches and find that credit scores lead to more decisions being made by the lower-level committee, with no change in approved loan size or loan performance. They find some evidence that this is due to reduced agency problems between the lower-level committee and higher management.

·         David Yanagizawa-Drott presented joint work with Martina Bjorkman-Nyqvist and Jakob Svensson on the problem of counterfeit antimalarial drugs in Uganda. At baseline, 37% of private drug stores in their study villages sell fake drugs, with prices not signaling quality. To address this market-for-lemons problem, they work with a local NGO to sell authentic drugs at below-market prices and observe what happens to the rest of the market, finding that local stores respond by selling better-quality drugs at lower prices as well.

·         There was a lot of interesting ongoing work where papers are not yet available. Among the preliminary work we can look forward to discussing in detail on this blog in due course: (i) Fotini Christia and Cyrus Samii presented overviews of ongoing experiments in post-conflict countries, discussing a range of possibilities for using experiments to learn about institution-building and rehabilitation in very difficult settings; (ii) Bram Thuysbaert and co-authors have ongoing work evaluating village savings and loan programs in three African countries; and (iii) Dean Karlan presented results from a randomized expansion of microcredit in Mexico.

The audience was a mix of researchers, policymakers, NGO staff, development bank staff, and field staff, leading to some good discussions, but also to some talking at cross-purposes. Reflecting on this discussion, here is one key issue that came up in various forms and that matters more generally for those of us doing impact evaluations:

Making the case that proof of concept is policy relevant: several of the papers presented involved interventions designed as proof of concept, implemented in ways that are not scalable or marketable. For example, Xavier Gine presented work on rainfall insurance in which policies were given away for free to farmers, with the goal of examining how this insurance affects farmers’ investment decisions. How much insurance changes productive behavior is an important question that should inform policy decisions about how much effort to put into building this market. But a stylized finding when insurance is sold at market prices is incredibly low take-up (e.g. 5% of farmers buy insurance, and those who do buy little coverage), and such low take-up makes it impossible to then see what the effects of actually taking up insurance are.

Similarly, there were a couple of business training experiments that gave away training for free to see what it does for businesses. While establishing whether a policy or product is beneficial before devoting huge efforts to pricing and marketing it makes sense to a lot of researchers (and is the more interesting economic question in most cases), there was some push-back from practitioners, whose main questions were about scalability.

The more important point here, I think, is one of treatment heterogeneity: studies that give products away, or introduce them in a manner inconsistent with how they would be introduced at scale, identify a treatment effect for a different population than the one that would use the product “in the wild”. This doesn’t matter if the effect is the same for everyone, but it can matter a lot if effects differ across people or businesses. For example, business training may have positive effects for some firms, and zero or negative effects for others. If we give it to everyone, we find an average treatment effect near zero. But perhaps only those with positive treatment effects would choose to purchase it at market prices, so an experiment that offers training to a general population and finds no benefit may hamper the market introduction of a product that is beneficial to those who would actually use it.
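
To see the logic, here is a minimal simulation sketch in Python. All numbers are hypothetical and not drawn from any of the studies above; the point is only that giving a product away to everyone and selling it at market price answer questions about two different populations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical effects: half of firms gain 10 units of profit from training,
# the other half lose 10, so the population average treatment effect is ~0.
gains = np.where(rng.random(n) < 0.5, 10.0, -10.0)

# Proof-of-concept design: training is given away to a random half of firms.
# The experiment recovers the population ATE, which is near zero.
treated = rng.random(n) < 0.5
ate_free = gains[treated].mean()

# Market design: firms (noisily) anticipate their own gains, and only those
# expecting to benefit buy training at the market price.
would_buy = gains + rng.normal(0, 5, n) > 0
effect_on_buyers = gains[would_buy].mean()

print(f"Average effect when given away free:       {ate_free:6.2f}")
print(f"Average effect among firms that would buy: {effect_on_buyers:6.2f}")
```

A couple of thoughts on this point: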

·         Lack of information, lack of credit, or other market failures may mean that those with the largest treatment effects are the least likely to buy at market prices. Dean Karlan and Martin Valdivia found an effect like this with business training, with the people least interested in training a priori having the largest treatment effects. Finding effects like this is important for policymakers thinking about whether they need to intervene to expand access beyond what the market is already providing.

·         Heterogeneity analysis is important – moving beyond average effects for the population towards learning who policies work best for has been part of impact evaluations for some time. A big constraint here is sample size, which is particularly binding for some of the SME and village-level interventions discussed; the rough power calculation after this list illustrates why.

·         We need to be doing more experiments to learn how the selection of firms and individuals into a program varies with the price charged. Randomization can be more challenging here: when a product is subsidized or free, the argument that resources are limited and only a fixed number of people can get the program provides a good rationale for randomizing. But in a market setting, a private firm will want to sell its product to everyone willing to pay the market price. Two solutions here: (a) randomized geographical roll-out, since in many cases private companies are unable to roll out their product to the whole country at once; and (b) discount vouchers to learn about price elasticity – a sketch of this follows the power calculation below.
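
On the sample size point, a back-of-the-envelope power calculation (standard two-sample formula, hypothetical effect sizes) shows why heterogeneity analysis is so demanding: detecting differences in effects across subgroups requires several times the sample needed to detect an average effect of the same size.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_sd, alpha=0.05, power=0.80):
    """Sample size per arm to detect an effect of `effect_sd` standard
    deviations with a two-sided test (standard power formula)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z / effect_sd) ** 2)

# Detecting a 0.2 SD average effect for the whole sample:
print("ATE of 0.2 SD:", n_per_arm(0.2), "units per arm")

# Detecting a 0.2 SD *difference* in effects between two equal-sized
# subgroups requires roughly four times the original total sample.
print("Same-sized interaction:", 4 * n_per_arm(0.2), "units per arm (approx.)")
```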
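
And on the discount-voucher idea, here is a sketch of how randomized vouchers trace out demand: by randomizing the price each customer faces, take-up at each price point gives a point on the demand curve, from which the price elasticity can be computed. Again, all numbers here are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
market_price = 100.0

# Randomly assign each customer a discount voucher of 0%, 25%, 50%, or 75%.
discounts = rng.choice([0.0, 0.25, 0.50, 0.75], size=n)
prices = market_price * (1 - discounts)

# Hypothetical demand: each customer has a latent willingness to pay,
# and buys whenever it exceeds the (randomized) price they face.
wtp = rng.lognormal(mean=4.0, sigma=0.6, size=n)
buys = wtp >= prices

# Take-up at each randomized price gives the demand curve; comparing
# take-up across price points identifies who selects in at each price.
for p in sorted(set(prices.tolist())):
    mask = prices == p
    print(f"price {p:6.1f}: take-up {buys[mask].mean():.2%}")
```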

In one of my panels there was discussion of where experiments should come in the product development cycle, with the moderator asking whether we need to wait until the kinks have been ironed out of a product and experience with implementation has been built up. I think, conversely, that product introduction is the perfect time to experiment, so that evaluation can inform modifications to design and delivery, but I agree we need an ongoing feedback cycle in which evaluation continues as the product evolves.

We need to recognize the value of both proof-of-concept and at-scale evaluations, but we also need to get better at communicating what each of them tells us.


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
