Published on Development Impact

What are "Mechanism Experiments" and should we be doing more of them?


In an interesting new paper, Jens Ludwig, Jeffrey Kling and Sendhil Mullainathan argue that economists should be doing more experiments to identify behavioral mechanisms, and that these can be central to policy, even if the experiments themselves are far from what a policymaker would implement. So what are these mechanism experiments, and what can we learn from them?

The authors define a mechanism experiment as an experiment that tests not a policy itself, but rather a causal mechanism underlying that policy.

Consider a causal chain running from a policy P through a mechanism (or mediator) M to a desired outcome Y. A standard policy experiment would test whether P changes Y. The mechanism experiment instead focuses on whether M changes Y.
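One stylized way to see the distinction: under the simplifying assumption of constant, linear effects (an assumption introduced here for illustration, and one the caveats later in this post call into question), the policy effect factors into two pieces:

```latex
\underbrace{\frac{dY}{dP}}_{\text{policy experiment}}
\;=\;
\underbrace{\frac{dM}{dP}}_{\text{does the policy move the mechanism?}}
\times
\underbrace{\frac{dY}{dM}}_{\text{mechanism experiment}}
```

A policy experiment estimates the left-hand side directly; a mechanism experiment estimates only the second factor on the right.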

Some examples:

·         They give the example of “broken windows policing”, in which police devote more attention to enforcing minor crimes like vandalism, on the theory that visible minor crime signals that no one cares and thereby invites more serious criminal behavior. A policy experiment might randomly select high-crime areas in a number of cities to receive this form of policing and then measure impacts on serious crime. Instead, they suggest an experiment in which you buy a fleet of used cars, break the windows in half of them, place them in a randomly selected subset of neighborhoods, and then measure directly whether more serious crimes increase in response to the broken windows.

·         Medical efficacy trials are a form of mechanism experiment: compliance and use of the medicine are strictly monitored in order to test whether the medicine itself changes outcomes.

·         Several of the experiments I’ve been involved in could be considered mechanism experiments. E.g. randomly giving capital grants to firms in Sri Lanka can be viewed as a microfinance mechanism experiment: rather than testing whether microfinance increases firm capital, which in turn increases firm profits, our experiment places capital directly in firms and measures how it affects profits.

The authors argue that testing the theory or mechanisms underlying a policy is useful from both a policy and research perspective:

-          They suggest that in many cases the mechanism experiment can be far cheaper to run, and it then serves as a useful screen on policy even when it does not mimic a real (or even feasible) policy. This allows policymakers to rule out policies for which there is no evidence that the hypothesized mechanism works. E.g. suppose policymakers want to encourage firm innovation by expanding the supply of high-tech training programs, on the theory that firms will enroll in these programs, learn, and innovate. Before launching a costly exercise of building new training infrastructure, a mechanism experiment might pay a randomly selected subset of firms to get trained, and then measure directly whether the training leads to innovation. If it does not, policymakers may wish to rethink increasing the supply of training centers.

-          This has the advantage of allowing the mechanisms behind policies that would be implemented at a national or district level to be tested experimentally at the individual level. E.g. would making it easier for firms to formalize lead to faster growth of these firms? Although the policy experiment would likely have to be run at the national or regional level at which regulations are implemented, the mechanism experiment could choose individual firms to formalize and then trace the impact of this on firm growth – I’m working on experiments in Sri Lanka and Brazil which do just this.

-          They note that giving mega-doses or unrealistically intensive treatments can be useful in forecasting the effects of a wider range of realistic policy options – essentially, extreme treatments can bound the likely effects of a range of real policies.

-          By unpacking the black box and understanding the underlying mechanisms, it may be easier to transfer the findings across to other settings, alleviating external validity concerns.

All very nice it seems. Moreover, since the paper is written for the Journal of Economic Perspectives, it is a friendly and easy read. And as I’ve indicated above, I certainly think there is merit to pursuing more of these types of experiments.

So what is the downside or concern? In one of my first posts on this blog I discussed a paper by Imai and co-authors on unpacking the causal chain. A key point in that paper is that experiments which establish the link between P and M, and between M and Y, are not generally enough to then establish the effect of P on Y – one needs either constant treatment effects or sequential ignorability. To take the broken window policing example, the concern is that the types of places in which broken windows lead to more serious crimes may not be the types of places in which broken window policing policies actually do much good in reducing broken windows. Or to take my microfinance and grants example, the types of businesses that have the highest returns to capital may not be the types of businesses that microfinance lenders will lend to – indeed this is what we find in a forthcoming paper.

This means that establishing a mechanism through a mechanism experiment will not be a sufficient condition for establishing that a policy will work, even in cases where we know that the policy also changes that mechanism. Moreover, since other mechanisms may be at work, finding a causal relationship in a mechanism experiment is also not a necessary condition for proving a policy to work. Ideally one therefore combines the policy and the mechanism experiment, randomly changing the dose of the mechanism within a broader policy experiment. Nonetheless, despite these caveats, I think there is a lot we can learn from these stand-alone mechanism experiments, and they certainly open up a wider range of policy questions to which experiments can offer some guidance.
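To make this caveat concrete, here is a minimal simulation of the microfinance example, with hypothetical numbers chosen purely for illustration (they are not from the papers mentioned above). The mechanism experiment gives capital to a random cross-section of firms, while the policy (microfinance lending) moves capital only in the firms lenders actually reach – assumed here to be the low-return ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two firm types (hypothetical): half are high-return firms where extra
# capital raises profits a lot, half are low-return firms where it barely does.
high_return = rng.random(n) < 0.5
effect_of_capital = np.where(high_return, 10.0, 1.0)  # dY/dM by firm type

# Mechanism experiment: randomly place capital (M) in firms of both types.
treated = rng.random(n) < 0.5
y_mech = effect_of_capital * treated + rng.normal(0, 1, n)
mech_estimate = y_mech[treated].mean() - y_mech[~treated].mean()
# ≈ 5.5: the average effect of capital across all firms looks large.

# Policy: lenders only move M for the low-return firms they lend to,
# so treated high-return firms never actually receive capital.
gets_loan = treated & ~high_return
y_policy = effect_of_capital * gets_loan + rng.normal(0, 1, n)
policy_estimate = y_policy[treated].mean() - y_policy[~treated].mean()
# ≈ 0.5: much smaller, because capital only moves where returns are low.

print(round(mech_estimate, 2), round(policy_estimate, 2))
```

The gap between the two estimates is exactly the heterogeneity problem: the mechanism experiment averages over firms the policy never reaches, so a strong mechanism estimate need not translate into a strong policy effect.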

 


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
