Do Development Projects Need Stopping Rules?

By David McKenzie

It is not uncommon to read about medical trials for new drugs that get stopped early because they are too successful, making it unethical not to offer the treatment to the control group (e.g. this cancer trial), or because early results show them not to be working, as in this study description, which has the evocative phrase “the high-dose [treatment] arm had crossed the futility boundary after 90 deaths”. In contrast, I have never read of a development field experiment that was stopped for being too successful in increasing schooling or reducing poverty, etc. – nor one that was stopped for worsening them. Likewise, the only times one hears of policies being stopped are from protests by those affected, rather than from dispassionate study of the project’s effects – and it is rare to hear of policies being implemented faster than planned because they are so successful (and without a control group, it is difficult to know whether they really are). But a discussion in the comments last week about whether and when we should share results of experiments with participants got me thinking: does development need formalized stopping rules?

This seems an underexplored question in economics. With the exception of economic studies of medical interventions, I don’t think it is an issue raised during IRB (institutional review board) ethical reviews of projects at universities; nor, as far as I am aware, is it something formally raised as part of project review on World Bank projects. I also suspect that many economics journals would be skeptical of a study being stopped for these reasons and would not view the resulting research paper as favorably as they would if the study had been carried out to its planned end.

So is a development project or experiment different from a medical intervention? I think it can be, in a number of important dimensions, especially when we consider development interventions that are not medical interventions (so set aside malaria bednets, AIDS drugs, micronutrients, etc.) and think instead of training programs, loans, subsidies, building new infrastructure, etc.

- We are a lot more uncertain about the trajectory of impacts: This was a point Michael Woolcock made in a post on this blog: not only may the short-run and medium/long-run impacts of some of our policies differ, but we may also lack a good theory or understanding of when we should expect to see effects. Without knowing this, it would be difficult to know when to stop a trial.

- More measurement is often needed to even make this feasible: In medical trials, frequent measurement of patients occurs during regular clinical visits/check-ups. In contrast, the norm in most field experiments is only one or two follow-up surveys, while many projects that are not designed experimentally have no data on comparison groups, or rely on a single large post-survey – so we need more survey rounds (more T) to even make stopping rules feasible.

- The logic of limited funding or capacity justifying randomization may also make it hard to cover the control group quickly: Often the justification for not giving an intervention to a control group is two-fold: i) we don’t know whether it works (or whether it works well enough to be cost-effective); and ii) funding or capacity constraints limit the number of people we can give it to at once. Even if early results remove the uncertainty in i), the constraints in ii) may still hold and prevent early stopping for positive results.

- A good number of interventions can’t be stopped if they go badly: The logic in a drug trial is that if, for example, people in the treatment group experience severe side-effects or excess mortality, you can stop the trial and stop giving them the new drug. In contrast, if the intervention is to train people, build new infrastructure, or give them a few months of a consultant, then once you have done this, allowing more time to pass just allows more time to see the effects – you can’t easily undo the intervention, only halt new implementation. Of course, other interventions (e.g. conditional cash transfers, food subsidies, microfinance) are more ongoing, and so there may be more urgency to halt them if they are found to have negative effects.

As a result of these points, the number of cases where formal stopping rules could be used seems more limited in economics. But there are likely to be some such cases – so I don’t think the above points say we shouldn’t have stopping rules, just that they shouldn’t be universal. There doesn’t seem to be much discussion of this topic in economics, so it would be good to get other people’s views – should we have formal stopping rules? If so, how would you see them working?
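For readers unfamiliar with how stopping rules work in clinical trials, here is a minimal sketch in Python of the group-sequential logic referred to above: interim looks at the accumulating data, a conservative efficacy boundary, and a futility boundary. All the numbers (effect sizes, boundaries, the look schedule, the sample size) are hypothetical illustrations, not recommendations for any actual trial design.

```python
# Minimal sketch of a group-sequential stopping rule.
# All parameters below are hypothetical, for illustration only.
import random
import math

def z_stat(treat, control):
    """Two-sample z-statistic for a difference in means."""
    nt, nc = len(treat), len(control)
    mt = sum(treat) / nt
    mc = sum(control) / nc
    vt = sum((x - mt) ** 2 for x in treat) / (nt - 1)
    vc = sum((x - mc) ** 2 for x in control) / (nc - 1)
    se = math.sqrt(vt / nt + vc / nc)
    return (mt - mc) / se

def run_trial(effect, n_per_arm=400, looks=(0.25, 0.5, 0.75, 1.0),
              efficacy_z=2.8, futility_z=0.0, seed=0):
    """Simulate a two-arm trial with interim looks.

    At each look, stop early for efficacy if the z-statistic crosses
    a conservative boundary, or for futility if it falls below the
    futility boundary; otherwise continue to the next look.
    """
    rng = random.Random(seed)
    treat = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
    control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    for frac in looks:
        n = int(n_per_arm * frac)
        z = z_stat(treat[:n], control[:n])
        if z >= efficacy_z:
            return ("stopped for efficacy", frac, z)
        if frac < 1.0 and z <= futility_z:
            return ("stopped for futility", frac, z)
    return ("ran to completion", 1.0, z)

print(run_trial(effect=2.0))  # a huge effect: stops at an early look
print(run_trial(effect=0.0))  # no effect: may cross the futility boundary
```

The sketch also makes the measurement point above concrete: the rule only works because the outcome is observed continuously as subjects accrue. With one or two follow-up surveys there are simply no interim looks at which such a boundary could be checked.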

Comments

Interesting thoughts. One key point is that stopping rules in clinical trials (or public health community trials) are often necessary because the treatment and placebo control are blinded to the researchers, so there's one committee (not running the study) who can analyze the unblinded data as you go to ensure the researchers aren't prolonging harm to one group longer than is necessary for the purposes of the study.