
How should we understand “clinical equipoise” when doing RCTs in development

While the blog was on break over the last month, a couple of posts caught my attention by discussing whether it is ethical to do experiments on programs that we think we know will make people better off. First up, Paul Farmer on the Lancet Global Health blog writes:
 
“What happens when people who previously did not have access are provided with the kind of health care that most of The Lancet’s readership takes for granted? Not very surprisingly, health outcomes are improved: fewer children die when they are vaccinated against preventable diseases; HIV-infected patients survive longer when they are treated with antiretroviral therapy (ART); maternal deaths decline when prenatal care is linked to caesarean sections and anti-haemorrhagic agents to address obstructed labour and its complications; and fewer malaria deaths occur, and drug-resistant strains are slower to emerge, when potent anti-malarials are used in combination rather than as monotherapy.
It has long been the case that randomized clinical trials have been held up as the gold standard of clinical research... This kind of study can only be carried out ethically if the intervention being assessed is in equipoise, meaning that the medical community is in genuine doubt about its clinical merits. It is troubling, then, that clinical trials have so dominated outcomes research when observational studies of interventions like those cited above, which are clearly not in equipoise, are discredited to the point that they are difficult to publish”

This was followed by a post by Eric Djimeu on the 3ie blog asking what else development economics should be learning from clinical trials, in which he writes:
 
“In public health research, the justification for randomly assigning participants is based on clinical equipoise. This means that clinical trials are implemented only when the researchers have substantial uncertainty (doubt) about the expected impact (efficacy) of the intervention (drug). The researchers may arrive at this conclusion after having reviewed the available research in the field. Clinical equipoise is then a necessary condition for the ethical justification of conducting RCTs. Hence, in public health, the first function of the Institutional Review Board is to ensure that clinical equipoise exists for new RCTs.
But in the development sector, economists are not aware of the need to establish clinical equipoise before conducting RCTs of development interventions. Since RCTs are being increasingly used by development economists, we should start thinking about how clinical equipoise can be established for impact evaluations of development interventions.”

How should we understand clinical equipoise?
My problem with these posts is that they seem to understand clinical equipoise as requiring uncertainty about whether or not an intervention makes people better off, without taking into account the cost of the intervention relative to “how much” better off it makes people. But we do not live in a world without budget constraints, so the standard of clinical equipoise needs to be more along the lines of doubt over whether this use of funds makes people better off relative to any other possible use of funds in the country, or, for international organizations, the world. Anyone who thinks there is not considerable uncertainty about this question is likely deluding themselves.

What does this mean in practice?
 
  • We need to do a much better job of documenting intervention costs in our studies – this should include both the direct costs of any treatment given to individuals (e.g. the amount of grants given as transfers, or the cost of malaria nets) as well as the administrative costs involved in implementing them. It is hard to justify a study on the grounds that it is needed to compare the cost-benefit of different interventions if costs are not provided! This also relates to a recent discussion by Chris Blattman on his blog about whether we should be benchmarking interventions against simply giving individuals the equivalent amount in cash: as Chris notes, “I’ve seen many, many, many projects that spend $1500 training and all the “other stuff” in order to give people $300 or a cow. Is it fair to ask, what if we’d just given them $1800? Or what if we’d given six people cows? Seriously, your one guy does six times better than that?” (A stylized version of this comparison is sketched after the list.)
  • There are hardly any treatments where entire world coverage is the likely outcome, so we are almost always in the situation of having to choose whom to give something good to, and of someone who could benefit from it not receiving it. This presents two reasons for randomization and experimentation: first, experimenting to learn how to better target individuals when there is uncertainty about the distribution of benefits. E.g. if we have 1,000 more malaria nets to give, should we give them to pregnant mothers in Sierra Leone or to families with young children in Chad? Second, the usual story of random assignment being an ethical way to give everyone who would benefit the same chance of receiving the treatment applies once you have narrowed it down to the groups you expect to benefit most.
  • Third, even in the rare cases where it is possible to try to get 100% world or country coverage, there is debate about the ethics of doing so compared to spending the money on other things. This shows up in the case of trying to eradicate polio, where there is debate over whether disease eradication is ethical (here is the case for). So Paul Farmer’s point that we know better healthcare is good is surely not sufficient – we need to know how good it is relative to other things we could be doing with the same money.
  • Finally, we need to think beyond individuals and also consider the collective good. This comes up most strongly for interventions that may be privately undesirable but publicly desirable, but it also applies when there are positive or negative spillovers – another area where we need more research.
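To make the first bullet concrete, here is a minimal, purely illustrative sketch of the cost-benefit bookkeeping being asked for: compare a programme’s full delivery cost (administration plus transfer) against the benchmark of simply giving the same budget as cash. The $1,500 overhead, $300 transfer, and $1,800 cash figures come from the Blattman quote above; the benefit numbers are made-up placeholders, not estimates from any study.

```python
def cost_effectiveness(total_cost, estimated_benefit):
    """Estimated benefit generated per dollar spent, counting overhead as well as transfers."""
    return estimated_benefit / total_cost

# Training-plus-transfer programme: $1,500 of administration to deliver $300 of value.
# The estimated_benefit figure is a hypothetical placeholder.
program = {"admin_cost": 1500.0, "transfer_cost": 300.0, "estimated_benefit": 900.0}

# Cash benchmark: give the whole $1,800 budget directly, with negligible overhead.
# Benefit assumed equal to the transfer, again purely for illustration.
cash = {"admin_cost": 0.0, "transfer_cost": 1800.0, "estimated_benefit": 1800.0}

for name, arm in [("training-plus-transfer programme", program), ("cash benchmark", cash)]:
    total = arm["admin_cost"] + arm["transfer_cost"]
    ratio = cost_effectiveness(total, arm["estimated_benefit"])
    print(f"{name}: ${total:,.0f} spent, ${arm['estimated_benefit']:,.0f} estimated benefit, "
          f"{ratio:.2f} of benefit per dollar")
```

The point of the sketch is only that the comparison cannot be made at all unless studies report the administrative costs alongside the value of what is transferred.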

Comments

Submitted by Ryan Cooper on

I agree, although I would alter the order of reactions above and give a slightly different angle to the arguments. 1) The 'clinical equipoise' argument should be applied equally to any prospective study that does not treat 100% of the eligible population, regardless of the assignment rules (random or some other sort of algorithm). 2) If there is excess demand from eligible subjects to receive a particular treatment at a given point in time, the ethical question linked to 'clinical equipoise' has to do with the decision to leave eligible subjects without treatment, not with the mechanism or criteria chosen to select treatment and control groups. RCT vs. non-RCT has to do with the latter. So any design (RCT or not) that leaves subjects without treatment in the presence of 'clinical equipoise' would be unethical. 3) But in most situations there is excess demand, and thus, regardless of the desirability of leaving people without treatment, it simply is not technically or financially possible to treat everyone. In this situation, as David reminds us, an RCT can be the fairest way of assigning treatment, given that after applying all reasonable targeting criteria, a lottery gives everyone the same chance. 4) There is the first issue David mentions about costs and opportunity costs. I have two questions, though: a) Say I have three equally costly competing treatments that aim to meet a common social objective. If I can apply 'clinical equipoise' to one of them and not to the other two, would this not imply that the equipoise one is better than the others? b) Assuming a situation with three alternatives where I can apply 'clinical equipoise', does the concept apply to certainty about the direction of impact or also to the magnitude of impact? If it were only the former, I think your cost-effectiveness and alternative-cost argument is stronger. If it were also the latter, we could construct cost-effectiveness comparisons and rankings without any experiment, given that I would know the magnitude of the cost, the direction of impact, and also the magnitude of impact.

Having said this, a last question: in reality, is it common to face 'clinical equipoise' in development economics? I agree it's fair to consider this concept while prioritizing questions to answer, but does this not happen implicitly in academia?

Submitted by Ryan Cooper on

In short, the ethical issue related to 'clinical equipoise' has to do with coverage, not selection.

Submitted by Steve on

The conversation here is very interesting, but it is important to get the definition of equipoise right. The operational definition is not whether the researchers themselves are uncertain, but whether there is meaningful uncertainty, or observed variation, among the community of practitioners, which in this case might be the policy makers, and possibly researchers. Freedman's contribution was to eliminate the concept of individual researcher (or team) uncertainty from the mix, as long as there is meaningful disagreement in the community. Now, the second question is what is the meaningful disagreement about? If it is about allocations of money to qualitatively different health interventions, then that should be the randomization, if indeed it is possible. If there is little doubt about the efficacy of a given allocation, or intervention, it probably shouldn't be randomized against not giving that intervention, although that depends on background conditions. Randomization to a suboptimal state can be justified depending on the counterfactual in that area. So this is indeed a complicated question, and parallels with medicine aren't perfect. There is something to be learned from the thinking that has gone on in medicine, but it has to be correctly framed. But the better medical parallel to development is the area of systems or quality improvement, which even in medicine can be very context-dependent.
