
The ethics of a control group in randomized impact evaluations – the start of an ongoing discussion

By Jed Friedman

Last year the British Medical Journal published the results of an impact evaluation of local immunization campaigns, with and without incentives, in rural India. Full immunization rates were very low in the study area (2%), and the researchers wanted to test two nested approaches to improving participation in immunization campaigns. The first was to introduce periodic immunization camps that would guarantee service on select days. The second was to add small incentives at the camps, in the form of food and household goods, that roughly approximate the opportunity cost of camp participation. Later that same year the BMJ published a letter accusing the study of violating ethical guidelines for research.

First, a summary of the study findings. The authors, Abhijit Banerjee, Esther Duflo, Rachel Glennerster, and Dhruva Kothari, determined that regular immunization camps raise the full immunization rate from 6% (as found in control communities) to 18%. Adding low-powered incentives to the camps further raises the immunization rate to 39%. These gains, while still far from universal coverage, are substantially higher than the rates in the “health-care-as-usual” situation in control areas. And because immunization rates were higher in the camps with incentives, the cost-effectiveness of that approach is actually much higher than that of the camps alone, suggesting that the provision of a small incentive is well worth its additional cost.
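The cost-effectiveness logic can be sketched with a back-of-the-envelope calculation. Only the immunization rates (18% and 39%) come from the study summary; the cost figures, village size, and the assumption that camp costs are largely fixed are all illustrative inventions, not numbers from the paper. The point is simply that spreading a fixed camp cost over more immunized children can lower the cost per child even after paying for incentives.

```python
# Hypothetical illustration of the cost-effectiveness argument.
# All cost figures below are invented for illustration; only the
# immunization rates (0.18 and 0.39) come from the study summary.

def cost_per_fully_immunized(fixed_cost, per_child_incentive, children, rate):
    """Total program cost divided by the number of fully immunized children."""
    immunized = children * rate
    total = fixed_cost + per_child_incentive * immunized
    return total / immunized

CHILDREN = 1000         # children in a catchment area (assumed)
CAMP_COST = 10000.0     # fixed cost of running the camps (assumed)
INCENTIVE = 10.0        # cost of food/goods per immunized child (assumed)

camps_only = cost_per_fully_immunized(CAMP_COST, 0.0, CHILDREN, 0.18)
camps_plus = cost_per_fully_immunized(CAMP_COST, INCENTIVE, CHILDREN, 0.39)

print(f"camps only:        {camps_only:.2f} per fully immunized child")
print(f"camps + incentive: {camps_plus:.2f} per fully immunized child")
```

Under these assumed costs, the camps-plus-incentive arm comes out cheaper per immunized child despite the extra incentive spending, which is the mechanism behind the cost-effectiveness claim.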

A subsequent letter published in the BMJ by Deepak MG and Anant Bhan takes the study to task on ethical grounds. The letter writers claim the study does not establish new knowledge (i.e. the world already knows that health behavior responds to incentives) and is thus not justified. With this claim the letter writers invoke the principle of Essentiality, the first principle listed in the Ethical Guidelines for Medical Research published by the Indian Council of Medical Research. The principle stipulates that approved research should be deemed absolutely essential after due consideration of all alternatives. To defend their claim of non-essentiality, the letter writers cite previous knowledge in the form of evaluations of recurring CCT programs in Latin America. While I agree that essentiality is a key principle of ethical research, I don’t see how the study violates this principle. Specifically:

1. It is far from obvious that this previous knowledge applies to the more episodic immunization camps and lower-powered incentives adopted by the study in rural India.

2. The study was approved by three independent review boards and was funded by several donor agencies. Presumably all the review boards and donors considered the “essentiality” of the study, and all agreed that it should go forward. People may hold opposing views on the “essentiality” of a proposed study, but there is no central board that makes such a determination, and that is probably a good thing. In a decentralized review process, the fact that at least one accredited body deems a study essential should be sufficient. Indeed the research in question did provide evidence that was used in subsequent policy discussions, as can be seen in this note by Jishnu Das and this longer article by Ramanan Laxminarayan and Nirmal Ganguly.

3. As made clear in the first paragraphs of the published paper, the research question is not just whether immunization camps have an observable effect at all but rather the magnitude of any effect and whether an effect of that magnitude justifies the spending of public resources at the expense of alternative uses. I do not study immunization policy, but it certainly seems legitimate to ask just how effective immunization campaigns are and whether low-powered incentives such as those included in the study are a cost-effective way of improving immunization rates. This is not the first time the validity of policy-oriented research questions has been challenged by medical professionals but this does not make these questions any less essential.

The letter writers do not stop with accusations of inessentiality; they also claim that, by virtue of having a control group, the study violates the principle of the Non-Exploitation of Study Subjects as well as the norm that all participants should be beneficiaries of the research. What do they mean by these claims?

Again taking the Ethical Guidelines for Medical Research linked above, the Non-Exploitation Principle states that research participants should be remunerated for their involvement in the research and kept fully apprised of all potential dangers from the study. There were no dangers to the control group associated with the study, so the assessment hinges on what is meant by “remunerated”.

The ethical guidelines of the Indian Council of Medical Research are somewhat vague on the meaning of this term; however, at first glance it appears to be a stronger principle than the “Do No Harm” principle stipulated in the seminal 1979 Belmont Report. The guidelines state: “Participants may be paid for the inconvenience and time spent, and should be reimbursed for expenses incurred, in connection with their participation in research”. So perhaps the remuneration stipulated in the non-exploitation principle should take the form of direct payments? However, at a later point, the same ethics document discusses the conduct of epidemiologic research and concludes:

In a country like India, with the level of poverty that is prevalent it is easy to use inducements, especially financial inducements, to get individuals and communities to consent. Such inducements are not permissible. However it is necessary to provide for adequate compensation for loss of wages and travel / other expenses incurred for participating in the study.

So in one sense, the remuneration claim relates to compensation for time lost as a result of study participation. I don’t know whether interview subjects were nominally remunerated for their interview time in this case, but it is a fairly common practice and I would not be surprised if the investigators did indeed do so.

More generally remuneration surely takes the form of knowledge advancement and the possibility that participating control communities will derive future benefits from the research in the form of improved policy. (And these benefits would not be possible without the results of the research in question.) Indeed this relates to yet another principle, the Maximization Principle, as set forth in the same Ethical Guidelines:

Maximization of the public interest where the research is conducted to benefit all human kind and in particular the participants themselves and the community from which they are drawn.

The authors clearly state that assuring immunization is a great challenge in the study communities and surrounding areas – any policy learning from the study is meant foremost to benefit these underserved communities. In sum, as with the first charge of non-essentiality, I don’t see how the second charge of exploitation is valid.

Hopefully this disagreement has illuminated select principles of ethical research as well as their interpretation. The title of this post concerns the ethics of control groups, so let’s now ask what the ethical standard for the use of control groups should be. Should control groups benefit in some direct way from the study, or is the assurance sufficient that any policy gains from the knowledge generated will be widely shared? In part this determination relates to the essentiality claim. If a randomized study is already deemed essential to the extent that the internal-validity requirement necessitates a control group, then the benefits of increased knowledge and policy learning may satisfy the remuneration claim in the non-exploitation principle. From this view, it comes down to whether a research question is deemed essential by at least one peer review group.

But perhaps researchers can do more. In contrast to providing no direct benefits for the control group, Osrin and colleagues put forward the principle of “no survey without service” for the use of control groups. Under this principle, control groups should receive benefits more tangible than policy learning, but these benefits must be unrelated to the study question so as not to confound any inference of causal impact. Examples given by Osrin and co-authors include supplementary training for health workers on issues not addressed by a particular study, or health information campaigns for the populace, again on unrelated issues.

Right now “no survey without service” is not standard practice in socioeconomic impact evaluations, yet it is an interesting principle that should be more widely considered and instituted where feasible. However, I can imagine evaluations where providing any service or benefit to a control group would confound the inference of program impacts. In these situations, perhaps we again need to return to the essentiality principle: a study deemed sufficiently essential may be justified even if services can’t be provided to the control group (at least for the duration of the study).

Impact evaluations in the social sciences may soon need their own specific ethical guidelines and I wonder if this principle of “no survey without service” is one that should be considered. If so, then researchers will also need guidance as to when this principle can justifiably be suspended.

Comments

Submitted by Gabriel on
An argument against the use of controls was made by Jeff Sachs et al regarding the Millennium Villages. In a 2007 paper they wrote: "For ethical and practical reasons, there are no formal “control” villages... The ethical reasons relate to the fact that many core interventions (e.g., malaria control, access to safe water) are life-saving and would be ethically inappropriate to deny in a control village." Link to paper: http://www.pnas.org/content/104/43/16775.full This seems to be a more extreme version of the "no survey without service" argument you outline. Subsequently, the MVP people have said they have collected data at some control sites, so apparently they changed their minds. In an e-mail exchange with Jeff about this some time ago I asked what had caused them to change their thinking, but he never responded on that point. The question I would ask people who object to the collection of data for a control group is whether their same objections would apply to the collection of data in general, when it is NOT part of an RCT. Logically, it would seem that they should. But this takes you to the bizarre conclusion that all data collection without compensation is unethical. Would they apply this, say, to all the censuses collected around the world, which I'm almost sure in not a single case pay compensation to respondents? I don't know the India case, but I would bet that most surveys conducted by the government do not compensate respondents.

Gabriel, thanks very much for the thoughts and the link. I agree with your reflection on the disconnect between the ethics of observational studies and the ethics of RCTs when the control group is essentially an observational sample. However some researchers believe when a specific intervention is tested then more stringent ethical principles apply, even if the study activities in the control are virtually identical to what we do in observational studies. Personally I have difficulty with the logic but it seems that others are quite impassioned about this. One concern is with equity. Systematically favoring the treatment subjects with an intervention can be seen as unfair (although presumably we don't know if the intervention imparts significant benefits, otherwise why evaluate?). The reality though is that programs are often differentially applied - the MVP is explicitly a demonstration pilot, government programs often have difficulty reaching remote areas, etc. - so non-beneficiaries already exist in many contexts. Given this, I personally don't understand why it is unethical to talk to the non-beneficiaries, especially if the research is deemed "essential" to improving future welfare.

Thanks Jed and Gabriel for useful discussion of a great post. Jed you mention the common concern with equity, when interventions are differentially applied (which is essentially all interventions). Nothing serves that goal more than randomization, which raises an interesting point: Even if observational evaluation methods somehow had flawless internal validity, as long as the treatment must be differentially applied, a strong argument could be made for randomization *on equity grounds alone*.

Submitted by Gabriel on
Just to fine-tune it a little, here's the series of questions I would ask someone making the argument, to try to understand what the objection is. I'll use the MVP as an example, not to pick on them (and recognizing that they're no longer making this argument) but just to make the question more vivid. (Also, I'm not making any claim here about what the MVP is actually doing in practice.) Which of the following is ethical/unethical?

1) Collecting a household survey like the DHS, which typically requires multiple hours of a household member's time to complete.

2) Implementing the MVP in a community selected by subjective criteria, e.g. in an area where a program official has contacts.

3) Implementing the MVP in a community selected by a formula based on transparent criteria, e.g. poverty rates, distance to town, population density, etc.

4) Implementing the MVP in a community selected at random among a set of communities selected using a formula based on transparent criteria.

5) Doing (2), (3), or (4), collecting data at the MVP site, and comparing what happens at the site to what happens in comparison areas using the DHS data.

6) Same as (5), but instead of using DHS data, using data collected specifically by the MVP in a set of comparison areas selected by a matching procedure.

7) Same as (6), but with both the treatment and comparison communities selected at random from the same pool.

I suspect that the impassioned objectors you're talking about will typically object to (4) or (7). In other words, it's not the data collection per se that they worry about; it's the idea of random selection that bugs them.

Michael, Gabriel - Thanks very much for both your comments. And I agree. Even after several years of practice, there is a great deal of unease with randomization as an assignment mechanism and this discomfort will persist for the foreseeable future. In part I believe this speaks to a metaphysical discomfort because there is no "actor" that dispenses fate (assignation), just a roll of the dice...

Submitted by Jacob AG on
I wonder what sort of evaluation he was looking for when he approached J-PAL in 2009... http://www.businessweek.com/magazine/content/10_28/b4186056393103_page_4.htm ...apparently not an RCT. Or something.

Submitted by Michael on
As an RA on this study I can report that minor remuneration was indeed provided to households completing our endline survey. It was indeed minor (a metal cup for each individual survey and a shawl for a household survey), intended neither to distort the responses of the recipients nor to be perceived as unjust inducement. It is also important to point out that the Udaipur area of Rajasthan is actually a fairly active research site for J-PAL. Indeed, in the randomization of villages, at least three different interventions (including iron fortification of grains and nurse monitoring) were cross-cut, resulting in practically every village being in the "treatment group" for one of the three interventions. Using cross-cutting randomization designs may therefore be a useful tool for reducing the ethical concerns around control groups. It clearly has practical appeal for researchers as well, since the lengthy (and costly!) household questionnaires can be used for many different studies.
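The cross-cutting design Michael describes can be sketched as follows. This is an illustrative toy, not the study's actual assignment procedure: the intervention names, the number of villages, and the independent 50/50 assignment probabilities are all assumptions. The idea is that each village is randomized into each intervention independently, so most villages are a treatment village for at least one study while still serving as a valid control for the others.

```python
import random

# Toy cross-cutting (factorial) randomization: each village is assigned
# independently to each of three interventions. Names, counts, and the
# 50/50 split are illustrative assumptions, not the actual study design.

INTERVENTIONS = ["immunization_camp", "iron_fortification", "nurse_monitoring"]

def cross_cut_assign(n_villages, interventions, seed=0):
    """Return a list of per-village dicts mapping intervention -> treated?"""
    rng = random.Random(seed)
    return [
        {iv: rng.random() < 0.5 for iv in interventions}
        for _ in range(n_villages)
    ]

assignments = cross_cut_assign(160, INTERVENTIONS)

# The control group for the immunization study is simply the set of
# villages not assigned to camps, regardless of the other interventions.
camp_control = [a for a in assignments if not a["immunization_camp"]]

# Share of villages that are a treatment village for at least one study
# (with independent 50/50 draws, roughly 7 out of 8 in expectation).
in_some_treatment = sum(any(a.values()) for a in assignments) / len(assignments)
print(f"{len(camp_control)} camp-control villages; "
      f"{in_some_treatment:.0%} treated in at least one study")
```

This also makes Jacob's follow-up concern concrete: the camp-control villages here may still receive the other interventions, so the design only identifies the camp effect if those interventions are unrelated to the immunization outcome.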

Submitted by Jacob AG on
...but doesn't cross-cutting randomization mess up the results? Where is the control group if every village is in the treatment group for at least one and up to three different RCTs?

Hi Jacob, I believe the key here is that the other interventions are unrelated to the determinants of the outcome of interest in the vaccine study. But this raises another potential question: does a study in a cross-cutting randomization need to mention the content of the other interventions benefiting the control group? This would allow readers to assess the potential for confounding themselves.

Submitted by Kabir on
I just discovered this blog and am very happy to have found it! In response to Gabriel and Jed above, I doubt the discomfort with randomization has anything to do with metaphysical beliefs about assignation. It is more likely due to a Rawlsian worldview geared toward helping the "poorest of the poor". Specifically in the Indian context, even poor villages are likely to have wealthy landowners, relatively wealthy (usually also higher-caste) families of traders, poor agricultural and other labourers, and very poor (usually lower-caste) agricultural and other labourers. None of the development project evaluations/studies I've seen, RCTs or otherwise, take these distinct groups into account, and really these could be further divided into sub-groups in many parts of the country. While it could be argued that village-level interventions make this point moot, my sense is that some sections of society even within a village do require a greater intensity of intervention. Of course none of this is to say that RCTs are always bad, and I apologize for veering so far off the intended purpose of this post, but I recently attended a lecture by Banerjee/Duflo in Delhi and was surprised that there seems to be no attention at all given to the ethics of randomized implementation.