Published on Development Impact

Ethical Validity Response #3: Would a graph help?

Martin’s post on Monday gave me some food for thought. Berk and David’s posts have added to this. For what it’s worth, I will throw in my two cents as an addition rather than a direct response.
 
I just want to push a bit more on David's point about information and targeting in the context of randomization. One of the ways I think about the ethics of the evaluations I engage in is with a two-axis graph. On one axis is “whether the intervention works” (and you can deepen this, for example, by thinking about different contexts) and on the other axis is “whether we know who needs it”. Clearly, anything where the intervention works and we know fairly precisely who needs it is not fair game for an RCT (think HIV treatment after the clinical trials, something coauthors and I tackled when we wanted to look at the economic impacts of antiretrovirals). So this quadrant is pretty much off limits. The other three are where you have to take a harder look at the ethics of what you are doing.
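
Since the title asks whether a graph would help, here is a minimal sketch of that two-axis picture in Python with matplotlib. The quadrant labels are just my shorthand for the framework described above, not anything more precise.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 6))

# Axes: x = whether the intervention works, y = whether we know who needs it.
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axvline(0.5, color="grey", linewidth=1)
ax.axhline(0.5, color="grey", linewidth=1)
ax.set_xlabel("Whether the intervention works \u2192")
ax.set_ylabel("Whether we know who needs it \u2192")
ax.set_xticks([])
ax.set_yticks([])

# Quadrant annotations (my own wording).
ax.text(0.75, 0.75, "Works and we know who needs it:\noff limits for an RCT",
        ha="center", va="center")
ax.text(0.25, 0.75, "Unproven, targeting clear:\ntake a harder look at the ethics",
        ha="center", va="center")
ax.text(0.25, 0.25, "Unproven, targeting unclear:\ntake a harder look at the ethics",
        ha="center", va="center")
ax.text(0.75, 0.25, "Works, targeting unclear:\ntake a harder look at the ethics",
        ha="center", va="center")

ax.set_title("A two-axis view of evaluation ethics")
plt.tight_layout()
plt.show()
```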
 
The “whether it works” part is more straightforward. Following on David’s points, I am less sanguine about the “who needs it” part. One way, maybe, to think about this is to break it into two pieces. One is that we can’t know, in an objective or perhaps even a semi-objective sense, who needs it. This is a big divergence from medicine. For a large class of afflictions, we can identify the disease or injury precisely with some tests or an examination. In social programs this isn’t so clear: who is poor, and what does it mean to be poor? These questions are open to a lot of valid interpretations. Social programs are messy, as they should be, since people and the communities they live in don’t run like human bodies and definitely not like machines.
 
The second piece is that information is costly, and perfect information is really, really costly. At some point, it isn’t cost-effective to get more precise. Let’s work with this a bit. Some information is essential for the success of a program; some information is complementary. Take a program that supplies agricultural inputs with the goal of alleviating rural food insecurity. It is essential that the program knows who is a farmer and who is not (or at least who is capable of farming). It is less essential, and more complementary, that the program knows who has more or less land. Obviously, knowing more complementary information will help the program target better and have higher impacts. However, every dollar spent on this identification is a dollar lost for treatment. So there is a balance here, and programs all over the world decide, all the time, where to stop spending on information and start spending on treatment; a toy version of that calculation is sketched below.
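
To make the trade-off concrete, here is a small, purely stylized sketch in Python. The budget, cost, and accuracy numbers are made-up placeholders; only the shape of the argument matters.

```python
import math

# Stylized illustration of the information-vs-treatment trade-off.
# Every number below is hypothetical.

BUDGET = 100_000          # total program budget
COST_PER_TREATMENT = 50   # cost of treating one household
IMPACT_IF_NEEDED = 1.0    # impact when a treated household actually needs the program
IMPACT_IF_NOT = 0.2       # impact when it does not

def targeting_accuracy(info_spend):
    """Hypothetical accuracy curve: 60% with no identification spending,
    approaching 95% with sharply diminishing returns."""
    return 0.60 + 0.35 * (1 - math.exp(-info_spend / 20_000))

best_spend, best_impact = 0, 0.0
for info_spend in range(0, BUDGET + 1, 5_000):
    n_treated = (BUDGET - info_spend) // COST_PER_TREATMENT
    acc = targeting_accuracy(info_spend)
    expected_impact = n_treated * (acc * IMPACT_IF_NEEDED + (1 - acc) * IMPACT_IF_NOT)
    if expected_impact > best_impact:
        best_spend, best_impact = info_spend, expected_impact

print(f"Under these made-up numbers, spend about ${best_spend:,} on identification.")
```

Past that point, each extra dollar of identification buys less accuracy than it costs in foregone treatment, which is exactly the balance programs are striking when they stop refining the targeting and start treating.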
 
So social programs almost always operate with imperfect information on who needs the program. Given the uncertainty, within the group of people who have been identified, about who needs the program more and who needs it less, and given a budget constraint that prevents treating everyone, a lottery strikes me as a fair way to allocate treatment in many (but surely not all) contexts.
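
For completeness, here is what that lottery might look like in practice, sketched in Python. The household names, the size of the eligible pool, and the number of treatment slots are all hypothetical.

```python
import random

# Among households already identified as eligible (with imperfect information),
# randomly select as many as the budget allows; the rest form the comparison group.

eligible = [f"household_{i}" for i in range(1, 501)]  # 500 identified as eligible
treatment_slots = 200                                 # budget covers 200 of them

random.seed(42)  # fixing the seed keeps the lottery reproducible and auditable
treated = set(random.sample(eligible, k=treatment_slots))
comparison = [h for h in eligible if h not in treated]

print(f"{len(treated)} treated, {len(comparison)} in the comparison group")
```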
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
