Published on Development Impact

Is it the program or is it participation? Randomization and placebos


So recently one of the government agencies I am working with told me that they were getting a lot of pressure from communities that had been randomized out of the first phase of a program. The second phase is indeed coming (that is when those communities will get funding for their part of the project), but the second round of the survey has been delayed, as was implementation of the first phase. That doesn't make the pressure any less understandable.

Around the same time, I was reading an article in The Economist about alternative medicine. The article ended up being largely about the placebo effect: apparently, for a swath of alternative medicine, the effects are no better than those of a placebo. Which actually isn't a bad showing. According to the article, placebos seem to work reasonably well for disorders that are predominantly mental and subjective.

So these two things together got me wondering about the effects of the impact evaluation itself on the control group in the randomized evaluations I work on. Clearly, while some may doubt the efficacy of government interventions overall, we don't offer those in the control group a sugar pill (or fake surgery). Most of the programs we evaluate make this impossible. Can you imagine walking up to someone and saying: “OK madam, I am going to offer you this fake money conditional on you sending your child to school”? And a fair amount of what I work with, whether antiretroviral drugs or certain agricultural technologies, has actually been tested in an RCT or scientific field trial before being introduced.

But surely some of the control group must be annoyed at being randomized out, particularly when the randomization is obvious. Now some folks I have talked to believe that making the randomization super-explicit and public can help reduce this, which makes sense to me in some contexts. But in a range of settings it is plausible that these folks get annoyed and that this colors their responses to the surveys (wait, you're not giving me business training AND you want me to talk to your survey team for two hours???). Or maybe it just makes them angry or sad.

Evidence from a recent paper by Berk and his coauthors Sarah Baird and Jacobus de Hoop sheds some light on this potential effect. They are running what is easily the world's most complicated CCT program (with all kinds of interesting resulting analysis), and this particular paper looks at mental health effects, specifically the impact of transfers on psychological distress. What caught my eye are the results for the control group. There is a two-tier randomization here (among others): communities were selected at random, and then within treatment communities the intensity of treatment was also randomized, so you end up with both control communities and control households within treatment communities.

They find that girls who were eligible but randomized out, and not living in a household with a treated girl, experience an increase in psychological distress in treatment areas relative to schoolgirls in control areas. The effect is about equal in magnitude (but opposite in sign) to the positive effect of receiving the conditional cash transfer. There is heterogeneity here, likely driven by some kind of positive spillover: girls who were randomized out BUT lived in households with a girl who was selected show the reverse, a reduction in psychological distress. This is all very complicated, but one possible conclusion is that seeing other girls in the village get a transfer makes you worse off psychologically, but not if that girl is your sister. If this is right, it is worse than some sort of response bias: the evaluation is actually making people more depressed (the good news in this case is that the differences seem to lessen once the program ends).
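To make the design concrete, here is a minimal sketch of that two-tier assignment (in Python, with made-up community and girl identifiers and an invented set of intensity levels; this illustrates the structure, not the authors' actual assignment code):

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical sampling frame: 10 communities, 20 eligible girls each
communities = [f"community_{i}" for i in range(10)]
frame = {c: [f"{c}_girl_{j}" for j in range(20)] for c in communities}

# Tier 1: randomize communities into treatment and control
shuffled = random.sample(communities, len(communities))
treatment_communities = set(shuffled[: len(shuffled) // 2])

assignments = {}  # girl -> one of three statuses
for community, girls in frame.items():
    if community not in treatment_communities:
        # Pure control: no one in this community is treated
        for g in girls:
            assignments[g] = "control_community"
    else:
        # Tier 2: the share of eligible girls treated ("intensity")
        # is itself randomized across treatment communities
        intensity = random.choice([0.33, 0.66, 1.0])
        treated = set(random.sample(girls, round(intensity * len(girls))))
        for g in girls:
            assignments[g] = ("treated" if g in treated
                              else "control_within_treatment")
```

The girls tagged control_within_treatment are exactly the group the paper flags: they see neighbors receive transfers while getting nothing themselves, and comparing them with girls in pure control communities is what identifies the psychological cost of being visibly randomized out.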

So this suggests looking at two control groups, those that have heard of the program and those who have no idea about it, and checking whether there is in fact some downward trend in the former. Or even trying different approaches to randomization (an explicit public lottery announced well in advance versus a less in-your-face form of randomization). Has anyone seen a study that was able to capture these issues?
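If someone did field such a design, the first-pass comparison is straightforward. Here is a sketch of it, with simulated placeholder data and hypothetical variable names; note that unless awareness itself is randomized (say, via the public-lottery arm), this comparison is descriptive rather than causal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey data: a distress score for control respondents,
# split by whether they knew they were randomized out of the program
aware_scores = rng.normal(loc=0.3, scale=1.0, size=200)    # placeholder
unaware_scores = rng.normal(loc=0.0, scale=1.0, size=200)  # placeholder

# Simple difference-in-means test of the "annoyed control" hypothesis:
# do aware controls report more distress than unaware controls?
t_stat, p_value = stats.ttest_ind(aware_scores, unaware_scores,
                                  equal_var=False)  # Welch's t-test
diff = aware_scores.mean() - unaware_scores.mean()
print(f"mean difference = {diff:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```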

But what about the treatment group? In talking about this with David, he raised the point that some of our interventions might have an effect just because people were picked. For example, the content of a given business training program might be totally useless, but the participants might get a confidence (and hence profit) boost simply because they were selected. Again, making the randomization explicit and public might mitigate this effect, but one might still wonder whether those selected believed they were lucky and this drove the result. So one can imagine a study that would help disentangle this, but has anyone seen one? There is a significant literature in medicine, but the nature of the interventions there is different enough that it would be nice to see separate evidence for social programs.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
