To deceive or not to deceive? David’s last post about Beaman and Magruder’s experiment, which involves a small and seemingly harmless deception, got me thinking about this perennially uncomfortable issue. David notes that this is an increasingly popular strategy, which worries me about the future of experiments in economics.
There must be a reason why we keep coming back to this subject: it makes us uncomfortable. Jed wrote about the conditions under which it is ethical to lie to the subjects in an experiment. I wrote about informed consent and mystery clients here. In a way, though, mystery clients are significantly different from deception: they are like clinical trials with placebo drugs or sham surgeries, except that the treatment is delivered by people rather than through a medical procedure. There is no deception: every subject is “read in” on the probability of receiving the treatment.
Jamison, Karlan, and Schechter (2008) discuss the pros and cons of deception in lab experiments and find that deceiving subjects and letting them know about it causes both extensive and intensive margin effects: deceived individuals are less likely to return to participate in a subsequent experiment, and those who do return behave differently. So, it may perhaps be both ethical and optimal to deceive at the level of an individual project, but in a world where these experiments are more common AND it is widely known that experimenters cannot be trusted to keep their word, we are likely to get results that are less and less externally valid. If there are no rules, we cannot function. Partly due to these considerations, Jamison et al. favor a proscription against deception in economic experiments.
I feel that this is even more important in field experiments. Sometimes, our subjects are not individuals but institutions: we evaluate the behavior of health clinics, communities, NGOs, etc. Deceiving institutions, or individuals who are part of institutions with long-term memory, could be very detrimental to development efforts. For example, it does not seem acceptable to tell subjects that the upcoming elections will be monitored by international or local monitors and then not monitor the elections at all: this is quite harmful if subjects were led to believe monitoring would happen with 100% probability and then it does not. Contrast this with telling everyone that some polling sites will be randomly monitored while others will not be, and then following through on this promised design: the latter is no different from the “mystery client” case.
My suggestion is the following. In economics, we should keep the rule that we should NOT deceive. However, there may be some research questions for which the researchers think some amount of deception is necessary. In such cases, reviewers and IRBs should push the researchers to think carefully about whether the deception is truly necessary, or whether there is an alternative design that is an acceptable compromise for the research question at hand and can avoid deception. For example, in the Beaman and Magruder case, if the recruited people were also paid for performance, then side payments would be less of a concern. In that case, the authors could identify only the ability component over and above the performance component, but that does not seem so bad – perhaps just as externally valid, if not more so. Only after the researchers convince reviewers and IRBs that the proposed research design with deception is the only way to get at the question AND that the benefits from this knowledge are greater than the costs to individuals (as well as to future researchers, due to potential negative spillovers) – then and only then should we allow deception.
P.S. I am off to New Zealand tonight to visit family and to enjoy the Rugby World Cup, so this will be my last post for about a month (look for me in the stands on September 24 during the All Blacks vs. Les Bleus match!). However, we have lined up some good guest bloggers for the upcoming weeks, in addition to the regular posts from David, Jed, and Markus.