One of the comments we got last week was a desire to see more “behind-the-scenes” posts of the trials and tribulations of trying to run an impact evaluation. I am sure we will do more of these, but there are many times I have thought about doing so and baulked for one of the following reasons:
· Google and Hawthorne effects: as I’m trying to do more experiments with SMEs, many of whom use the internet, I am afraid of them coming across a description of the experiment on the web and this possibly biasing their behavior, their answers to surveys, or even their willingness to continue to be interviewed.
· Media and the word “experiment”: For most of the experiments and evaluations being done with interventions to help poor people, the risk of anything more than a trivial number of participants finding out about the evaluation through the web is slim to none (and yet the University of California human subjects committee still insisted we add an email address to the informed consent form!). My bigger concern is that a journalist who either doesn’t understand what we are doing, or who is looking for a sensational story, will pick up on the word “experiment” and write a story with a headline like “World Bank experimenting on our poor!” I think this fear is more of a concern working at the World Bank than at a university, and more of an issue when evaluating government programs than when working with NGOs, so it is more of a potential problem for us than for our friends in academia. I don’t think the chance of it happening is that large in any particular case, but the risk is certainly there, and such a story could cause problems in either implementation or follow-up surveys.
· Keeping partners happy: Many of the behind-the-scenes lessons, stories, mini-disasters, and other interesting tidbits come from interactions with partners – be they the intervention implementers, the survey firm, or the policymaker behind the intervention. However, these are all people you need to keep on your side, especially when things are going badly. Sharing the lessons from the latest calamity in the field while it is still fresh might make for an entertaining blog post that could help others avoid the same mistake, but it risks causing umbrage to partners, especially if they are the reason the problem occurred. Not to mention the political angle in some cases – I’ve already experienced one case where I wrote about an experiment that went wrong because the partner organization invited the control group along to make up for non-attenders in the treatment group, and I got complaints that I was blaming a client. On the other hand, once everything has been resolved and the intervention completed, partners may be more willing to discuss what in retrospect look like minor hiccups along the road, even if they seemed much more serious at the time.
· What is the value added of such a pre-emptive post? Given the issues above, I feel there needs to be a strong case for good value added from posting while the intervention and surveys are ongoing. While we try to entertain on occasion, a good story per se doesn’t seem reason enough – so either there has to be a lot of potential value to others in sharing the experience, or a lot of potential value in getting feedback on what we have done, to make blogging about an ongoing project seem worthwhile. So expect some of this on occasion from me, but less than there would be if I weren’t worried about the things above.
Readers – do you know of any examples where blogging about a project (or posting descriptions on a webpage, like the IPA, JPAL, or DIME project descriptions) has affected the project itself? Am I worrying too much?