I was recently talking with one of my younger colleagues, and she was lamenting something that was going wrong in an impact evaluation she was working on. She was thinking of throwing in the towel and shutting down the work. This reminded me of the horrible feeling in the pit of my stomach when something went wrong, back when I was starting out in impact evaluation (and research more generally). Now, of course, I am bald…
Which brings me to the roller coaster that was awaiting me when I came back from summer vacation (and I hate roller coasters).
Up: One project we had been working on was at a sticking point over randomization. One part of the program team (a powerful part) was balking at the idea of randomization because they wanted to make a policy/demonstration point. So our team was trying to find ways around this. In the end, while I was away, we got an agreement we could live with – a slightly unbalanced randomization combined with stratification that still leaves us with sufficient power. The moral of the story seems to be: argue your case with many people at once, and compromise.
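For readers curious what an unbalanced, stratified assignment looks like mechanically, here is a minimal sketch in Python. Everything in it is illustrative – the village names, the two-region stratification, and the 60/40 split are assumptions for the example, not the actual parameters of the project described above:

```python
import random
from collections import defaultdict

def stratified_unbalanced_assignment(units, stratum_of, treat_share=0.6, seed=42):
    """Assign units to treatment/control within each stratum at an
    unbalanced ratio (e.g. 60/40). Purely a sketch, not any project's
    actual randomization protocol."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = defaultdict(list)
    for u in units:
        strata[stratum_of(u)].append(u)
    assignment = {}
    for stratum in sorted(strata):          # sorted for determinism
        members = strata[stratum]
        rng.shuffle(members)
        n_treat = round(treat_share * len(members))
        for i, u in enumerate(members):
            assignment[u] = "treatment" if i < n_treat else "control"
    return assignment

# Hypothetical example: 20 villages in 2 regions, 60/40 within each region
villages = [f"v{i:02d}" for i in range(20)]
region = lambda v: "north" if int(v[1:]) < 10 else "south"
assign = stratified_unbalanced_assignment(villages, region)
```

Stratifying first and then drawing the unbalanced split within each stratum is what preserves power here: the imbalance costs a little efficiency, but the strata guarantee the ratio holds within each subgroup you care about.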
Down: Our main counterpart in another evaluation was busy over the summer ordering his folks to dig ditches where people really didn’t want them, so this guy, with whom we had spent years collaborating on building an interesting policy experiment, was sacked (and actually, I think, promoted into another ministry). The new guy is not so interested in our approach. So we have to start over again – we may have some hope, since his boss was on board with what we were doing – but we don’t know if we should call on him yet to support us. So maybe this broader constituency building will help us out, but I am left wondering if we could have somehow built it broader still.
Sideways: One government we are working with is completing implementation ahead of schedule. The design here is a randomized phase-in, and we were banking on having enough time for some impacts to materialize before we did our follow-up survey. But alas, these folks are even more efficient than their optimistic predictions. So now we have to scramble to get the second round of the survey into the field. Luckily, we (a) built in indicators that will show the different stages of the effects we are after and (b) built in a couple of different research questions.
Up: Before I left for the summer, we had a project that was on the edge of falling apart. We had invested some time in fundraising and design, but the project team was getting cold feet about setting up the intervention in a way that could be evaluated – they had been going back and forth on this for quite some time. The members of our team working on this had set up and discarded (or had discarded for them) a number of different options. But then, over the summer, the intervention and the design came together and the team could go to the field for the survey pretest. The patience and persistence of following the work to the brink seems to have been rewarded…for now.
Down: We had been set to do a fairly complex randomization (all sorts of stratification) with the final draw in public. But alas, alack, the wrong file was used for the randomization (the right file existed, it was just in a different folder in Dropbox). Luckily enough, the implementer was OK with a redo. The lesson here: double, triple, quadruple check before randomizing. And put in some kind of system for file control (any ideas out there?).
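Since the post asks for ideas on file control: one simple safeguard is to record a cryptographic fingerprint (e.g. SHA-256) of the agreed-upon sample file when it is finalized, and verify it right before the draw – a mismatched hash means you have the wrong file, wherever Dropbox put it. A minimal sketch (the file names are hypothetical):

```python
import hashlib

def file_sha256(path):
    """Return the SHA-256 hex digest of a file's contents,
    read in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Workflow sketch (hypothetical file names):
#   1. When the sample is finalized, record the digest somewhere safe:
#        expected = file_sha256("final_sample.csv")
#   2. Right before the public draw, verify the file you are about to use:
#        assert file_sha256(path_used_today) == expected, "wrong file!"
```

Proper version control (e.g. a Git repository for the randomization inputs and code) is the fuller answer, but even this two-line check would have caught the wrong-folder mistake before the draw.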
Down, down: Coauthors and I have two deadlines to produce output this month. It seemed like a great idea back in March – sure, we will have the data by June and plenty of time for some nice leisurely analysis over the summer. Yes, well, here it is September, and only one of the datasets has just come in… I had been moving away from requiring data entry while the team was still in the field, but I am starting to regret that (not least because we found two mistakes in the data that could have been rectified if the teams were still in the field).
Anyhow, here we go for another loop…back to work!