I just spent the last week in Ethiopia, and part of what I was doing was presenting some results from an impact evaluation baseline, as well as the in-progress final results of another impact evaluation. In all, I ended up giving four talks of varying length, not only to people working on these programs but also to groups of agencies working on similar projects that started after the ones we were analyzing.
While David and I have touched on the importance of sharing results in the past (David talking about sharing with respondents, and me talking a bit about with program implementers), a bunch of new things popped into my head this week.
First, it’s useful to share some of the baseline analysis. In this case, working with the program team a while back, we had come up with a set of questions that the baseline could answer that would be useful to them (and which might end up as descriptive papers). I presented results from one of these analyses to the project team, and it was clear that this excited them and helped build their motivation for a solid endline survey. This helps a lot, since we will be jointly supervising the endline survey and the money to pay for it comes out of their budget. I also presented these results to a group of agencies working on a related project; not only were the results of some use in a policy discussion they were having, but the agencies also gave me some useful insights into how we might slice the data further.
Speaking of useful feedback, for work on another evaluation I also shared with two other groups some analysis that we were fairly confident in, but where we hadn’t yet completed the analysis of the full set of potential outcomes. The fact that we were not finished turned out to be a good thing – it gave folks a lot of room to comment. In particular, they helped me understand which other outcome variables would be useful to look at (and why). There was also some good discussion of possible reporting bias – it turns out one of our outcome variables is a targeting variable for another program. In addition, although we had constructed this variable (the value of household livestock) in a very careful way, they pushed me toward other approaches that might help us uncover the unobserved quality dimension of livestock (e.g. testing for differences in mean value per animal between treatment and control), after the discussion led us to the possibility that the treatment may have led to improved quality rather than quantity. As the discussions progressed, another reason for sharing early results became evident: I could see the shape of the current policy discussion – both in terms of additional, follow-on programs and the set of new interventions that could possibly be tried – and this will help me figure out a more effective way to pitch our policy recommendations.
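To make that quality check concrete, here is a minimal sketch of what the comparison would look like in Python. The file and column names are hypothetical (this is not our actual data): the idea is simply to compute the value per animal for each household and test whether the treatment and control means differ, which is what we would expect if the program improved livestock quality rather than quantity.

```python
# Sketch of the quality check: compare mean livestock value per animal
# between treatment and control households (hypothetical file/column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("household_livestock.csv")  # hypothetical survey extract

# Value per unit of livestock for each household
df["value_per_unit"] = df["livestock_value"] / df["livestock_count"]

treat = df.loc[df["treatment"] == 1, "value_per_unit"].dropna()
control = df.loc[df["treatment"] == 0, "value_per_unit"].dropna()

# Welch's t-test: does the mean value per animal differ between groups?
t_stat, p_value = stats.ttest_ind(treat, control, equal_var=False)
print(f"Treatment mean: {treat.mean():.2f}, Control mean: {control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice we would want to run this separately by livestock type and account for the survey design, but the basic comparison is that simple.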
In both discussions, but particularly the one on the baseline results, I got further insight into how policy and reality differ. We all know that what a program writes down as what it will implement and what will actually happen are never the same thing (yet another way in which work in economic development differs from medicine). This is why we spend time in the field talking to beneficiaries. And I usually ask multiple people within the program. But somehow, when the data is on the table, it helps to focus the mind. In one case it became clear that what we needed to know was how a specific complementary intervention (something standard for both treatment and control) was set up – and there were a number of different viewpoints. That discussion will definitely help inform the endline.
One other thing I did was to go and share some of the analysis with one of the folks who had managed the data collection for one of the surveys. Here again, useful insights were to be had – not least because he’s a curious guy and had apparently done some careful observation of what was going on while he and the teams were in the field. Indeed, he suggested an alternative theory for one of the results we were seeing.
All in all, it was an exciting week – giving a different kind of talk, one where the underlying methods weren’t a central topic but the conclusions, interpretations, and underlying data were very much front and center. I learned a lot from these discussions. And I also learned one other thing: when you’re giving a presentation, turn Skype off. Although…the IMs are good for a laugh.