I wanted to follow up on David’s post yesterday on the issue of sharing results with respondents. My initial reaction was that we kind of owe this to respondents, not least because they spent a lot of time answering our tedious questionnaires. But as David points out, it’s not quite that simple in cases where we expect to have ongoing work.
To add to his calculus, I think there is another case where sharing results can be mutually beneficial: where the respondents can help explain your results. In one of the papers I’ve worked on (and one that took a very long time to write), my coauthor took some of our initial (well, initial, then reworked and reworked some more) results to focus groups with our panel survey respondents. Those discussions helped us a) identify some key directions to pursue in the quantitative analysis and b) bolster the credibility (at least to us) of the results we were finding. (Postscript: the end story was somewhat more complicated, but within the spirit of what they were telling us.) Now this paper isn’t an impact evaluation, but presumably if we were to do this more with impact evaluations, we would gain insights into what’s behind the reduced form as well as possible additional outcome measures to consider. This could then suggest new avenues for analysis or, if our survey wasn’t quite long enough the first time, additional data collection.

There’s also another way this could pay off. How many times during a pilot or in-the-field background work have you discovered that program implementation is quite different from what they told you at HQ? Hearing from respondents is perhaps more powerful: you get the new avenues to pursue plus an additional reality check on implementation, but through the lens of the findings rather than the program rules.
To push David’s line of inquiry one step further, the same tension between sharing results and how this affects future responses and behavior also extends to sharing results with program implementers. Take a simple distinction between field staff and program management. Sharing results with field staff can definitely lead to some of the things David is worried about: they may change their behavior not only based on the results, but also now that they know the evaluation is real (and there is always the initial risk, which many talk about, that the announcement of an evaluation changes behavior). Moreover, I’ve heard of cases where field staff then nudge respondents on their answers to follow-up questions. All of this makes future stages of an evaluation much more complicated. But of course, the same potential benefits apply as with respondents: these folks can help you understand your results, and in addition, this can lead to on-the-ground program changes. Here the analog to David’s middle-of-the-road suggestion that a reduced set of results be shared would be to share all of the results with program management and then let them balance the potential trade-offs for future evaluation work against what they think the field staff need to know at that point in time.
Anyhow, to David’s list of questions I would add:
- Have you had experiences where sharing results with respondents (be they program participants or the comparison sample) got you significant insights into the effects of the program?
- How have you managed sharing results with program implementers, particularly in light of a continuing evaluation?