
Further thoughts on sharing results

Markus Goldstein

I wanted to follow up on David's post of yesterday on the issue of sharing results with respondents. My initial reaction was that we kind of owe this to respondents, not least because they spent a lot of time answering our tedious questionnaires. But as David points out, it's not quite that simple in cases where we expect to have ongoing work.

To add to his calculus, I think there is another case where sharing the results can be mutually beneficial: where the respondents can help explain your results. In one of the papers I've worked on (and one that took a very long time to write), my coauthor took some of our initial (well, initial, then reworked and reworked some more) results to focus groups with our panel survey respondents. What they told us helped us a) identify some key directions to pursue in the quantitative analysis and b) bolster the credibility (at least to us) of the results we were finding. (Postscript: the end story was somewhat more complicated, but within the spirit of what they were telling us.) Now, this paper isn't an impact evaluation, but presumably if we were to do this more with impact evaluations, we would get some insights into what's behind the reduced form as well as possible additional outcome measures to consider. This could then be used to suggest new avenues for analysis or, if our survey wasn't quite long enough the first time, additional data collection.

There's also another way this could pay off. How many times during a pilot or in-the-field background work have you discovered that program implementation is quite different from what they told you at HQ? This is perhaps more powerful: you get the new avenues to pursue, plus you add a reality check on implementation, but through the lens of the findings rather than the program rules.

To push David's line of inquiry one step further, the same tension between sharing results and the effect this has on future responses and behavior also extends to sharing results with program implementers. Take a simple distinction between field staff and program management. Sharing the results with field staff can definitely lead to some of the things David is worried about: they may change their behavior not only based on the results, but also now that they know the evaluation is real (in addition, there is always the often-discussed risk that the mere announcement of an evaluation changes behavior). Moreover, I've heard of cases where field staff then nudge respondents on the answers to follow-up questions. All of this makes future stages of an evaluation much more complicated. But of course, the same potential benefits apply as with respondents: these folks can help you understand your results, and this can also lead to on-the-ground program changes. And here the analog to David's middle-of-the-road suggestion that a reduced set of results be shared would be to share all of the results with program management and then let them balance the potential trade-offs for future evaluation work against what they think the field staff need to know at that point in time.

Anyhow, to David's list of questions I would add:

-Have you had experiences where sharing the results with respondents (be they program participants or the comparison sample) got you significant insights into the effects of the program?
-How have you managed sharing results with program implementers, particularly in light of a continuing evaluation? 

Comments

In some respects this discussion seems to be about 20-30 years behind the times, in that the merits of participatory approaches to development aid evaluations were discussed in depth decades ago. But the proposal to test the effects of more open and participatory processes on subsequent stages or iterations of an evaluation is perhaps something newer, and worthwhile.

I have one experience with sharing impact assessment data with the staff of a very large Bangladeshi NGO, in the mid-90s. This impact assessment was repeated about every three years. Rather than dumping all the results on the staff via a report or presentation, a staff meeting was convened at which we (consultants and evaluation staff within the NGO) started by posing questions to the staff, along the lines of, "Given this survey question that we asked the sampled households... which of these possible answers... do you think was the most common one given?" We then shared the actual survey results for that question. Then there was typically a very animated discussion, especially where the staff predictions of the survey results and the actual survey results differed. Some argued about the facts on the ground, and their implications for what the NGO should do next. Others argued about the survey question and methodology and whether that meant the results should be taken seriously or not. One thing I can remember is that there was a high level of engagement with the survey results, one which I suspect would have been hard to achieve by any other approach to sharing impact assessment findings.

Another merit of this more managed approach is that it enables you to test whether the survey results are themselves having an impact, in a fairly immediate sense. Are the survey results reinforcing "common knowledge" or challenging it? (Common knowledge here being the predictions the staff made about the likely survey responses.)


Excellent points, Rick, and many thanks for the food for thought. Clearly this changes "the program": so in the case of what looks like a long-term evaluation, you are evaluating something different in any given three-year period. Of course, we shouldn't expect a program to remain static over this period anyhow (or, if it did, would it be worth evaluating?). The interesting thing here is that in the later periods you are now evaluating a program with a built-in learning component (the evaluation), versus before, when it was just a program.