After conducting our surveys of young development researchers and their use of blogs, Berk and I emailed participants links to our blog posts on the results, as well as to the working paper. We received several replies thanking us for sharing the results, and asking wouldn’t it be great if the same were done for respondents in the average field experiment or survey. This is something I’ve struggled with, so I thought I’d share my thoughts and see what other people’s experiences have been with doing this.
For one-off surveys, the main issues entering the decision are typically the cost of delivering the results to participants and participants’ literacy/comprehension levels. Thus in my surveys of the top academic achievers in a number of countries (see paper on brain drain here), it has been easy to email news coverage, blog posts, and a link to the paper itself to the participants – and interesting to get their comments and feedback as confirmation that our interpretations of the data are valid. For large field surveys with less educated participants, sharing the results may require going back to the field and organizing community discussions or presentations – which may be prohibitively costly for many studies, but obviously has advantages both in getting feedback on findings and in sharing the results with those who provided the data. My sense is that doing so is rare.
The bigger issue comes in the context of panel surveys and field experiments. The concern here is that revealing the results of earlier rounds of the analysis may (a) affect how people respond to certain questions (e.g. if I find out that everyone else in my village gave a certain response, I may respond similarly in future rounds); and (b) change behaviors and actions going forward. The latter is the larger concern for many studies. For example, in my work on microenterprises in Sri Lanka we gave grants to a random subset of firms, and found very high returns to capital on average, but no returns for female-owned firms. I have been asked several times what firms told us when we informed them of these results. The answer is that we never have – even after ending the surveys after three years, we wanted to leave open the possibility of going back over a longer horizon (and indeed we recently did), and the concern was that telling firms what we had learned about returns to capital might affect their investment decisions going forward. Likewise, telling women that we had found no increase in profits from the grants given to women might have led women who got grants to overstate profits in future waves, in an effort to prove that giving such grants was beneficial.
In other cases, a middle ground may be possible, in which you share some of the less sensitive results with participants, thereby giving them some sense of how the information is being used, and some potential benefit from participating, without prejudicing the main questions of interest for future follow-up rounds. This is the option we used in our surveys of Tongans who applied to migrate to New Zealand through a migration lottery program – participants were given the following newsletter (English, Tongan), which explained why we would be coming back for another round of surveys and how some of the information from the previous round had been used. However, the newsletter did not stress the experimental context, nor did it discuss wages, income expectations, or changes in poverty – the more sensitive issues. These brochures were then mailed to participants or delivered in person at the time of the next follow-up round. This offers a nice way of approaching people for the next interview – you have something to show for what has been collected so far – and it is cost-effective, since the interviewer can deliver the newsletter while already visiting the participant.
Something similar has been discussed in the context of Enterprise Surveys: you could give firms a short, automatically generated report which benchmarks them against other firms in their sector on key financial and productivity indicators, as well as summarizing for them what other firms view as their main constraints. If there is no intention of revisiting these firms, promising this ex ante might be a useful way of encouraging participation. But if the intention is to follow up in a panel setting, one might worry that firms will change their behavior in response to receiving this information.
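To make the idea concrete, here is a minimal sketch of what generating such a benchmarking report might look like in Python. Everything here is an assumption for illustration – the firms.csv file, all of the column names, and the percentile-based benchmarking – none of it comes from an actual Enterprise Surveys tool.

```python
# Hypothetical sketch: generate a short benchmarking report for one firm,
# comparing it against the other surveyed firms in the same sector.
# Assumes a survey extract "firms.csv" with (made-up) columns:
# firm_id, sector, sales_per_worker, monthly_profits, top_constraint.
import pandas as pd

def benchmark_report(df: pd.DataFrame, firm_id: int) -> str:
    firm = df.loc[df["firm_id"] == firm_id].iloc[0]
    peers = df[df["sector"] == firm["sector"]]
    lines = [
        f"Benchmarking report for firm {firm_id} ({firm['sector']} sector)",
        f"Compared against {len(peers)} surveyed firms in the same sector.",
        "",
    ]
    for var in ["sales_per_worker", "monthly_profits"]:
        # Percentile rank of this firm among its sector peers
        pct = (peers[var] < firm[var]).mean() * 100
        lines.append(f"- {var}: {firm[var]:,.0f} "
                     f"(higher than {pct:.0f}% of firms in your sector)")
    # The constraints most often reported by firms in the same sector
    top = peers["top_constraint"].value_counts(normalize=True).head(3)
    lines.append("")
    lines.append("Constraints most often reported by firms in your sector:")
    for constraint, share in top.items():
        lines.append(f"- {constraint}: {share:.0%} of firms")
    return "\n".join(lines)

if __name__ == "__main__":
    df = pd.read_csv("firms.csv")  # hypothetical survey extract
    print(benchmark_report(df, firm_id=42))
```

A report like this could be generated for every firm in the sample in one pass and printed or mailed out, which is what makes the "promise it ex ante" idea cheap relative to returning to the field.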
This seems to me an area where best practices aren’t firmly established, and there is interesting scope for experiments going forward – testing, for example, whether and under what circumstances people and firms change their participation rates and responses when given this sort of information (this is a different issue from changes in responses and behavior caused by repeatedly asking particular questions, which is addressed in Jed’s recent post). But it also seems worth sharing experiences of what is being done in other surveys and experiments – so let us know:
· Have you shared results of a survey or an experiment with participants? If so, how was this done?
· Is there any work which shows how sharing previous survey round responses or experimental results influences subsequent reporting and behaviors?