
Should we share results with respondents?

David McKenzie

After conducting our surveys of young development researchers and their use of blogs, Berk and I emailed them links to our blog postings of the results, as well as to the working paper. We received several replies thanking us for sharing the results, and saying wouldn’t it be great if the same was done for respondents in the average field experiment and survey. This is something I’ve struggled with, so I thought I’d share my thoughts and see what other people’s experiences have been on doing this.

For one-off surveys, the main issues entering the decision are typically the cost of delivering the results to participants and participants' literacy and comprehension levels. Thus in my surveys of the top academic achievers in a number of countries (see paper on brain drain here), it has been easy to email news coverage, blog postings, and a link to the paper itself to the participants – and interesting to get their comments and feedback as confirmation that our interpretations of the data are valid. For large field surveys with less educated participants, sharing the results may require going back to the field and organizing community discussions or presentations – which may be prohibitively costly for many studies, but obviously has advantages both in getting feedback on the findings and in sharing the results with those who contributed to them. My sense is that doing so is rare.

The bigger issue comes in the context of panel surveys and field experiments. The concern here is that revealing results from earlier rounds of the analysis may (a) affect how people respond to certain questions (e.g. if I find out that everyone else in my village gave a certain response, I may respond similarly in future rounds); and (b) change behaviors and actions going forward. The latter is the larger concern for many studies. For example, in my work on microenterprises in Sri Lanka we gave grants to a random subset of firms, and found very high returns to capital, but no returns for female-owned firms. I have been asked several times what firms told us when we informed them of these results. The answer is that we never have – even after we stopped surveying after three years, we wanted to leave open the possibility of going back over a longer horizon (and indeed recently did), and the concern was that telling firms what we had learned about returns to capital might affect their investment decisions going forward. Likewise, telling women that we had found no increase in profits from giving them grants might have led women who received grants to overstate their profits in future waves in an effort to show that such grants were beneficial.

In other cases, a middle ground may be possible, in which you share some of the less sensitive results with participants, so that they get some sense of how the information is being used and some benefit from participating, without prejudicing the main questions of interest for future follow-up rounds. This is the option we used in our surveys of Tongans who applied to migrate to New Zealand through a migration lottery program – participants were given the following newsletter (English, Tongan), which explained why we would be coming back for another round of surveys and how some of the information from the previous round had been used. However, the newsletter did not stress the experimental context, nor did it discuss wages, income expectations, or changes in poverty – issues which may have been more sensitive. These brochures were then mailed to participants or delivered in person at the time of the next follow-up round. This offers a nice way of approaching people for the next interview, since you have something to show for what has been collected so far, and it is cost-effective because the interviewer can deliver the content during a visit she or he is already making.

Something similar has been discussed in the context of Enterprise Surveys: you could give firms a short, automatically generated report which benchmarks them against other firms in their sector on key financial and productivity indicators, and which summarizes what other firms view as their main constraints. If there is no intention of revisiting these firms, promising this report ex ante might be a useful way of encouraging participation. But if the intention is to follow up in a panel setting, one might worry that firms will change their behavior in response to receiving this information.
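
To make the idea concrete, here is a minimal sketch of how such an automatically generated benchmark report could be produced from firm survey data. This is not drawn from the Enterprise Surveys system; the column names (firm_id, sector, sales_per_worker, labor_productivity, top_constraint) are hypothetical placeholders for whatever a given survey actually collects.

```python
# Minimal sketch (hypothetical column names) of a per-firm benchmark report
# generated from a firm-level survey stored in a pandas DataFrame.
import pandas as pd


def benchmark_report(df: pd.DataFrame, firm_id: str) -> str:
    # Pull the firm's own row and the set of peers in the same sector.
    firm = df.loc[df["firm_id"] == firm_id].iloc[0]
    peers = df[df["sector"] == firm["sector"]]

    lines = [f"Benchmark report for firm {firm_id} (sector: {firm['sector']})"]

    # Benchmark the firm against sector peers on key indicators.
    for var in ["sales_per_worker", "labor_productivity"]:
        pct_below = (peers[var] < firm[var]).mean() * 100
        lines.append(
            f"- {var}: {firm[var]:,.0f} "
            f"(higher than {pct_below:.0f}% of firms in your sector; "
            f"sector median {peers[var].median():,.0f})"
        )

    # Summarize what other firms in the sector report as their main constraint.
    constraints = peers["top_constraint"].value_counts(normalize=True).head(3)
    lines.append("Most commonly reported constraints among firms in your sector:")
    for constraint, share in constraints.items():
        lines.append(f"- {constraint}: {share:.0%} of firms")

    return "\n".join(lines)
```

A report like this could be printed and handed to the respondent at the end of the interview, which is part of what makes the ex ante promise credible.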

This seems to me an area where best practices aren’t firmly established, and there is interesting scope for experiments going forward – experiments could test, for example, whether and under what circumstances people and firms change their participation rates and responses when given this sort of information (this is a different issue from that of changes in responses and behavior from repeatedly asking particular questions, which is addressed in Jed’s recent post). But it also seems worth sharing experiences of what is being done in other surveys and experiments – so let us know:

· Have you shared results of a survey or an experiment with participants? If so, how was this done?

· Is there any work which shows how sharing previous survey round responses or experimental results influences subsequent reporting and behaviors?

Comments

Submitted by Thom Fitzpatrick on
Instinctively I'd want to tell the subjects the results as a thank-you for their participation. However, I think your case is well made that this information can skew future surveys if you ever wish to return. Nevertheless, a knowledgeable participant may do some research and seek out your results online. I was wondering if anyone had tested the impact of informing subjects when, in a scientific trial, a placebo appears to be effective? Does it cease to be effective in future? Do the actual drugs become less effective?

Submitted by Anonymous on
I can understand wanting to be careful about giving people information that would compromise your follow up survey, but to me some of what you are describing here is unethical. It sounds like your results could help the Sri Lankan microentrepreneurs make better decisions, and I think it's wrong to withhold that information from them on the basis of the fact that you might want to do another follow-up survey someday. I would say that if researchers have results that they believe would actually improve the lives of poor people in developing countries, it should be considered unethical to deliberately withhold those results.

Thanks, this is certainly an issue to think about and part of my reason for posting this discussion. However, it is not obvious to me, for a couple of reasons: 1) Knowing that something improves lives over the short term and knowing that it does so over the longer term are different things, and the latter seems more important to share – so there is a tension between telling participants something today that may fade out quickly and taking the time to get a more complete story. 2) It seems far from clear that it is an accepted norm that one should incur the expense of going back to tell everyone who participates the results of a study – this may be prohibitive in some contexts, and sharing the results with governments, policymakers, and NGOs who can use them to design new policies may be a reasonable way of ensuring society benefits. Indeed, it is common practice in many surveys to say something along the lines of "the results of this study will help us inform policymakers about policies that might better help households (or firms) like yours". I certainly would like to hear more about what others have done, since this is an area that doesn't get discussed much.

Submitted by Anonymous on
It is common in AIDS research for scientists to share their results with communities; in fact, many investigators participate in community forums about their results in person or online. AIDS research has pioneered a partnership with patients and research subjects that has been beneficial to both sides: improving research and improving community understanding. Sharing information hasn't hurt the field; it has made it better. There is no reason this lesson can't be applied to your own work in development economics.

I'm not sure whether people are ever given placebos in AIDS research, given the ethical issues, but if so, I would doubt that participants are told they are on the placebo until it is time to halt the study. That is the analogy to the discussion initiated here: the point is not that it is never beneficial to share results with communities and participants, but that there are issues in doing so when sharing this information may alter the very behaviors and outcomes that you are trying to assess.

Submitted by Anonymous on
In AIDS research, and in most other clinical trial settings, you don't normally share results before the study is over. However, there are stopping rules and Data and Safety Monitoring Boards that review the data on an ongoing basis, in case the data point towards a strong result, positive or negative, sooner than expected, or in case serious or life-threatening adverse events show up. Again, I think development economists are reinventing the wheel a little bit. While not all the lessons from clinical trials and cohort studies in biomedicine are applicable to your work, there has been a robust discussion and a large literature on the issues you are raising here, which may still have some relevance for you.

Submitted by Anonymous on
I'm a PhD student and had the naive assumption (among many I held when starting to work in development) that results were always shared with participants. After doing work in an informal settlement, however, I found this wasn't the case at all (granted, my sample size was 1), whether the research was in medical science or economics. Initially I was indignant, since I felt researchers were obligated to share results, but you raise some good points in your blog. My research was in an urban area, so the costs of returning and distributing results are much lower, and it was a pilot study. I think another important question is what we expect from returning these results. The literature on using information to help people change their behavior is unclear: not in the surveying context you discuss, which I agree is an issue, but in terms of whether, once all is said and done, the information is useful to the population you're trying to help. Yes, I can tell firms about their productivity and so on, but unless they know how to use this information in a meaningful way, how useful would these results be? I plan on returning and distributing my results, in which case participants will at least receive a free lunch and perhaps become more aware of the issue, but I'm stuck on how, or even whether, they will use this information. I think there's an opportunity to bridge the gap between academia and the population it's trying to help. This goes outside the typical researcher's scope and will require more effort (perhaps linking with NGOs or government agencies), but I think it's a worthwhile effort, though this might be another naive assumption...