
Is it possible to re-interview participants in a survey conducted by someone else?


I recently received an email from a researcher who was interested in re-interviewing participants in one of my experiments to test several theories about whether that intervention had impacts on political participation and other political outcomes. I get these requests infrequently, but this is by no means the first. Another example in the last year was a researcher who had conducted in-depth qualitative interviews with participants in a different experiment of mine, and who then wanted to link their responses on my surveys to their responses in his interviews. I imagine I am not alone in getting such requests, and I don’t think there is a one-size-fits-all answer to when this is possible, so I thought I would set out some thoughts about the issues here, and see if others can also share their thoughts and experiences.

Confidentiality and Informed Consent: typically, when participants are invited to respond to a survey or participate in a study, they are told i) that the purpose of the survey is X, and that it will perhaps involve a baseline survey and several follow-ups; and ii) that all responses they provide will be kept confidential and used for research purposes only. These assurances make it hard to then hand over identifying information about respondents to another researcher.
However, I think this can be addressed via the following system:

  1. The new team seeks IRB approval for their study, and for approaching respondents about linking them to their earlier responses.
  2. The original survey team, or a reliable and independent third-party survey firm, is then hired and provided with the contact information and identification numbers of respondents. The informed consent statement for the new survey explicitly seeks permission from respondents to link their responses on this new survey to those in the original study.
  3. The survey company then collects the data and gives it to the new research team, using the IDs from the original study only for individuals who agreed to have their responses linked, and assigning new IDs to the rest.
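Purely as an illustrative sketch of step 3 (the field names and data below are hypothetical, not drawn from any actual study), the ID handling the survey firm would perform could look something like this: the original study ID is retained only where the respondent consented to linkage, and an unlinkable fresh ID is generated otherwise, with the consent flags stripped before the file is handed over.

```python
# Hypothetical sketch of consent-based ID relinking by the survey firm.
import uuid

def relink_ids(records):
    """Keep the original study ID only for respondents who consented to
    linkage; assign a fresh, unlinkable ID to everyone else."""
    out = []
    for rec in records:
        if rec["consented_to_link"]:
            new_id = rec["original_id"]          # linkable back to the original study
        else:
            new_id = "NEW-" + uuid.uuid4().hex   # fresh ID, severing the link
        # Only the ID and the new responses are passed on; consent flags
        # and original IDs for non-consenters are dropped.
        out.append({"id": new_id, "responses": rec["responses"]})
    return out

survey_data = [
    {"original_id": "A001", "consented_to_link": True,  "responses": {"q1": 5}},
    {"original_id": "A002", "consented_to_link": False, "responses": {"q1": 3}},
]
deliverable = relink_ids(survey_data)  # this, not survey_data, goes to the new team
```

The design point is that the new research team never sees which non-consenting respondent maps to which original ID, since the mapping is destroyed at the survey-firm stage.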
There is also the question of whether the original research team would need IRB approval of its own to share this information, if doing so constitutes a different use of the data from what was initially granted. If so, this would increase the burden on the original research team. It has also been suggested to me that privacy laws differ by country: what might be allowable in the U.S. may not be in Europe, for example, and laws may also differ by country of study.

Respondent burden: there are several things I consider here. First, going back to respondents who have already graciously given their time for previous studies does impose an additional burden on them. But the system above lets them decide whether or not to participate in the new study, weighing the incentives offered for participation. Second, in cases where the respondents can be independently found, linking the data may actually reduce respondent burden – rather than having two sets of surveys ask about family background, education, etc., the second research team can simply reuse the responses from the first survey.

Plans of the original research team to return: If longer term follow-up studies are planned by the original research team, then this may be another reason to avoid survey fatigue now, and it may make more sense to include a module from the new research team and do some cost-sharing and/or collaboration. Moreover, it is then likely to be a lot easier to modify an open IRB to add new data collection and new team members than to get permission to share participant data from one study with another independent study.  If the longer-term survey is not planned for several years, rather than adding a module, then this new survey may help update contact details on participants and make it easier for the original team to return later.

Co-authorship: I don’t think there should be an automatic presumption that the original research team would need to become co-authors on this new study, but i) involving someone from the original team is likely to make it a lot easier to deal with the issues of sharing identifying information and surveys; and ii) the amount of investment and time taken to create the original survey may make it seem fair to include people, or their knowledge of the specific context and intervention may add a lot of value. However, one issue is that many field experiments and surveys already have several authors, and so if the new planned study also does, then this might end up with five or more authors – which doesn’t make this type of collaboration so attractive. This will need to be negotiated case by case; sometimes it may make sense for the one person from the original team who was most closely involved in the fieldwork to collaborate with the new study, or at least for the new study to use the same field coordinator/survey team. In discussing this topic, several people noted that they see involvement of someone from the original team as the only way in which they could see re-interviewing occur.

Value for science: one of the factors I would take into account is how useful/important I think the resulting research would be. From the outside researcher’s point of view, one of the huge challenges in answering many research questions is finding an exogenous shock large enough for its impacts to be measurable. Many experiments end up struggling with implementation or find very small effects. So if someone has done an experiment and found a huge effect in one domain, it will be of obvious interest to think about other changes one might observe from this same set-up.

Government/NGO program or researcher-led experiment: In government/NGO policy experiments there is an additional party involved. This can work towards more or less sharing. On one hand, outside researchers may be able to make their case to the government or NGO on why this research is useful, and get the list of participant names and contact details directly from them. It may also then become feasible (if inefficient) to just go door-to-door in program areas and use a screening survey to identify who applied for a program and who participated in it. In these cases, the additional data collection may then happen anyway, so it becomes a question of whether records can be linked. But conversely, the government or NGO may have agreed to an evaluation when the set of outcomes being looked at were mutually agreed, but then be uncomfortable with a different research team coming back to measure more sensitive outcomes. IRB issues can also be different for government evaluations.

Feasibility of re-contacting: all of this assumes that it will be feasible to find and re-interview participants in the original study. But the more time that passes, the more people have moved or firms have died, the more phone numbers have been changed, etc. So even if the above issues have been worked out, the attrition rates might be high for some such attempts. Moreover, in some cases the original IRBs may require all identifying information to be deleted when the study ended, in which case it may be completely infeasible.

Precedents? While there are some nice examples of people re-using data from other people’s experiments to explore new ideas, I have struggled to think of many good examples where an independent research team has revisited a sample first collected by someone else. There are cases where graduate students have gone back to samples of their advisors, but few precedents to point people to.  In discussions around this blog post I’ve been pointed to:
  • A paper by Kate Baldwin and Rikhil Bhavnani (2015) on what they call ancillary studies of experiments, which discusses some of these issues – one point they bring up, which Berk also noted to me, is guarding against fishing for outcomes from a previous intervention.
  • They cite as an example work by Nancy Hite-Rubin, who as a PhD student at Yale revisited subjects from an experiment by Karlan and Zinman which had expanded access to finance in Manila, with her focus on political outcomes.
  • Liesbet Steer and Kunal Sen have a World Development paper that in part uses a 2004 re-survey they did of firms that John McMillan and Chris Woodruff had surveyed in Vietnam in 1995. This was an independent effort, by researchers at different institutions without the advisor–grad student relationship. Footnote 16 of the paper notes the difficulty: “Since the MW survey was carried out in 1995, even though we had some record of company names and addresses, it was much harder to relocate the firms in the sample. Of the 259 firms we managed to identify 111 firms, of which 61 agreed to collaborate in the survey.”
  • And here is James Burton’s write-up of his study in Nigeria, in which he conducted qualitative interviews with 42 firms from my business plan study after getting their contact details from the government, and requested participants’ permission to link their responses to those on my surveys.
Please share in the comments if you have good examples, or other thoughts on the points discussed above.

Thanks to Sarah Baird, Kate Baldwin, Jessica Gottlieb, Chris Woodruff, Berk, and Markus for helpful comments as I prepared this. All views are of course my own.

Comments

Submitted by Paul Christian on

My memory of Owen Ozier's paper on the long-term effects of child deworming was that he went back and re-interviewed students from schools that participated in Kremer and Miguel's trial from their 2004 Econometrica paper.

Submitted by Owen Ozier on

On the Ozier deworming study: yes and no.

Owen here.  Thanks for that comment, Paul!  Very relevant.  On whether I "re-interviewed," yes and no: I did go back to the same communities as Miguel and Kremer; doing so (and analyzing the results) required having the mapping between communities and treatment arms.  So I needed the community names, definitely.  I didn't need the original (2004 Econometrica) students' names, though, because my survey team interviewed everyone who could have been younger siblings and neighbors of the original students, simply by virtue of being in the relevant age range.  My team did not try to re-interview students who participated directly in Miguel and Kremer's study, though we did go to the same schools. (Most kids in primary school in 1998 were long gone from primary school by 2009 anyway.)  The passage from the current version of the paper:

"Unlike Miguel and Kremer (2004) and the follow-up study by Baird, Hicks, Kremer and Miguel (2016), I follow a different, younger cohort of respondents. ... I gathered data in 2009 and 2010 in order to compare children who were in their first years of life at the time that treatment started..."

Link to the most updated (pre-publication) version of my paper is here:

http://economics.ozier.com/owen/papers/ozier_early_deworming_20170718.pdf

The older Bank working paper version is here:

http://documents.worldbank.org/curated/en/236591468341338819/Exploiting-externalities-to-estimate-the-long-term-effects-of-early-childhood-deworming

I hope that clears it up.  Definitely a variation on the theme that David is discussing; follow-up studies may involve the same study sites but potentially different respondents, as this example (intended to capture spillovers rather than direct effects) illustrates.

Best,
Owen
