I’ve recently been doing some work with my team at the Gender Innovation Lab on data we collected that was interrupted by conflict (and by conflict here I mean the armed variety, between organized groups). This got me thinking about how doing an impact evaluation in a conflict situation is different and so I reached out to a number of people - Chris Blattman, Andrew Beath, Niklas Buehren, Shubha Chakravarty, and Macartan Humphreys – for their views (collectively they’re “the crowd” in the rest of this post). What follows are a few of my observations and a heck of a lot of theirs (and of course the mistakes and missteps are all mine).
1. Do your background reading.
Beyond this post, there is some good background material out there. Marie Gaarder and Jeannie Annan have a nice overview paper which includes a good set of questions to ask yourself ex ante, as well as an explanation of why this kind of work is important. And Chris pointed out that he’s done a couple of posts on the topic – one questioning whether students should go into war zones (which also includes some basic questions to gauge the research), and another which gives a wide range of advice on research in conflict zones.
2. To go and research or not?
One point that was made by more than one of the crowd was that the decision to do research in a conflict-affected area has to be more carefully and deeply considered than the decision for your average research project. A lot more carefully – as Chris put it, this is no place to be blithe.
Macartan pointed out that the key question is how much risk you are willing to bear. The folks doing your survey will be at above-average risk of abduction, harm, and even death. Often there is no clear information on what the risks are or where they lie. And once you minimize these risks (see below), you are not off the hook. These risks have to be weighed against the potential benefit of the research. This has to be work that people beyond yourself and your immediate circle think is important and adds significant value. (It may be worth revisiting the Belmont Report, and especially the principle of justice, to help frame this decision.)
3. Setting up (aka planning for some bad stuff)
Let’s start with the human side. When going into an area with potential conflict, make sure the survey team and anyone with them will be safe. This is going to require some significant intelligence gathering beforehand – through whatever contacts you can muster (and being on the ground yourself for this part helps). And once the team is heading out, make sure you have a clear plan for evacuation in case things flare up. Finally, if you are going to contract out data collection, make sure that your contract with the survey firm not only has a conflict-related contingency, but also makes them specify, ex ante, security protocols and an evacuation plan.
On the data side, you are going to want to plan for things going wrong (e.g., conflict reigniting or spreading to your research area). This is particularly true if conflict intervenes between the baseline and follow-up surveys. Dislocation is often significant. So your analysis will have to be built for this possibility (and it’s another reason to hold off on the pre-analysis plan until after the data is collected, and maybe even cleaned).
One concrete suggestion on how to structure things came from Andrew, who pointed out that pair-wise matching, with one community assigned to treatment and one to control, can help minimize the loss of power when conflict resurfaces in a localized way. On the flip side of power, Nik made the point that it’s good to remember that you can do an RCT without a baseline – which gives you one less survey (but more respondents) to worry about. In the end, even if you start out with the idea of doing a panel, it’s good to have a research design that won’t fall apart if the panel gets disrupted.
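To make the pair-wise matching idea concrete, here is a minimal sketch in Python of matched-pair assignment. The community names, the single matching covariate, and the seed are all hypothetical placeholders, not taken from any of the studies mentioned here:

```python
import random

# Hypothetical communities with one matching covariate (baseline population);
# the names and values are purely illustrative.
communities = [
    {"name": "A", "pop": 1200}, {"name": "B", "pop": 1250},
    {"name": "C", "pop": 3400}, {"name": "D", "pop": 3300},
    {"name": "E", "pop": 560},  {"name": "F", "pop": 610},
]

random.seed(2024)  # fix the seed so the assignment is reproducible and auditable

# Rank on the covariate, form adjacent pairs, then randomize within each pair.
ranked = sorted(communities, key=lambda c: c["pop"])
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

assignment = {}
for pair in pairs:
    treated = random.choice(pair)
    for community in pair:
        assignment[community["name"]] = "treatment" if community is treated else "control"

print(assignment)
# If conflict later makes one community unreachable, drop its whole pair;
# the remaining sample keeps its treatment-control balance.
```

The payoff in a conflict setting is in that last comment: localized flare-ups cost you pairs, not an unbalanced mix of treatment and control communities.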
4. Collecting your data and planning the analyses
Shubha and Nik had these concrete pointers on collecting data. In a conflict-affected setting, it is even more important than in a normal survey to:
- Collect multiple means of contacting respondents for follow-up (because of displacement)
- Ensure confidentiality and safe storage of data, whether digital or paper, as respondents may be especially nervous about their personal information getting into the wrong hands.
- Think twice about whether you really need to collect sensitive information (e.g., tribal affiliation, experience of GBV) and get local advice about appropriate ways to ask about these. If it gets around that you are asking these kinds of questions, and people start refusing to participate in your survey and/or your survey gets shut down by some kind of authority, was it worth it to sacrifice your response rate in order to collect these variables?
- Andrew summarized this last point as “don’t ask stupid questions.” Of course, what’s stupid might not be obvious, so he suggests commissioning the advice of several people (including both locals and foreigners). This might end up limiting the questions you can answer, but it’s going to avoid a shutdown of your survey and reduce the chances that your enumerators resort to fabrication.
- Randomize the order of data collection across treatment and controls (a minimal sketch of what this can look like follows this list). While this is also a good principle in general, it’s particularly important in conflict settings. Conflict breaking out during the follow-up survey could jeopardize your ability to capture significant swathes of either the control or treatment population. You also want to think hard about how the treatment could have affected the local security situation – and thus induced attrition related to the treatment.
- Can you collect your data remotely? Macartan pointed out this (gated) example. This would also be a context where remote sensing (e.g. satellites to measure yield) may be more attractive than collecting data in the field.
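On the point above about randomizing the order of data collection, here is a minimal sketch of interleaving the fieldwork schedule across arms; the respondent IDs and the cut-off are hypothetical placeholders:

```python
import random

# Illustrative respondent IDs; in practice these come from your sample frame.
treatment_ids = [f"T{i:03d}" for i in range(1, 101)]
control_ids = [f"C{i:03d}" for i in range(1, 101)]

random.seed(7)

# Pool both arms and shuffle, so the fieldwork schedule interleaves treatment
# and control rather than finishing one arm before starting the other.
fieldwork_order = treatment_ids + control_ids
random.shuffle(fieldwork_order)

# If fighting cuts the survey short after, say, 120 interviews, both arms are
# still represented in roughly equal proportion among the completed cases.
completed = fieldwork_order[:120]
print(sum(r.startswith("T") for r in completed), "treatment interviews completed")
print(sum(r.startswith("C") for r in completed), "control interviews completed")
```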
A related point is thinking about how to maintain data quality control. More than one of the crowd pointed out that post-conflict, but still rocky, situations tend to have a concentration of cowboy (in the bad sense) data collection outfits. And this is enabled by the broader problem that it may be difficult or impossible for the researcher or field coordinator to travel to places to check on actual survey implementation. There are a couple of options that could help with this:
- Use the data quality checks that come with CAPI (a sketch of a simple daily check follows this list). Do frequent/daily checks of the data coming in, and flag and interrogate the observations that look odd. Use the GPS and time stamp metadata that come with CAPI-collected data. However, you’ll want to be careful about audio recordings of interviews, as these can run afoul of people’s suspicions – which will be running high.
- Shubha pointed out that you could hire a third party to monitor the data collection (a firm to monitor the survey firm), but you are walking a fuzzy line on sending other people into a situation where you yourself wouldn’t go.
- Andrew stressed the importance of interlocutors or chaperones who go with the enumerators and facilitate community entry. This is true in almost every case, but in conflict settings, where enumerators can be accused of being spies (or worse), it’s critical. Moreover, given the likelihood of a situation in flux, it may be quite hard to figure out which authority (or, more likely, which authorities) has the power to allow or deny community entry. So it’s critical that your community-entry team is not only savvy but well connected as well.
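As a rough illustration of the CAPI checks mentioned in the first bullet, here is a sketch of a daily screening script. The column names (duration_min, gps_lat, gps_lon, enumerator_id) are assumptions about what your CAPI export contains, and the thresholds are arbitrary examples you would tune to your own survey:

```python
import pandas as pd

def flag_suspect_interviews(df: pd.DataFrame) -> pd.DataFrame:
    """Return the subset of today's CAPI submissions that deserve a follow-up call."""
    flags = pd.DataFrame(index=df.index)
    # Implausibly short interviews are a classic sign of fabrication.
    flags["too_short"] = df["duration_min"] < 15
    # Submissions without GPS metadata cannot be checked against the sample frame.
    flags["no_gps"] = df["gps_lat"].isna() | df["gps_lon"].isna()
    # Enumerators filing far more interviews than the team median deserve a closer look.
    per_enumerator = df.groupby("enumerator_id")["enumerator_id"].transform("count")
    flags["high_volume"] = per_enumerator > 2 * per_enumerator.median()
    return df[flags.any(axis=1)]

# Usage: run this each evening on the day's submissions and resolve every flag
# before the team goes back out.
# suspect = flag_suspect_interviews(todays_submissions)
```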
You are going to want to measure exposure to conflict for your analyses later:
- The starting point is to think about how conflict exposure interacts with treatment – is it a complement, or does it lower impacts? On the flip side, is there potential for the treatment to help offset some of the negative impacts of conflict? (A regression sketch follows this list.)
- One good place to collect data is in your impact evaluation surveys. Shubha and Nik point out that the LSMS-ISA has a helpful guide on potential questions to use. Don’t forget to collect the data in your baseline. Indeed, work on the effects of conflict independent of treatment could be a valuable additional paper (and maybe attract extra funding).
- Another place to collect data is from the folks monitoring the conflict. They’ll often have very detailed, georeferenced data. The trick is getting them to share it, especially if the conflict is still simmering.
- Conflicts tend to be pretty diverse (as opposed to, say, natural disasters). This is going to limit your external validity (but not the importance of your evaluation!). Here, some political understanding of what is going on, as well as some political science theory, is going to be important for discussing how the effects you are observing may or may not extend to other settings (e.g., through some careful, structured speculation).
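To illustrate the treatment-conflict interaction from the first bullet, here is a hedged sketch on synthetic data. The variable names, the community-level assignment, and the clustered standard errors are placeholders for whatever your own design and measurement actually call for:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, purely illustrative data: treatment assigned at the community level,
# conflict exposure measured at the respondent level.
rng = np.random.default_rng(0)
communities = pd.DataFrame({"community_id": range(40),
                            "treated": rng.integers(0, 2, 40)})
df = pd.DataFrame({"community_id": rng.integers(0, 40, 800),
                   "conflict_exposed": rng.integers(0, 2, 800)})
df = df.merge(communities, on="community_id")
df["outcome"] = (0.3 * df["treated"]
                 - 0.4 * df["conflict_exposed"]
                 - 0.2 * df["treated"] * df["conflict_exposed"]  # conflict erodes impact here
                 + rng.normal(0, 1, len(df)))

# Interact treatment with conflict exposure; cluster standard errors by community.
model = smf.ols("outcome ~ treated * conflict_exposed", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["community_id"]})
print(model.summary())
# A negative treated:conflict_exposed coefficient says conflict dampens the
# program's impact; a positive one says the two are complements.
```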