Published on Development Impact

Impact evaluations in the time of Covid-19, Part 1

At Development Impact we’ve been trying to wrap our heads around what Covid-19 means for researchers conducting evaluations in the field. As such, we thought it would be useful to do a series of posts on different dimensions. Today’s post focuses on the immediate response in terms of what the virus might mean for both interventions and ongoing surveys. Later posts will focus on changing your research design and survey methods (we will revisit the efficacy of phone surveys).  

 

First of all, there is a fair amount of thoughtful guidance out there. Annie Duflo lays out IPA’s response in this blog, and Iqbal Dhaliwal provides J-PAL’s heartfelt approach here.   

 

Both of us manage teams that are doing a fair number of field-based evaluations, and we thought it would be worth laying out a couple of thoughts. We’ll start with the more general ones. 

 

  1. Do no harm. In the current context this means not propagating the spread of the virus. If you are at the stage in your impact evaluation where an intervention is happening, let the professionals decide what to do with the intervention. If you’re running it yourself, it’s probably best to stop. In terms of a survey, if you are about to go to the field, it’s probably best to delay. If you have a team in the field, it’s probably best to stop. If you can’t, or if conditions haven’t worsened yet, there is some guidance from ID Insight here. But, as of this writing, the WHO has confirmed cases in 196 countries or territories.  
  2. Communicate with your funders.  A number have reached out quickly and in our experience they have been flexible and understanding.
  3. Make sure your team will be OK. These are scary, uncertain times, so do what you can to make things better. 

 

Now your project isn’t going to be the same. We’re going to tackle this in two pieces - data collection and the intervention/research question.  

 

What may happen to your data collection

 

In terms of data collection, one obvious move would be to switch to a slimmed down phone survey.  J-PAL has a blog post on resources.   It’s important to keep in mind that this is a significantly imperfect substitute for most projects. However, as field operations are halted, phone surveys may be crucial in documenting intermediary outcomes along the causal chain (see more on monitoring below). And we will be revisiting the pros and cons in a forthcoming post.  In the meantime, from the archives, there are a number of posts from DI that might be helpful:

    • A discussion of a nicely done methodological experiment with firms can be found here.
    • Experience with using phone surveys during the Ebola crisis can be found here.
    • Some lessons on using phone interviews for high frequency data can be found here.
    • And last but not least, some lessons on how to reduce attrition in phone surveys were discussed here.

Another option would be to delay data collection altogether.  You are likely to be in better shape if you were about to go for a baseline. This post gives us some thoughts on the pros and cons of investing in baselines.   

 

And, if you are doing lab-in-the-field work, Busara has this nice compilation of ideas and options.

 

As the pandemic prevents researchers from organizing group meetings, and as most countries have instituted work from home, this poses additional challenges; we will blog and propose ideas on that in the coming weeks. 

 

What may happen to your intervention

 

Your intervention is likely to change. The key thing will be to keep up with what’s going on and adapt (as much as possible) the research to the changes.  If implementation is continuing (e.g., because this is a fundamental part of a safety net or vital infrastructure) then be prepared for the implementation modalities to shift. Some implementation is just going to be flat out delayed. Some thoughts:

  • If your intervention entails community-level targeting, it may become impossible to run this targeting exercise in the control group. Here we have two suggestions, based on recent experiences: 1/ the targeting could be done remotely across the whole study sample, through communication with community leaders; 2/ if the crisis triggers a deployment of emergency relief across the whole study area in addition to the original program you were working with, this may provide an opportunity to deploy similar targeting procedures across treatment and control for the intervention you are evaluating. It could also be an opportunity to distribute cellphones to subsamples of households in control and treatment villages. 
  • If implementation is already substantially underway, or already finished, one thing you can do is to start thinking about how the intervention may interact with the spread of the virus. Together with colleagues, Markus did this in the context of an adolescent girls intervention during the Ebola crisis (blog post here).  A couple of thoughts based on that experience:
    • It’s important to think about what the containment measures might do. In the case of Ebola, containment was fairly effective, and from the perspective of this intervention the big differential impacts came from the fact that schools were closed, markets were closed, and health centers shifted their focus to Ebola prevention and care. This meant that there were a lot of out-of-school, non-working teenagers without access to reproductive health services. Of course, the current crisis is global, not local, in scope, so there are a lot more dimensions to think about. 
    • Think about different sub-populations that may be affected differently (e.g. in the case of Ebola, health workers died disproportionately). 
    • Be agile in your response. In this case, the team did an interim phone/monitoring survey and followed that up with a much more in-depth in-person interview when the situation allowed for it. 
    • Monitoring data is more important now. If implementation is going to shut down, it’s important to find out how much has been done. The provider may or may not have this data (depending on how they were planning to monitor things). You may need to find a way to do some remote monitoring/checking in to get a clear sense of what has happened so far so that when things resume you’ll have a better understanding of what people’s experience of the intervention actually was. This is hard data to get retrospectively. 

 

Anyhow, we will be posting more as things evolve and people (including us) get more of a sense of how things are changing. Stay safe out there!

 

We realize that this pandemic is a rapidly changing situation. This blog post contains research discussions, and should not be taken as reflecting any official view of the World Bank. Please see https://www.worldbank.org/en/who-we-are/news/coronavirus-covid19 to learn about how the World Bank is helping countries to respond.

 

 


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist’s Office

Florence Kondylis

Research Manager, Lead Economist, Development Impact Evaluation
