Here is a slightly anonymized request that captures a set of questions many researchers are asking themselves at the moment:
Scenario: We recently finished midline data collection (7 to 8 months post baseline) and the intervention was going to continue for the next 5 months. We have short-term results from our midline, including on our main outcomes, and some mechanisms. But we had planned to collect an endline this summer, which would focus on persistence of main effects and also additional mechanism questions. Now, however, the country is on lockdown, and there is a lot of uncertainty about whether we can (or should) continue the project. Two key questions we are facing:
(1) In terms of a research paper, is it enough to have only the midline (post 7-8 months) results?
(2) Is it worth the risk (in terms of spending project funds) to try to keep the intervention going on in the background for another 5-6 months, or should we just stop now?
This is a scenario we have faced before during other crises, and one we are also facing in some of our current projects. We thought we’d give some examples of how this has been handled in the past, and of the range of factors to consider in determining your response.
1. What is the intervention, and do the key mechanisms through which it will lead to the desired outcomes still operate?
If your intervention is intended to help firms grow or job-seekers find jobs, then even if you could do a follow-up survey in the summer, it may make no sense to do so. When the whole economy is shut down, then even if your intervention was really good, there are no jobs and no customers, and your follow-up isn’t going to show main effects. This was the case in work David did on youth internships and firm matching grants in Yemen: the first of two planned batches had received the intervention, and a phone midline survey was rushed out just as the civil war was about to break out; the papers were then written up and published with just these results. Alternatively, you could push back the planned endline until enough time has passed for the economy to start recovering, and hope that your intervention has long-enough-lasting effects that you can see whether it still helps people find jobs or grow their firms in the recovery.
On the other hand, you could have an intervention that is much less affected by the crisis. For example, maternal health services do not stop during a pandemic – women will still give birth, need prenatal care visits, receive post-partum care, come back for child vaccinations, etc. The timing of these health services might change and their frequency decline, but whatever interventions were being implemented to improve these services would continue – perhaps with a smaller number of clients. This is the case in a study Berk is conducting at a women’s and children’s hospital in Yaoundé, Cameroon. The intervention, which aims to improve the take-up of modern contraceptives among young women by offering improved counseling and random price discounts to clients, does not require the involvement of outsiders: it is run during the normal course of business at the hospital by nurse counselors, who use tablets to counsel women. The study uses administrative data from the tablets, which are uploaded to the cloud. So, in this case, there is no reason to shelve the study: the intervention and data collection are part of ‘business as usual’ at the hospital. Because the study has rolling enrollment (clients become part of the study as they seek family planning services at the hospital), it may take longer to reach the target sample size if fewer clients come in for services during the pandemic, and the study might even pause as needed.
2. Can you distinguish whether your intervention lacks its intended effects because the intervention wasn’t very good, or because the external economic environment is preventing any effect?
Suppose your intervention was to provide new skills for job-seekers (e.g. soft skills or vocational skills) or for firm owners (e.g. better account keeping, better marketing practices). Then if you find no effect on employment, income, or profits, the reader will want to know whether this is because your training wasn’t successful in building skills, or because these skills had no return in a shutdown economy. This is where measuring mechanisms well comes in – perhaps your midline at least allows you to demonstrate that the intervention was effective in building skills – and then you could possibly use a surrogate index approach to estimate what would have been expected to happen to income or profits in normal times, and compare that to what did happen.
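To make the surrogate index idea concrete, here is a minimal sketch using simulated data. The setup, effect sizes, and variable names are all hypothetical: we assume a historical (pre-crisis) sample links midline skill measures (the surrogates) to later income, so a model fit on that sample can predict what income would have been in normal times from the experiment’s midline skills; comparing mean predicted income across treatment and control gives the surrogate-index estimate of the effect.

```python
# Hedged sketch of a surrogate index estimate on simulated data.
# All data and effect sizes below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# --- Historical (pre-crisis) sample: surrogates AND long-run outcome observed ---
n_hist = 1000
skills_hist = rng.normal(size=(n_hist, 2))  # e.g. soft-skill and vocational scores
# Assumed "normal times" returns to skills: 8 and 5 income units per SD
income_hist = 100 + skills_hist @ np.array([8.0, 5.0]) + rng.normal(scale=10, size=n_hist)

# Fit the surrogate index: E[income | surrogates] in normal times (OLS)
X_hist = np.column_stack([np.ones(n_hist), skills_hist])
beta, *_ = np.linalg.lstsq(X_hist, income_hist, rcond=None)

# --- Experimental sample: only midline surrogates observed (endline lost to crisis) ---
n_exp = 500
treat = rng.integers(0, 2, size=n_exp)
# Suppose training raised both skill scores by 0.3 SD (illustrative effect)
skills_exp = rng.normal(size=(n_exp, 2)) + 0.3 * treat[:, None]

# Predicted long-run income under normal conditions, then compare arms
income_pred = np.column_stack([np.ones(n_exp), skills_exp]) @ beta
ate_surrogate = income_pred[treat == 1].mean() - income_pred[treat == 0].mean()
print(f"Predicted 'normal times' effect on income: {ate_surrogate:.1f}")
```

In this simulation the predicted effect is roughly 0.3 × (8 + 5) ≈ 3.9 income units. In practice the approach rests on strong assumptions, notably that the surrogates capture the full channel from treatment to the long-run outcome and that the historical outcome-surrogate relationship carries over to the experimental sample.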
On the other hand, the intervention may become even more relevant during the pandemic, which may even boost its effects. For example, in a study of the effects of group-based talk therapy for adolescent females in Uganda, Sarah Baird and Berk are working with BRAC Uganda and StrongMinds Uganda on a cluster-RCT that provides therapy alone or therapy plus lump-sum cash. The therapy intervention ended in December 2019, and the cash has already been distributed. They are interested in the longer-term effects of the program, namely whether the provision of cash at the end of the 14-week therapy sessions will cause more sustained reductions in anxiety and depression than therapy alone. It is true that they will most likely have to shelve the midline data collection that was going to take place 12 months after baseline (around August-September 2020) and only do an endline at the 24-month follow-up (same period 2021 – fingers crossed). However, since they have already conducted a rapid survey and have evidence on the short-term outcomes and potential mechanisms, skipping midline data collection is a loss, but not a devastating one for the study. Since the main interest is in the sustained rather than short-term effects of the interventions anyway, they can wait. Furthermore, the therapy and cash may have come at a time when beneficiaries need them more than usual.
3. Is there a way to pivot your intervention to make it more applicable to the new situation?
If you are partway through your intervention, or have some flexibility to add onto it, can you pivot the intervention in a way that makes it more applicable to the new scenario? This is what David did in prior work in Egypt. He had conducted baseline surveys for an intervention that was designed to measure the impacts of microfinance expansion. But after the Egyptian revolution, the considerable macroeconomic uncertainty made lenders reluctant to expand and borrowers cautious about borrowing. So he and his co-author worked with a partner microfinance institution to change the intervention into one that offered insurance against this risk, and measured the impact of this macroinsurance on microenterprise behavior.
4. Even if the intervention won’t work in the way intended, might it now have impacts on other outcomes?
Maybe your intervention changes the way people are affected by the crisis, or how they respond to it. Then an intervention that was intended to measure impacts in one domain may now instead want to look at outcomes in other aspects of life. For example, Markus blogged here about his work on a girls’ empowerment program in Sierra Leone, and how it was affected by the Ebola crisis. They find that the quarantine led adolescent girls to be out of school and to spend much more time with men, with an increase in out-of-wedlock pregnancies – but that their program helped mitigate these impacts. In ongoing work, David and co-authors are currently looking at whether a program to teach youth personal initiative and negotiation skills affects how they spend their time when schools are closed. Back in 1975, T.W. Schultz wrote “the ability to deal successfully with economic disequilibria is enhanced by education and … this ability is one of the major benefits of education” – so if your intervention is changing skills of some form, you might want to learn whether it changes how people deal with the crisis.
You could also add some questions to your ongoing data collection to help with your research or with implementing partners’ concerns. For example, it would be very easy for the nurse counselors in Yaoundé to also collect temperature and symptom data (dry cough, shortness of breath, etc.) for COVID-19 during the course of family planning counseling sessions: they already have a medical check section in their counseling routine that collects blood pressure, height, and weight data and asks about various medical conditions. The marginal cost of this would be close to zero, and the daily digital data might help track the virus in the hospital’s client population and alert authorities to possible clusters.
5. Timelines and back-up plans
Your incentives to wrap up a study as is, pivot the intervention, or wait and hold out for the promise of potentially longer-term results to look at when the economy recovers will depend heavily on your time horizons. Often funding is a constraint on timelines, but funders have typically been generous and sensible in extending deadlines for spending funds when a crisis hits, and some funders have been opening windows for top-up funding to cover additional expenses. The bigger constraint is likely to be career timelines. If you are a tenure-track researcher (and your university hasn’t extended your clock), or a PhD-student, the idea of waiting another year or more to measure impacts you planned to measure soon might be untenable. In contrast, others may be fine with hitting pause, and waiting to resume interventions in 6-12 months might be the best approach.
This relates to something David blogged about a long time ago: there are many ways impact evaluations can go awry, so it is always a good idea to plan more than one paper out of them, especially research that doesn’t depend on your intervention going exactly as planned. So now might be the time to work on using your baseline survey for another purpose.
6. Don’t just add to your file drawer, and editors and referees be kind
Even if you decide that it is time to fold your study, and you only have some short-term results showing null effects, please go ahead and write up the results, even if it is just a short note. It is still useful for the broader body of research evidence to document these findings. And as editors and referees, while we often want to see longer-term impacts to show the sustainability of interventions, we should also recognize for which types of interventions that expectation makes sense right now, and for which it doesn’t.