The International Rescue Committee's approach to impact evaluation

Our series on institutional approaches to impact evaluation continues! DI recently sat down virtually with Jeannie Annan, Director of Research and Evaluation at the International Rescue Committee.
DI:  What is the overall approach to impact evaluation at the IRC?
JA:  We are committed to providing (or supporting) the most effective and cost-effective interventions. This means using the best available research about what works combined with understanding of the context and experience in implementation to design and deliver our programs.  
Where there is little research, we are committed to contributing to the evidence base and are driven by the questions that we think are the most important. Many of those are impact questions and therefore lead to conducting impact evaluations. However, we are also interested in questions about how and why change happens (or does not). Impact evaluations are therefore one piece of our learning.
DI:  How do you decide what gets an impact evaluation?
JA:  We have learning priorities in each of our sectors which essentially add up to an organizational research agenda. In an agency-wide effort, we are currently identifying the most important and actionable questions around which we will prioritize our research investments for the next 5 years.   As questions are defined, we look for opportunities to implement projects and evaluations to answer them.
DI:  Given that the IRC often works in emergency situations, how does this limit (or expand) what kind of impact evaluation work you can do?
JA:  The IRC works in a range of contexts including acute emergencies, protracted conflicts and post-conflict development. In some of the more protracted conflict settings, like eastern DRC and the Thai-Burmese border, we are able to conduct impact evaluations because there are areas that remain relatively stable. However, we continually assess security and analyze how conducting an impact evaluation may affect those we work with. For example, we assess whether it would be possible to randomize and how that should be done (e.g., through a public lottery). In acute emergencies, we do not conduct impact evaluations. Instead, we conduct 'real-time evaluations': rapid assessments of our interventions early in the emergency that provide immediate feedback to the program teams.
DI:  Given that you deal with emergencies and people facing high levels of stress and trauma, how do you bring different disciplines together in the impact evaluations you work on? How does that work out?
JA:  Because in most cases we work with populations affected by both conflict and poverty, we are trying to improve multiple outcomes. For example, in our education work, we want to test approaches to improving literacy and numeracy as well as social emotional learning (what economists refer to as non-cognitive skills that are related to many later life outcomes). We want to make sure we have the best interventions for the outcomes and that we are measuring the outcomes well. Sometimes there are researchers who cross disciplines. Other times it means trying to work with a multidisciplinary team.
DI:  In terms of institutions (or types of institutions), who do you collaborate with on your impact evaluations? How can individual researchers interested in working with you find out about opportunities to collaborate?
JA:  We look for researchers with expertise in our priority areas and those who are strongly committed to policy and program impact. We currently work with researchers from a range of universities and research institutions from the U.S. and Europe. We are looking to expand our partnerships with research institutions in the countries where we work.
Researchers can reach out to me or other colleagues on the research team or relevant technical teams if they are interested in discussing potential collaborations. We have found it extremely useful to explore potential partnerships with those who have overlapping research interests and then search for funding and implementation opportunities together.
DI: The IRC prides itself on how much of the budget goes directly into programs. How do you fund the impact evaluations you do? 
JA:  In a few cases, donors have provided separate funding for impact evaluations. Most of the time, we have carved out impact evaluation budgets under program budgets for "M&E." The latter has allowed us to conduct more evaluations, but it has been a struggle to carry out evaluations on these tight budgets. My hope is that more donors include separate funds or require a reasonable percentage of the budget for impact evaluations. If they don't, it will continue to be a challenge to fund impact evaluations or any rigorous research, because we don't want to take funds away from the programs.
DI:  Relative to other forms of evaluation, in impact evaluation there is a premium on the evaluator/researcher being engaged earlier in the project, ideally in the design. Indeed, this could help the project team expand the set of different interventions that they are testing in the evaluation. However, within evaluation departments there is a long tradition of independence, which can create tension with the more integrated role that impact evaluation can play. How are you dealing with this at the IRC?
JA:  We greatly value early engagement of the researcher in the design of the evaluation and, at times, also helping to inform the intervention based on existing research. This has been the most successful model for us and, I believe, leads to better designed programs and evaluations. I think independence comes more from the incentives of researchers to publish in peer-reviewed journals. I think this will become more of an issue as donors move to 'third party evaluators.' I hope early engagement with evaluators is not sacrificed for what I think is a false sense of independence.
DI:  Can you give us a concrete example of how the results of an impact evaluation have changed the way you do something at IRC?  
JA:  We partnered with Macartan Humphreys at Columbia University to evaluate the impact of a community-driven reconstruction (CDR) program in DRC on social cohesion, economic well-being and governance. Even though the implementation was good (a lot of schools and bridges were built), the evaluation revealed little impact. While this was disappointing for all involved, it has led us to commission a review of CDR to thoroughly examine the mixed results across the handful of studies that exist. We have also engaged with policymakers and donors around the current gaps, both in theory and program design, to motivate design improvement and sustained support for research around this type of programming. Our technical team has also been working across multiple countries to examine the assumptions behind this approach, and we have committed to developing and rigorously evaluating revised, theoretically motivated and contextually appropriate approaches to continue to build evidence in this area. We are also expanding our research to examine not just the impact but the many unanswered questions about how, why, when and for whom CDR programs may or may not work.
DI: Where do you see your priorities or balance between funding efficacy trials (e.g. a particular intervention or policy works under ideal conditions on a small scale) versus effectiveness trials (e.g. a particular intervention works under real-world conditions with typical messy implementation) versus mechanism experiments (e.g. testing particular mechanisms through which a policy is supposed to work when testing the policy itself may be difficult)?
JA:  In the coming years, I think we will be working on all three. To date, we have largely focused on effectiveness trials, but as the evidence grows in some areas like CDR, it makes sense to move to mechanism experiments. In other areas, like prevention of and response to violence against women, we still need to focus on efficacy and effectiveness.