Published on Development Impact

DFID's Approach to Impact Evaluation - Part I

As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division.

Development Impact (DI): There has been an increasing interest in impact evaluation (defined as experimental/quasi-experimental analysis of program effects) in DFID. Going forward, what do you see as impact evaluation’s role in how DFID evaluates what it does? How do you see the use of impact evaluation relative to other methods?  

Nick York (NY): The UK has been at the forefront among European countries in promoting the use of impact evaluation in international development, and it is now a very significant part of what we do – driven by the need to make sure our decisions and those of our partners are based on rigorous evidence. We are building prospective evaluation into many of our larger and more innovative operational programmes – we have quite a number of impact evaluations underway or planned, commissioned from our country and operational teams. We also support international initiatives including 3ie, where the UK was a founder member and a major funder; the Strategic Impact Evaluation Fund with the World Bank on human development interventions; and NONIE, the network which brings together developing country experts on evaluation to share experiences on impact evaluation with professionals in the UN, bilateral and multilateral donors.

DI: Given the cost of impact evaluation, how do you choose which projects are (impact) evaluated?

NY:  We focus on those which are most innovative – where the evidence base is considered to be weak and needs to be improved – and those which are large or particularly risky. Personally, I think the costs of impact evaluation are relatively low compared to the benefits they can generate, or compared to the costs of running programmes using interventions which are untested or don’t work.   I also believe that rigorous impact evaluations generate an output – high quality evidence – which is a public good so although the costs to the commissioning organization can be high they represent excellent value for money for the international community. This is why 3ie, which shares those costs among several organisations, is a powerful concept.

DI: Relative to other forms of evaluation, in impact evaluation there is a premium on the evaluator/researcher being engaged earlier in the project, ideally in the design. Indeed, this could help the project team expand the set of different interventions that they are testing in the evaluation. However, within evaluation departments there is a long tradition of independence which can create tension with the more integrated role that impact evaluation can play. How are you dealing with this at DFID?

NY: Since we moved to a decentralized approach two years ago, with evaluation specialists embedded within operational teams and programme funds used to commission evaluations within country offices, our opportunities for doing prospective evaluation designs have been transformed. I really think this makes a huge difference to the quality of the work that can be done, the chance to collect baseline and primary data, the types of methods that can be employed, the opportunities for learning and feedback during the programme, and – perhaps most importantly – the level of ownership and engagement from the programme managers and decision makers. Previously they were only really engaged when an evaluation report arrived on their desks in draft for review, which is much too late. In the UK, we are fortunate that the independent scrutiny of development assistance provided by the Independent Commission on Aid Impact, now 12 months old and reporting to Parliament, means that we also have a very strong level of accountability and incentives.

DI: Following up on the independence issue, how do you make the decision to do impact evaluation within DFID or to contract someone outside to do the evaluation?  

NY: Almost all our evaluations and impact evaluations are, ultimately, done by external experts – our own staff have skills in commissioning and managing evaluations but, unlike the World Bank, we don't have much capacity to do the actual field research using in-house staff. This is a really specialized area, and we are turning to organizations such as JPAL, IPA, the World Bank, private consultancies from all over the world and, increasingly, academic researchers with relevant skills. This make-or-buy decision is driven by skills rather than independence – all our evaluations have governance arrangements (reference groups, peer review, quality assurance) which mean that independence can be handled in that way, and our staff are trained to perform a challenge role.

DI: Do you think that impact evaluation has more of an impact on what policymakers think than other kinds of evaluative evidence?

NY: I think it probably does have a particularly strong role to play, especially for senior decision makers who are very focused on numbers and having to communicate results of programmes to the public and Parliament in straightforward ways. The strong attribution that comes from experimental methods is a powerful way of showing sceptics that development 'makes a difference', and evaluations that collect new primary data are, in my experience, more likely to change thinking than some of the poor quality studies done in the past which simply asked a limited range of the 'usual suspects' what they thought of a programme they themselves had been involved in. However, we should all be very interested in using evaluative evidence for learning and innovation, and here we do need different types of evaluation and evidence. One key thing is finding answers which convince development professionals – they often want to dig behind the measures of impact and understand that development is complex, context specific and depends on local ownership of change, so theory-based evaluation, qualitative research and participatory methods make a vital contribution too.

DI: What are you doing within DFID to make sure that evaluation results that you fund or produce get used?

NY: The key point here is having a decentralized process in which operational teams are involved from the outset. They are much more likely to use results which they have some ownership over, than reports which come from a unit at head office. The leverage which ICAI has at political level – direct access to Parliament and Ministers – is also crucial; they follow up on reports to see what has been done with them. DFID is required to provide management responses within weeks of ICAI reports coming out, and this is being taken extremely seriously as the first few reports have shown. One thing we have been successful at in our new approach is building much greater awareness of what evaluation is there for and how it can be used, from the top of the organization downwards.

DI: What do you see as the best way(s) to build capacity to a) use and b) do impact evaluation among the countries you are working with?  

NY: One key part of this is promoting demand for impact evaluation, by discussing it with senior decision makers in developing countries. For example, our DFID Uganda office has done this by helping to fund the evaluation unit in the Office of the Prime Minister in Kampala. We and our partners (3ie, NONIE and SIEF) have played a role in raising awareness among developing country governments and emerging economies, including Ghana, South Africa, India and China. 3ie did an excellent paper on the lessons from institutionalizing evaluation in various countries, including the path-breaking work in Latin America. Another aspect is building institutional capacity, not just training for individuals. The UN (particularly UNICEF and UNDP), DAC partners and the African Development Bank have helped to map out the demand. We have been co-funding (with the World Bank, Sweden, Belgium and the regional development banks) the Centers for Learning, Evaluation and Results (CLEAR) so that institutional capacity is built in the regions, including an excellent new centre at the University of Witwatersrand in South Africa which is now providing high quality training in the region.

This is part 1. Tomorrow we will explore DFID's approach to funding research and impact evaluations.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
