The value of evaluations: asking the right questions


Charting Accountability in Public Policy and Development
Successful public policy monitoring and evaluation: the World Bank identifies five ground rules that deliver results that are both useful and used.


During the Spring Meetings, the Governance Global Practice, the Independent Evaluation Group, and the International Initiative for Impact Evaluation (3ie) co-hosted a lively panel discussion with a provocative title: Why focus on results when no one uses them?
 
Albert Byamugisha, Commissioner for Monitoring and Evaluation from the Uganda Office of the Prime Minister, kicked off the session with a rebuttal to this question by sharing examples of the Ugandan government’s commitment to using and learning from both positive and negative results. Although this sounds like common sense, it is not always common practice.

Mr. Byamugisha told us about Uganda’s public fora where citizens hold the government accountable; the two-million-dollar commitment to evaluation, a substantial expenditure for Uganda; and a committee, set up with the Ministry of Finance, academia, and the Prime Minister’s office, that oversees independent evaluations and formulates policy recommendations for presentation to ministers.

What works? What doesn’t work? Real-time evaluations are used to inform policymakers about what is happening on the ground so they can take action to address problems before they get worse.
 
I was heartened to hear about the capacity-building efforts in Uganda among government officials, CSOs, and academia to help connect evaluation outputs and outcomes to policymaking and action.

In many countries this is simply not happening, and some are struggling to obtain any data or information about program results at all.
 
In developing countries, we must invest in the infrastructure that turns raw data into relevant evidence through evaluation. Further efforts are needed to make this evidence available to the many actors who can benefit from it, most notably the administrators and policymakers responsible for formulating and implementing government programs.
 
We should also acknowledge that, 99 percent of the time, public policies rest on hypotheses rather than infallible laws. There is rarely any guarantee that policies on paper will be implemented along a linear path.

If citizens’ security or health is at risk, governments cannot always wait for final proof of concept before taking decisive action. In these cases, evaluations help to test those hypotheses and generate the necessary data to see how those hypotheses hold up.

So how do we design evaluations so that they are truly used and useful for policymaking?

  1. Be open and keep it simple. Evaluations must be made public and understandable, so that there is pressure to actually use them. 

  2. Where’s the beef? Do the evaluations contain information that can actually be used by policy makers? If final conclusions for impact and action cannot be made, can we still learn other lessons in the process?

  3. Timing is everything. Evaluations should be planned and completed at strategic times so that policymakers have the necessary data when they are making decisions. Incomplete but timely information may be more valuable for policymaking than conclusive post-mortems.

  4. Make evaluations constructive. Evaluations are more effective when they are seen as support for getting programs right rather than as an inquisitional judgment. People must believe in the power of evaluations so that they won’t hide ‘unfavorable’ or ‘negative’ information during the evaluation process.

  5. Ask the right questions. When creating evaluations, start by determining what policymakers need. Identify the relevant research questions first; methodology and technology can follow.

Legitimate evaluations demand strong institutional architecture and systems; the planning, funding, management, quality control, and dissemination of evaluations are all critical to their effectiveness.

The World Bank’s Governance Global Practice aims to help governments build this architecture by integrating evaluation into the policymaking process. We can work with planners, budget offices, and Centers of Government to help connect the evaluation architecture to the policy side.

Although we are still developing this area of work, it is critical to our mission and to our desire to help spark transformational change in developing countries. Without the insight that evaluations provide, such changes can go either way.
 
So, in response to the question, “Why focus on results when no one uses them?”, I would beg to differ with its implicit premise. We do use results; let’s just ensure that they are relevant, timely, and understandable enough to provide useful guidance for action.

Authors

Mario Marcel

Senior Director, Governance Global Practice

Join the Conversation

Glenys Jones
May 29, 2015

The lesson from developing the Monitoring and Reporting System for Tasmania's national parks and reserves is that evaluation needs to be RELEVANT (e.g. to staff and stakeholders), RELIABLE (e.g. evidence-based, founded in good science), ACCESSIBLE (e.g. evaluation reports available online) and RESILIENT (e.g. to institutional changes, variable budgets, etc.). See http://www.parks.tas.gov.au/file.aspx?id=31865

Adiza Lamien Ouando
May 29, 2015

I do agree with all the rules, especially the one on timing, the need to focus on results, and the need to make evaluations attractive. I would like to add two rules:
1. Ensure that all stakeholders, especially youth and women's groups, have been sent the TORs and know their content.
2. Avoid starting the assessment with prejudices, thinking that ‘public’ always means shortcomings, weak results, and a top-down approach.
Show public officers that evaluation first tries to understand, with a positive attitude, before analyzing and making recommendations.
Rule one is very important, as evaluators often feel they should impress people by using complicated words, tools, or diagrams in their reports.
Public policy evaluation should be an opportunity for communication with public actors, to make evaluation attractive, to encourage more demand from them, and in this way to promote the culture of evaluation and its integration into governance.

Godfrey Bwanika
May 29, 2015

My answer is yes and no. Yes, because the government of Uganda has deliberately established platforms through which evaluation results are used, for example the Cabinet retreats that examine the performance of government sectors. In local governments, evaluation results are used to determine financing levels. During assessment against the evaluation policy, some local governments are rewarded and others are punished through funding deductions. On the other hand, the use of evaluation results is limited compared to what is generated. Governments need to establish more rewards for using evaluation findings.

Saiful Ali
May 29, 2015

In designing an evaluation, I think the most important factor is selecting the right person(s) to do the evaluation. Evaluation is a very focused and specialized activity, not a common-sense way of analyzing what was desired and what has been achieved.

Anand P Gupta
May 27, 2015

Mario, I agree with your response to the question, “Why focus on results when no one uses them?”
But can we go beyond this question and ask: do policymakers believe in a culture of rigorous impact evaluation of public interventions, a culture in which policymakers demand such evaluations not because they have to comply with any requirement, but because they really want to know the answers to the impact evaluation questions (what works? under what conditions does it work? for whom does it work? what part of a given intervention works? and at what cost?), so that they may draw appropriate lessons from those answers and use them when designing and implementing public interventions in the future?
Mario, what’s your response to this question?

Jaqueline Koning
May 31, 2015

I agree in principle with the 5 points raised in respect of design. I do have two comments though:
1. Designing the evaluation is not all there is to making sure the results influence policy decisions. For example, point 3: yes, timing is important, but a recent report by AidData suggests that an evaluation report may have more impact if it is available at the agenda-setting phase of the policy cycle rather than at the decision-making phase (Who do developing world leaders listen to and why? - www.aiddata.org).
2. The statement "We can work with planners, budget offices, and Centers of Government to help connect evaluation architecture to the policy side", relating to support for developing countries, has to make one heave a huge sigh of exasperation. In my experience (and admittedly because of support from international sources such as the WB), there is much more effort in developing countries on strengthening evidence-based decision-making (with mixed outcomes, so I would love to hear more about Uganda's experience) than there is within and between development organisations. Yet development organisations seem to pay as little attention to evidence (evaluation outcomes) in their own decision-making as do governments of developing countries, so perhaps these organisations should do some internal house-cleaning so that they can advocate a 'do as I do' philosophy rather than a 'do as I say' one.

Maarten de Jong
June 03, 2015

Timing isn’t everything, although it certainly can be important. I agree with most of the points in this excellent blog, but I do feel that we should be careful not to overemphasize the importance of timing. Of course it is great to have a set of ready-to-use, evidence-based recommendations available at the start of a new government term or when a policy expires. In reality, however, recommendations from evaluations are often adopted in a far more unpredictable way. Especially when evaluations result in innovative (and therefore controversial) proposals, it may take years before a real opportunity for adoption occurs. Such an opportunity usually results from shifts in political preferences or external pressure. In fact, research on the impact of spending reviews in the Netherlands since the early 1980s revealed that recommendations often lay idle ‘on the shelf’ for up to a decade before being adopted. The important point is that we should not get discouraged when evaluation results don’t get used right away. Gathering the evidence may prove worthwhile after all in the longer run. Unfortunately, it sometimes just requires a lot of patience.