Impact evaluation

What does it mean to do policy-relevant research and evaluation?

Heather Lanthorn

Center for International Forestry Research (CIFOR) researchers upload the data to see the results

What does it mean to do policy-relevant research and evaluation? How does it differ from policy-adjacent research and evaluation? Heather Lanthorn explores these questions and offers some food for thought on intention and decision making.

This post is really a conversation with myself, which I started here, but I would be happy if everyone else conversed on it a bit more: what does it mean to do research that is ‘policy relevant’? From my vantage point in impact evaluation and applied political-economy and stakeholder analyses, ‘policy relevant’ is a glossy label that researchers or organizations can apply to their own work at their own discretion. This is confusing, slightly unsettling, and probably takes some of the gloss off the label.

The main thrust of the discussion is this: we (researchers, donors, folks who have generally bought into the goal of evidence- and evaluation-informed decision-making) should be clear (and more humble) about what is meant by ‘policy relevant’ research and evaluation. I don’t have an answer to this, but I try to lay out some of the key facets below.
 
Overall, we need more thought and clarity – as well as humility – around what it means to be doing policy-relevant work. As a start, we may try to distinguish work that is ‘policy adjacent’ (done on a policy) from work that is ‘decision-relevant’ or ‘policymaker-relevant’ (done with the explicit, ex ante purpose of informing a policy or practice decision, and therefore with an intent to be actionable).
 
I believe the distinction I am trying to draw echoes what Tom Pepinsky wrestled with when he blogged that it was the “murky and quirky” questions and research (a delightful turn of phrase that Tom borrowed from Don Emmerson) “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own idiosyncratic settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change) when compared to a grander, paradigmatic question.
 
Throughout, my claim is not that one type of work is more important or that one type will always inform better decision-making. I am, however, asking that, as “policy-relevant” becomes an increasingly popular buzzword, we pause and think about what it means.

Building evidence-informed policy networks in Africa

Paromita Mukhopadhyay

Evidence-informed policymaking is gaining importance in several African countries. Networks of researchers and policymakers in Malawi, Uganda, Cameroon, South Africa, Kenya, Ghana, Benin and Zimbabwe are working assiduously to ensure credible evidence reaches government officials in time and are also building the capacity of policymakers to use the evidence effectively. The Africa Evidence Network (AEN) is one such body working with governments in South Africa and Malawi. It held its first colloquium in November 2014 in Johannesburg.  



Africa Evidence Network, the beginning

A network of over 300 policymakers, researchers and practitioners, AEN is now emerging as a regional body in its own right. The network began in December 2012 with a meeting of 20 African representatives at 3ie’s Dhaka Colloquium of Systematic Reviews in International Development.

Buffet of Champions: What Kind Do We Need for Impact Evaluations and Policy?

Heather Lanthorn
I realize that the thesis of “we may need a new kind of champion” sounds like a rather anemic pitch for Guardians of the Galaxy. Moreover, it may lead to inflated hopes that I am going to propose that dance-offs be used more often to decide policy questions. While I don’t necessarily deny that this is a fantastic idea (and it would certainly boost C-SPAN viewership), I want to quickly dash hopes that this is the main premise of this post. Rather, I am curious why “we” believe that policy champions will be keen on promoting and using impact evaluations (and subsequent syntheses of their evidence), and I want to suggest that another range of actors, which I call “evidence” and “issue” champions, may be more natural allies. There has been a recurring storyline in recent literature and musings on (impact) evaluation and policy- or decision-making:
  • First, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
  • Second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report, “What is the evidence on evidence-informed policy-making”, as well as here).
  • Third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
  • Fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into the debates and decisions. This even includes bringing policy champions (say, bureaucrats) on as research PIs.

There seems to be a sleight of hand at work in the above formulation, and it is somewhat worrying in terms of equipoise and the possible use of the range of results that can emerge from an impact evaluation study. Said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at the start of the evaluation.

Do impact evaluations tell us anything about reducing poverty?

Markus Goldstein
I recently was thinking about what impact evaluations in development can tell us about poverty reduction. On one level this is a ridiculous question. Most of the impact evaluations out there are designed to look at interventions that improve people's lives, and the work is done in developing countries, so it follows that we are making poor people's lives better, right? That's less obvious.
 

The need to improve transport impact evaluations to better target the Bottom 40%

Julie Babinard
In line with the World Bank’s new overarching goals to decrease extreme poverty to 3% of the world's population by 2030 and to raise the incomes of the bottom 40% in every country, what can the transport sector do to provide development opportunities, such as access to employment and services, to the poorest?

Estimating the direct and indirect benefits of transport projects remains difficult. Only a handful of rigorous impact evaluations have been done as the methodologies are technically and financially demanding. There are also differences between the impact of rural and urban projects that need to be carefully anticipated and evaluated.

Can we simplify the methodologies?

Despite the Bank’s rich experience with transport development projects, it remains quite difficult to fully capture the direct and indirect effects of improved transport connectivity and mobility on poverty outcomes. There are many statistical problems that come with impact evaluation. Chief among them, surveys must be carefully designed to avoid some of the pitfalls that usually hinder the evaluation of transport projects (sample bias, timeline, direct vs. indirect effects, issues with control group selection, etc.).

Impact evaluation typically requires comparing groups that have similar characteristics, where one group is located in the area of a project (the treatment group) and is therefore likely to be affected by the project's implementation, while the other is not (the control group). Ideally, both groups should be randomly selected and sufficiently large to minimize sample bias. In the majority of road transport projects, the reality is that it is difficult to identify control groups with which to properly evaluate the direct and indirect impacts of road transport improvements. Road projects also take a long time to implement, and it is difficult to monitor effects on both the control and treatment groups for the duration of a project. Statistical and econometric tools can compensate for these methodological shortcomings, but they still require significant resources and know-how to be applied in a systematic and successful manner.
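The treatment-versus-control logic described above can be illustrated with a minimal difference-in-differences sketch. This is a hypothetical example with entirely simulated numbers, not data from any actual transport project: subtracting the control group's change over time from the treatment group's change removes trends common to both groups, leaving an estimate of the project's effect.

```python
import random

random.seed(0)

# Hypothetical illustration: simulated household incomes before and after a
# road project, for a treatment group (near the project) and a control group.
# All parameters below are invented for demonstration purposes.
def simulate(n, baseline, growth, effect):
    """Return (before, after) income lists; `effect` is the project impact."""
    before = [baseline + random.gauss(0, 10) for _ in range(n)]
    after = [b + growth + effect + random.gauss(0, 10) for b in before]
    return before, after

t_before, t_after = simulate(500, baseline=100, growth=5, effect=12)  # treatment
c_before, c_after = simulate(500, baseline=100, growth=5, effect=0)   # control

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: the control group's change captures the shared
# time trend (here, growth of 5), so subtracting it isolates the effect (12).
did = (mean(t_after) - mean(t_before)) - (mean(c_after) - mean(c_before))
print(f"Estimated project impact: {did:.1f}")
```

Note that the estimator only recovers the true effect here because the simulated groups share the same underlying trend; in real road projects, as discussed above, finding a comparable control group is precisely the hard part.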

Africa Impact Evaluation Podcast: Economic Empowerment of Young Women in Africa #AfricaBigIdeas


When it comes to helping young women in Africa with both economic and social opportunity, what does the evidence tell us?  Broadcaster Georges Collinet sat down with researchers and policymakers to discuss the hard evidence behind two programs that have succeeded in giving girls a better chance at getting started in their adult lives.

Are Impact Evaluations Enough? The Social Observatory Approach to Doing-by-Learning

Vijayendra Rao

Impact Evaluations are just one of many important tools to improve “adaptive capacity.” To improve implementation, they need to be integrated with monitoring and decision support systems, methods to understand mechanisms of change, and efforts to build feedback loops that pay attention both to everyday and long-term learning.  While there has been some scholarly writing and advocacy on this point, it has been more talk than action. 

Hammers, wrenches, and development policy

Markus Goldstein
The old saw goes: when you have a hammer, everything looks like a nail. But what if the best way to fix your broken policy is actually a bolt? I was recently at a workshop where someone was presenting preliminary results of an evaluation of a cash transfer program which, while perhaps started with social-protection objectives in mind, actually seems to have had impacts on business creation and revenues that dwarfed those of your average business training or microfinance program.
 
