
Impact evaluation

Weekly wire: The global forum

Roxanne Bauer

These are some of the views and reports relevant to our readers that caught our attention this week.


Middle-Class Heroes: The Best Guarantee of Good Governance
Foreign Affairs
The two economic developments that have garnered the most attention in recent years are the concentration of massive wealth in the richest one percent of the world’s population and the tremendous, growth-driven decline in extreme poverty in the developing world, especially in China. But just as important has been the emergence of large middle classes in developing countries around the planet. This phenomenon—the result of more than two decades of nearly continuous fast-paced global economic growth—has been good not only for economies but also for governance. After all, history suggests that a large and secure middle class is a solid foundation on which to build and sustain an effective, democratic state. Middle classes not only have the wherewithal to finance vital services such as roads and public education through taxes; they also demand regulations, the fair enforcement of contracts, and the rule of law more generally—public goods that create a level social and economic playing field on which all can prosper.
 

Humanitarian reform: What's on - and off - the table
IRIN News
As pressure mounts to come up with concrete proposals for the future of humanitarian aid, horse-trading and negotiations have begun in earnest behind the scenes in the lead-up to the first-ever World Humanitarian Summit (WHS), to be held in Istanbul in May. The release this week of the UN secretary-general’s vision for humanitarian reforms marks one of the last stages in a multi-year process that has seen consultations with some 23,000 people around the world on how to improve crisis response. (See: Editor’s Take: The UN Secretary General’s vision for humanitarian reform.) Hundreds of ideas are floating around. Which are now rising to the top? And which are being pushed to the side? Here’s our take on the emerging trends:
 

What does it mean to do policy-relevant research and evaluation?

Heather Lanthorn

What does it mean to do policy-relevant research and evaluation? How does it differ from policy-adjacent research and evaluation? Heather Lanthorn explores these questions and offers some food for thought on intention and decision making.

This post is really a conversation with myself, which I started here, though I would be happy if more people joined in: what does it mean to do research that is ‘policy relevant’? From my vantage point in impact evaluation and applied political-economy and stakeholder analyses, ‘policy relevant’ is a glossy label that researchers or organizations can apply to their own work at their own discretion. This is confusing, slightly unsettling, and probably takes some of the gloss off the label.

The main thrust of the discussion is this: we (researchers, donors, folks who have generally bought into the goal of evidence- and evaluation-informed decision-making) should be clearer (and more humble) about what is meant by ‘policy relevant’ research and evaluation. I don’t have an answer to this, but I try to lay out some of the key facets below.
 
Overall, we need more thought and clarity – as well as humility – around what it means to be doing policy-relevant work. As a start, we might distinguish work that is ‘policy adjacent’ (done on a policy) from work that is ‘decision-relevant’ or ‘policymaker-relevant’ (done with the explicit, ex ante purpose of informing a policy or practice decision, and therefore with an intent to be actionable).
 
I believe the distinction I am trying to draw echoes what Tom Pepinsky wrestled with when he blogged that it was the “murky and quirky” questions and research (a delightful turn of phrase that Tom borrowed from Don Emmerson) “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own idiosyncratic settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change) when compared to a grander, paradigmatic question.
 
Throughout, my claim is not that one type of work is more important or that one type will always inform better decision-making. I am, however, asking that, as “policy-relevant” becomes an increasingly popular buzzword, we pause and think about what it means.

Building evidence-informed policy networks in Africa

Paromita Mukhopadhyay

Evidence-informed policymaking is gaining importance in several African countries. Networks of researchers and policymakers in Malawi, Uganda, Cameroon, South Africa, Kenya, Ghana, Benin and Zimbabwe are working assiduously to ensure credible evidence reaches government officials in time and are also building the capacity of policymakers to use the evidence effectively. The Africa Evidence Network (AEN) is one such body working with governments in South Africa and Malawi. It held its first colloquium in November 2014 in Johannesburg.  



Africa Evidence Network: the beginning

A network of over 300 policymakers, researchers and practitioners, AEN is now emerging as a regional body in its own right. The network began in December 2012 with a meeting of 20 African representatives at 3ie’s Dhaka Colloquium of Systematic Reviews in International Development.

Buffet of Champions: What Kind Do We Need for Impact Evaluations and Policy?

Heather Lanthorn
I realize that the thesis of “we may need a new kind of champion” sounds like a rather anemic pitch for Guardians of the Galaxy. Moreover, it may lead to inflated hopes that I am going to propose that dance-offs be used more often to decide policy questions. While I don’t necessarily deny that this is a fantastic idea (and one that would certainly boost C-SPAN viewership), I want to quickly dash hopes that this is the main premise of this post. Rather, I am curious why “we” believe that policy champions will be keen on promoting and using impact evaluations (and subsequent syntheses of that evidence), and I want to suggest that another range of actors, which I call “evidence” and “issue” champions, may be more natural allies.

There has been a recurring storyline in recent literature and musings on (impact) evaluation and policy- or decision-making:
  • First, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
  • Second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report, “What is the evidence on evidence-informed policy-making”, as well as here).
  • Third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
  • Fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into the debates and decisions. This even includes bringing policy champions (say, bureaucrats) on as research PIs.

There seems to be a sleight of hand at work in the above formulation, and it is somewhat worrying in terms of equipoise and the possible use of the range of results that can emerge from an impact evaluation study. Said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at the start of the evaluation.

It’s Not about the Technology, It’s about the People: Evaluating the Impact of ICT Programs

Shamiela Mir

How can we better design ICT programs for development and evaluate their impact on improving people’s well-being? A new approach, the Alternative Evaluation Framework (AEF), takes into account multiple dimensions of people’s economic, social and political lives rather than simply focusing on access to, expenditure on and infrastructure of ICT tools. This new approach is presented in the How-To Note, Valuing Information: A Framework for Evaluating the Impact of ICT Programs, authored by Bjorn-Soren Gigler, a Senior Governance Specialist at the World Bank Institute’s Innovation Practice.

When We (Rigorously) Measure Effectiveness, What Do We Find? Initial Results from an Oxfam Experiment

Duncan Green

Guest post from ace evaluator Dr Karl Hughes (right, in the field. Literally.)

Just over a year ago, I wrote a blog post featured on FP2P – Can we demonstrate effectiveness without bankrupting our NGO and/or becoming a randomista? – about Oxfam’s attempt to up its game in understanding and demonstrating its effectiveness. There, I outlined our ambitious plan of ‘randomly selecting and then evaluating, using relatively rigorous methods by NGO standards, 40-ish mature interventions in various thematic areas’. We have dubbed these ‘effectiveness reviews’. Given that most NGOs are currently grappling with how to credibly demonstrate their effectiveness, our ‘global experiment’ has grabbed the attention of some eminent bloggers (see William Savedoff’s post for a recent example). Now I’m back with an update.