
The Copenhagen Consensus 2012: reflections on impact evaluation’s role in the tyranny of the known

By Jed Friedman

Very recently, the results of the third global Copenhagen Consensus were released. This semi-regular event bills itself as an effort to bring together “the world’s smartest minds to analyze the costs and benefits of different approaches to tackling the world’s biggest problems”. This year’s exercise sought to determine the best ways of advancing welfare by “supposing that an additional $75 billion of resources were at [the experts’] disposal over a 4-year period”. Review papers were commissioned in domains such as infectious disease, education, water and sanitation, and climate change. These papers were intended to inform an expert panel of four Nobel laureates in economics (plus one other notable economist), which then issued a prioritized list of interventions over which to allocate the funds. Here are the top five recommended interventions (and the domains to which they principally belong):

1. Bundled interventions to reduce under-nutrition in pre-schoolers (listed in both the hunger and education domains)

2. Subsidized malaria combination therapy (infectious disease)

3. Expanded childhood immunization coverage (infectious disease)

4. Deworming of school children (infectious disease)

5. Expanded tuberculosis treatment (infectious disease)

I find it fascinating that four of the top five interventions deal with infectious disease – and the top intervention falls under education and social protection – because, to date, the three sectors of health, education, and social protection have received the vast majority of impact evaluation funding and effort. In fact, the Copenhagen Consensus thought experiment is well suited to the task and tools of impact evaluation: it envisions a finite budget spent on specific projects that have been vetted through rigorous cost-effectiveness calculations, often based on a series of stand-alone, and hence disjoint, evaluations. The problem, though, is that not every global challenge fits this model of assessment.

Let’s contrast the commissioned work on infectious disease to that on carbon dioxide (CO2) abatement:

The review paper on infectious disease, authored by Dean Jamison, Prabhat Jha, Ramanan Laxminarayan, and Toby Ord, draws on thousands of studies in the biomedical and social-science literature compiled by the Disease Control Priorities Project II (DCP II), which engaged over 350 authors to estimate the cost-effectiveness of 315 interventions.

Based on this extensive body of work, the disease review paper “identifies six key interventions in terms of their cost-effectiveness, the size of the disease burden they address, the amount of financial protection they provide, their feasibility of implementation and their relevance for development assistance budget.” The amassed wealth of evidence is convincing. As noted above, the expert panel rates four of the six very favorably, and the other two are rated just outside the top five (they rank 8th and 14th, respectively).

Against this relative wealth of cost-effectiveness research stands the Challenge Paper on CO2 abatement by Richard Tol. This brief paper largely summarizes previous modeling work and presents a wide range of cost-effectiveness estimates, a range that derives from uncertainty over key parameter values such as the discount rate, the relative weight given to citizens in rich and poor countries, the climatic impact of increased atmospheric CO2 concentrations, and so on. The dearth of actionable knowledge is partly due to this uncertainty; as Tol has pointed out elsewhere, “current estimates of the economic impact of climate change are incomplete… the research efforts on the economic impact of climate change are minute and lack diversity”.

My sense is that we are only beginning to narrow our estimates of the benefits of abatement, as we gradually become more certain of the impacts of climate change across a variety of interrelated dimensions such as reduced crop yields, ocean acidification, and sea level rise. Most importantly, we don’t yet have a firm grasp of the increased likelihood of extreme climate scenarios – yet these scenarios would likely dominate any cost estimate. Furthermore, the benefit of any particular abatement intervention would have to be determined by its interaction with other existing abatement and adaptation activities.

These are some of the considerations that led Tol to reject the Copenhagen exercise as biased against possible investments in climate change mitigation: “Climate policy is a long program, not a short project… climate policy is a portfolio of adaptation, abatement of various gases, R&D, and perhaps geo-engineering. Ignoring the complementarity of these options is silly…The analysis reveals that the Copenhagen Consensus is indeed inadequate for a problem like climate change.”

We’ve heard the general thrust of these thoughts before, as they parallel previously stated concerns with the renewed emphasis on assessing results in development financing. Demand for “results” in development, while generally welcome, can subtly shift attention towards domains that are measurable and attributable to individual interventions. By putting various global challenges side by side within its stated framework, the Copenhagen Consensus does prioritize the challenges that currently have a strong evidentiary basis for cost-effectiveness. And there is logic to this prioritization if we are indeed limited to $75 billion of aid with a mandate to achieve measurable gains within the span of a political cycle or two.

But many important development challenges fall outside this framework. And, as a practitioner of impact evaluation, I fear that our burgeoning activities reinforce the shift towards domains that are measurable and attributable to individual interventions. Impact evaluation is a relatively accessible form of inquiry – a layperson can readily grasp both the methods and the research question of a policy or program evaluation. In contrast, much work on climate change is forced to model (as opposed to measure) behavioral responses to a range of projected climate changes. Climate change researchers therefore face an uphill battle in conveying their findings to policymakers and the public in a convincing and easily digested fashion.

Externalities are hard for a layperson to comprehend, yet they often constitute the justification for public action or financing. However, the major positive externalities associated with disease control and other traditional domains of IE occur on a measurable, understandable scale of time and space – infections jump fairly quickly across neighbors. The positive externalities from carbon abatement operate on an altogether different scale, one not fully appreciated by the layperson or, I’d argue, the policymaker. They are also largely beyond the ability of impact evaluation to measure. My concern is that the relative ease of understanding such evidence can in turn generate political demand for more of the same work, and that this process continues to build on itself. Let’s not ignore some of the most pressing questions simply because we cannot easily conduct short-cycle impact evaluations on them.

 

Comments

Submitted by Sean Dalby
These are great points - the top 5 recommendations are real problems for sure, but it's hard to see how mitigating the impacts of climate change doesn't rank up there as well. I guess one of the inherent problems in modeling the effects of climate change is the enormity and complexity of the data involved. Standardization is an issue too, even in establishing the definitive weather measures we should use to model climate change. So advancing proposals for more R&D and interventions for climate change, with all of these issues lurking in the background, is a difficult task for sure.

All that said, I know of at least one project that attempts to amass a large amount of climate change-related data to provide digestible / actionable figures reflecting the vulnerability and preparedness of countries with respect to the possible effects of climate change. Last fall, The Global Adaptation Institute (http://gain.org/) released a website modeling their "GAIN" - Global Adaptation Index (http://index.gain.org/) - scores for countries, largely relying on open data available from The World Bank and the World Health Organization. The methodology of how the scores are calculated is laid out pretty explicitly on the site, and numerous, more finely grained indicators for nations are available as well. One of the biggest problems with this project is, surprise, an extreme lack of data on many countries, but the site does a good job of highlighting where we are missing data too. Of course, there's plenty of room for debate and critique of the kinds of data used for this project and of what we could do to improve the overall validity and importance of the categories employed to visualize the data. (As I found in a previous post of yours, the Terra Populus (http://www.terrapop.org/) project will hopefully make some good strides towards standardizing what data we use.)

However, I think, at a minimum, the project does a great job showing how it's possible to communicate complex ideas surrounding climate change with large amounts of data. (It's also a great success story for open data, but maybe that's for a different post.) None of this changes the inherently slow nature of gathering actionable data useful for studying climate change, but at least it might help communicate the importance of doing so to more people :)