
Enforcing Accountability in Decision-Making

Heather Lanthorn

A recent episode reminded us of why we began this series of posts, of which this is the last. We recently saw our guiding scenario for the series play out: a donor was funding a pilot project accompanied by a rigorous evaluation, which was intended to inform further funding decisions.

In this specific episode, a group of donors discussed an ongoing pilot programme in Country X, part of which was being evaluated using a randomized controlled trial. The full results and analyses were not yet in; the preliminary results, marginally significant, suggested that there ought to be a larger pilot taking lessons learnt into account.

Along with X’s government, the donors decided to scale up. The donors secured a significant funding contribution from the Government of X — before the evaluation yielded results. Indeed, securing government funding for the scale-up, along with a few innovations in the operational model, had already given this project a sort of superstar status in the eyes of both the donors and the government. It appeared the donors in question had committed to the government that the pilot would be scaled up before the results were in. Moreover, a little inquiry revealed that the donors did not have clear benchmarks or decision criteria going into the pilot about key impacts and magnitudes — that is, the types of evidence and results — that would inform whether to take the project forward.

There was evidence (or at least it was on the way) and there was a decision, but it is not clear how they were linked or how one informed the other.

Allowing ‘Revisability’ in Decision-Making

Heather Lanthorn

Throughout this series of posts (1, 2, 3, 4), we have considered two main issues. First, how can evidence and evaluation be shaped to be more useful - that is, directly usable - in guiding decision-makers to initiate, modify, scale up or drop a program? Or, as recently pointed out by Jeff Hammer, how can we better evaluate opportunity costs between programs to aid in making decisions? Second, given that evidence will always be only part of a policy or programmatic decision, how can we ensure that decisions are made (and are perceived to be made) fairly?

For such assurance, we rely primarily on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. If the framework’s four criteria are met, Daniels argues, the deliberative process gains legitimacy and, in consequence, the resulting decisions gain fairness as well as coherence over time.

The first two criteria set us up for the third: first, decision-makers agree ex ante to constrain themselves in deliberation to relevant reasons (determined by stakeholders) and, second, they make public the grounds for a decision after the deliberation. These first two, we argue, can aid organizational learning and coherence in decision-making by setting and using precedent over time - an issue that has been bopping around the blogosphere this week.

Have Evidence, Will… Um, Erm (2 of 2)

Heather Lanthorn

This is the second in a series of posts with suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind is unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the "quality" or "rigor" of evidence by considering how well it addresses the questions and purposes that the decision requires, with the most appropriate tools and timing to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on the evidence.

Giving the Poor What They Need, Not Just What We Have

David Evans
Recently, this blog discussed a study on cinematic representations of development, highlighting notable films such as Slumdog Millionaire and City of God. Over the weekend, I was reminded that even forgettable films can underline key development lessons. In The Incredible Burt Wonderstone, a professional magician engages in international charity work. He explains, “I go to places where children have neither food nor clean water, and I give them magic,” as he passes out magic kits in an unidentified low-income rural community. A journalist asks, “Do you also give them food and clean water?” “Well, no. I’m a magician. I bring magic.” Later, his endeavor having failed, the magician returns to the United States and meets an old friend:

“What about the poor?”
“Turns out they didn’t want magic: They just wanted food and clean water.”
“Ugh. Fools!”
 
The Incredible Burt Wonderstone

What is the Evidence for Evidence-Based Policy Making? Pretty Thin, Actually

Duncan Green

A recent conference in Nigeria considered the evidence that evidence-based policy-making actually, you know, exists. The conference report sets out its theory of change in a handy diagram – the major conference sessions are indicated in boxes.

Conclusion?

‘There is a shortage of evidence on policy makers’ actual capacity to use research evidence and there is even less evidence on effective strategies to build policy makers’ capacity. Furthermore, many presentations highlighted the insidious effect of corruption on use of evidence in policy making processes.’

So What Do I Take Away from the Great Evidence Debate? Final Thoughts (for now)

Duncan Green

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog), is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and 20 pages of top-quality comments (I couldn’t face adding the Twitter traffic).

(For those of you that missed the wonk-war, we had an initial critique of the results agenda from Chris Roche and Rosalind Eyben, a take-no-prisoners response from Chris Whitty and Stefan Dercon, then a final salvo from Roche and Eyben + lots of comments and an online poll. Epic.)

On the debate itself, I had a strong sense that it was unhelpfully entrenched throughout – the two sides were largely talking past each other, accusing each other of ‘straw manism’ (with some justification) and lobbing in the odd cheap shot (my favourite, from Chris and Stefan: ‘Please complete the sentence “More biased research is better because…”’ – debaters take note). Commenter Marcus Jenal summed it up perfectly:

Evidence and Results Wonkwar Final Salvo (for now): Eyben and Roche Respond to Whitty and Dercon + Your Chance to Vote

Duncan Green

In this final post (Chris Whitty and Stefan Dercon have opted not to write a second installment), Rosalind Eyben and Chris Roche reply to their critics. And now is your chance to vote – but only if you’ve read all three posts, please. The comments on this have been brilliant, and I may well repost some next week, when I’ve had a chance to process them.

Let’s start with what we seem to agree upon:

  • Unhappiness with ‘experts’ – or at least the kind that pat you patronizingly on the arm,
  • The importance of understanding context and politics,
  • Power and political institutions are generally biased against the poor,
  • We don’t know much about the ability of aid agencies to influence transformational change,
  • Mixed methods approaches to producing ‘evidence’ are important. And, importantly,
  • We are all often wrong!

We suggest that the principal difference between us concerns our assumptions about: how different kinds of change happen; what we can know about change processes; if, how and when evidence from one intervention can practically be taken and sensibly used in another; and how institutional and political contexts determine how evidence is then used in practice. This set of assumptions has fundamental importance for international development practice.

The Evidence Debate Continues: Chris Whitty and Stefan Dercon Respond from DFID

Duncan Green

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty (left), DFID’s Director of Research and Evidence and Chief Scientific Adviser and Stefan Dercon (right), its Chief Economist, respond.

It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions.” Neither would anyone argue that power, politics and ideology are not central to policy and indeed day-to-day decisions. Much of the rest of yesterday’s passionate blog by Rosalind Eyben and Chris Roche sets up a series of straw men, presenting a supposed case for evidence-based approaches that is far removed from reality and in places borders on the sinister, with its implication that this is some coming together of scientists in laboratories experimenting on Africans, 1930s colonialism, and money-pinching government truth-junkies. Whilst this may work as polemic, the logical and factual base of the blog is less strong.

Rosalind and Chris start with evidence-based medicine, so let’s start in the same place. One of us (CW) started training as the last senior doctors to oppose evidence-based medicine were nearing retirement. ‘My boy,’ they would say, generally with a slightly patronising pat on the arm, ‘this evidence-based medicine fad won’t last. Every patient is different, every family situation is unique; how can you generalise from a mass of data to the complexity of the human situation?’ Fortunately they lost that argument. As evidence-informed approaches supplanted expert opinion, the likelihood of dying from a heart attack dropped by 40% over 10 years, and the research tools which achieved this (of which randomised trials are only one) are now being used to address the problems of health and poverty in Africa and Asia.

The Political Implications of Evidence-Based Approaches (aka Start of This Week’s Wonkwar on the Results Agenda)

Duncan Green

The debate on evidence and results continues to rage. Rosalind Eyben (left) and Chris Roche (right, dressed for battle), two of the organisers of April’s Big Push Forward conference on the Politics of Evidence, kick off a discussion. Tomorrow Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond.

Distinct from its more general usage - what is observed or experienced - ‘evidence’ has acquired a particular meaning relating to proof about ‘what works’, especially robust findings from rigorous experimental trials. But no-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions. Most development workers do not see themselves as scientists in a laboratory, but more as reflective practitioners seeking to learn how to support locally generated transformative processes for greater equity and social justice. Where have these experimental approaches come from and what is at stake?

Lant Pritchett v the Randomistas on the Nature of Evidence - Is a Wonkwar Brewing?

Duncan Green

Recently I had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined some quantitative work (for example the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But the tacit hierarchy of these different kinds of knowledge worried me – anything with a number attached had a privileged position, however partial the number or questionable the process for arriving at it. In contrast, decades of experience were not even credited as ‘evidence’, but often written off as ‘opinion’. It felt like we were in danger of discounting our richest source of insight – gut feeling.

In this state of discomfort, I went off for lunch with Lant Pritchett (right – he seems to have forgiven me for my screw-up of a couple of years ago). He’s a brilliant and original thinker and speaker on any number of development issues, but I was most struck by the vehemence of his critique of the RCT randomistas and the quest for experimental certainty. Don’t get me (or him) wrong, he thinks the results agenda is crucial in ‘moving from an input orientation to a performance orientation’ and set out his views as long ago as 2002 in a paper called ‘It pays to be ignorant’, but he sees the current emphasis on RCTs as an example of the failings of ‘thin accountability’ compared to the thick version.

