
Have Evidence, Will… Um, Erm (2 of 2)

By Heather Lanthorn

This is the second in a series of posts with suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind remains unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the “quality” or “rigor” of evidence by considering how well it addresses the questions and purposes relevant to a decision, using the most appropriate tools and timing to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on the evidence.

Giving the Poor What They Need, Not Just What We Have

By David Evans

Recently, this blog discussed a study on cinematic representations of development, highlighting notable films such as Slumdog Millionaire and City of God. Over the weekend, I was reminded that even forgettable films can underline key development lessons. In The Incredible Burt Wonderstone, a professional magician engages in international charity work. He explains, “I go to places where children have neither food nor clean water, and I give them magic,” as he passes out magic kits in an unidentified low-income rural community. A journalist asks, “Do you also give them food and clean water?” “Well, no. I’m a magician. I bring magic.” Later, his endeavor having failed, the magician returns to the United States and meets an old friend:

“What about the poor?”
“Turns out they didn’t want magic: They just wanted food and clean water.”
“Ugh. Fools!”
 

The Incredible Burt Wonderstone

What is the Evidence for Evidence-Based Policy Making? Pretty Thin, Actually

By Duncan Green

A recent conference in Nigeria considered the evidence that evidence-based policy-making actually, you know, exists. The conference report sets out its theory of change in a handy diagram – the major conference sessions are indicated in boxes.

Conclusion?

‘There is a shortage of evidence on policy makers’ actual capacity to use research evidence, and there is even less evidence on effective strategies to build policy makers’ capacity. Furthermore, many presentations highlighted the insidious effect of corruption on the use of evidence in policy making processes.’

So What Do I Take Away from the Great Evidence Debate? Final Thoughts (for Now)

By Duncan Green

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog), is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and 20 pages of top-quality comments (I couldn’t face adding the Twitter traffic).

(For those of you that missed the wonk-war, we had an initial critique of the results agenda from Chris Roche and Rosalind Eyben, a take-no-prisoners response from Chris Whitty and Stefan Dercon, then a final salvo from Roche and Eyben + lots of comments and an online poll. Epic.)

On the debate itself, I had a strong sense that it was unhelpfully entrenched throughout – the two sides were largely talking past each other, accusing each other of ‘straw manism’ (with some justification) and lobbing in the odd cheap shot (my favourite, from Chris and Stefan: ‘Please complete the sentence “More biased research is better because…”’ – debaters take note). Commenter Marcus Jenal summed it up perfectly:

Evidence and Results Wonkwar Final Salvo (for now): Eyben and Roche Respond to Whitty and Dercon + Your Chance to Vote

By Duncan Green

In this final post (Chris Whitty and Stefan Dercon have opted not to write a second installment), Rosalind Eyben and Chris Roche reply to their critics. And now is your chance to vote – but only if you’ve read all three posts, please. The comments on this have been brilliant, and I may well repost some next week, when I’ve had a chance to process them.

Let’s start with what we seem to agree upon:

  • Unhappiness with ‘experts’ – or at least the kind that pat you patronizingly on the arm,
  • The importance of understanding context and politics,
  • Power and political institutions are generally biased against the poor,
  • We don’t know much about the ability of aid agencies to influence transformational change,
  • Mixed methods approaches to producing ‘evidence’ are important. And, importantly,
  • We are all often wrong!

We suggest the principal difference between us concerns our assumptions about: how different kinds of change happen; what we can know about change processes; whether, how, and when evidence from one intervention can practically be taken and sensibly used in another; and how institutional and political contexts determine how evidence is then used in practice. This set of assumptions is of fundamental importance for international development practice.

The Evidence Debate Continues: Chris Whitty and Stefan Dercon Respond from DFID

By Duncan Green

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty (left), DFID’s Director of Research and Evidence and Chief Scientific Adviser and Stefan Dercon (right), its Chief Economist, respond.

It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions.” Neither would anyone argue that power, politics and ideology are not central to policy and indeed day-to-day decisions. Much of the rest of yesterday’s passionate blog by Rosalind Eyben and Chris Roche sets up a series of straw men, presenting a supposed case for evidence-based approaches that is far removed from reality and in places borders on the sinister, with its implication that this is some coming together of scientists in laboratories experimenting on Africans, 1930s colonialism, and money-pinching government truth-junkies. Whilst this may work as polemic, the logical and factual base of the blog is less strong.

Rosalind and Chris start with evidence-based medicine, so let’s start in the same place. One of us (CW) started training as the last senior doctors to oppose evidence-based medicine were nearing retirement. ‘My boy,’ they would say, generally with a slightly patronising pat on the arm, ‘this evidence-based medicine fad won’t last. Every patient is different, every family situation is unique; how can you generalise from a mass of data to the complexity of the human situation?’ Fortunately they lost that argument. As evidence-informed approaches supplanted expert opinion, the likelihood of dying from a heart attack dropped by 40% over 10 years, and the research tools which achieved this (of which randomised trials are only one) are now being used to address the problems of health and poverty in Africa and Asia.

The Political Implications of Evidence-Based Approaches (aka Start of This Week’s Wonkwar on the Results Agenda)

By Duncan Green

The debate on evidence and results continues to rage. Rosalind Eyben (left) and Chris Roche (right, dressed for battle), two of the organisers of April’s Big Push Forward conference on the Politics of Evidence, kick off a discussion. Tomorrow Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond.

Distinct from its more general usage of what is observed or experienced, ‘evidence’ has acquired a particular meaning relating to proof about ‘what works’, particularly through robust evidence from rigorous experimental trials. But no-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions. Most development workers do not see themselves as scientists in a laboratory, but more as reflective practitioners seeking to learn how to support locally generated transformative processes for greater equity and social justice. Where have these experimental approaches come from and what is at stake?

Lant Pritchett v the Randomistas on the Nature of Evidence - Is a Wonkwar Brewing?

By Duncan Green

Recently I had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined some quantitative work (for example the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But the tacit hierarchy of these different kinds of knowledge worried me – anything with a number attached had a privileged position, however partial the number or questionable the process for arriving at it. In contrast, decades of experience were not even credited as ‘evidence’, but often written off as ‘opinion’. It felt like we were in danger of discounting our richest source of insight – gut feeling.

In this state of discomfort, I went off for lunch with Lant Pritchett (right – he seems to have forgiven me for my screw-up of a couple of years ago). He’s a brilliant and original thinker and speaker on any number of development issues, but I was most struck by the vehemence of his critique of the RCT randomistas and the quest for experimental certainty. Don’t get me (or him) wrong, he thinks the results agenda is crucial in ‘moving from an input orientation to a performance orientation’ and set out his views as long ago as 2002 in a paper called ‘It pays to be ignorant’, but he sees the current emphasis on RCTs as an example of the failings of ‘thin accountability’ compared to the thick version.

Doing Development Differently - A Chimera?

By Maya Brahmam

A lot has been written recently about “doing development differently”, from crowdsourcing the next Millennium Development Goals (à la ONE’s Jamie Drummond) to the Copenhagen Consensus and its 16 investments with the biggest payoffs for development (listed here).

Enter Ha-Joon Chang, a noted Cambridge economist, who sees development as a different game altogether – the analogy he uses is that current development thinking is like “Hamlet without the prince.” According to Duncan Green’s recent blog post, Chang believes that with all the focus on health, education, and poverty reduction, we are missing the elephant in the room (the prince): what poor countries really need, namely “productive capabilities” and a focus on upgrading skills and industry, which donors and international organizations have largely set aside since the 1980s.

Getting Evaluation Right: A Five Point Plan

By Duncan Green

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public, warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie).

Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for evidence and rigor in their evidence collection; worse, they hardly ever publish negative evaluations. The internal wrangling and pressure not to publish these must have been considerable:

  • ‘What will our donors say? How will we justify poor results to our funders and contributors?’
  • ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
  • ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
  • ‘Our field managers will get upset, angry and discouraged when they read these.’
  • ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
  • ‘There are so many nuances on the ground. Detractors will mis-use these scores and ignore these ground realities.’

The zeitgeist may indeed be transparency, but few organizations are actually doing it.

