

What did we learn from real-time tracking of market prices in South Sudan?

By Utz Pape

Economic shocks can be painful and destructive, especially in fragile countries that can become trapped in a cycle of conflict and violence. Effective policy responses must be implemented quickly and based on evidence. This requires reliable and timely data, which are usually unavailable in such countries. This was particularly true for South Sudan, a country that has faced multiple shocks since its independence in 2011. Recognizing the need for such data to assess economic shocks in this fragile country, the team developed a real-time dashboard to track daily exchange rates and weekly market prices (click here for instructions on how to use it).
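The post links to the dashboard itself rather than describing how it is built. Purely as a hypothetical sketch of the kind of aggregation such a tracker needs, here is one way to turn raw daily price quotes into daily and weekly series; the file name, column names and 'USD' item code are all invented for illustration:

```python
# Illustrative sketch only: aggregate hypothetical daily price quotes into
# the daily and weekly series a price-tracking dashboard might display.
import pandas as pd

# Hypothetical input: one row per quote (date, market, item, price_ssp).
quotes = pd.read_csv("market_quotes.csv", parse_dates=["date"])

# Daily median parallel-market exchange rate (SSP per USD) across markets.
fx = (quotes[quotes["item"] == "USD"]
      .groupby("date")["price_ssp"]
      .median()
      .rename("ssp_per_usd"))

# Weekly median price per market and item.
weekly = (quotes[quotes["item"] != "USD"]
          .groupby([pd.Grouper(key="date", freq="W"), "market", "item"])
          ["price_ssp"]
          .median()
          .reset_index())

print(fx.tail())
print(weekly.tail())
```

Medians rather than means are a natural choice for this kind of series, since a single mis-recorded quote then cannot move the published figure.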

There is enough evidence on humanitarian cash transfers. Or perhaps not?

By Ugo Gentilini

Take these two numbers: 165 and 1. The former is the number, in millions, of children who are chronically malnourished or ‘stunted’; the latter is the number of robust impact evaluations comparing cash and in-kind transfers for malnutrition.
I emphasize ‘comparing’ because there is plenty of evidence on individual cash and in-kind (and voucher) programs, but very few studies deliberately assess them under the same context, design parameters, and evaluation framework.

Beyond the quest for "policy implications": Alternative options for applied development researchers

By Humanity Journal

This post, written by Michael Woolcock, is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott, and by Lisa Denney and Pilar Domingo.

My nomination for development’s ‘Most Insightful, Least Cited’ paper is Ariel Heryanto’s “The development of ‘development.'”[1] Originally written in Indonesian in the mid-1980s, Heryanto’s gem has been cited a mere 79 times (according to Google Scholar), even in its carefully-translated English incarnation. For me, this paper is so wonderful because it makes, in clear and clever ways, two key points that bear endless repetition, especially to today’s junior scholars. The first point is that inference from evidence is never self-evident: significance must always be interpreted through theory. Consider the seemingly obvious fact that the sun rises in the east every morning, he writes. What could be more universally and unambiguously true? The problem, of course, is that the sun does not rise in the east; instead, despite every piece of sensory evidence to the contrary, the earth rotates counterclockwise on its axis and revolves around a stationary sun, making it appear as if the sun rises in the east. But we only know this – or, more accurately, claim to know this – because today we happen to have a theory, itself based on more complex forms of observation and theory, that helps us interpret the prevailing evidence, to reconcile it with evidence from analyses of other cosmic phenomena, and thus draw broadly coherent conclusions and inferences.

Heryanto’s second key point is that we are all captives of language, of the limits of any given tongue to convey the subtleties of complex issues. From this premise he proceeds to unpack the clumsy, alluring yet powerful word that in English we call ‘development’, noting that in Indonesian there are at least two very different interpretations of its meaning, and with this, two very different words – perkembangan and pembangunan – connoting two very different teleologies and policy agendas: the former a natural, ‘organic’ process akin to flowers blooming (“software”); the latter an overt, intentional and ‘constructed’ political project of nation building (“hardware”). When translated into English, however, both perkembangan and pembangunan are typically rendered simply as “development,” thereby collapsing into a singular popular conception what in Indonesian discourse is a distinctly pluralist one. In the opening week of my class at the Kennedy School, which typically has 50 students who between them speak around 30 languages, we begin with a lively discussion of what “development” means in Arabic, Hindi, French, Turkish, Spanish, Swahili, Swedish… It turns out to mean all sorts of things.[2]

I open this way because I think the next article we need in this “genre” – though hopefully one that quickly transcends it because it is both highly insightful and highly cited! – is something akin to what Desai and Tapscott have begun with their ‘Tomayto Tomahto’ paper. In short, echoing Heryanto, we need more development research on development research. Such scholarship, however, would go beyond providing a mere chronology of changing professional styles, methodological emphases and funding characteristics (scale, sources, time horizons, expectations) to explanations of how and why such changes have occurred. Such explanations would be grounded in analyses of the shifting historical experiences and geo-political imperatives different generations of researchers have sought to accommodate, the particular ideas these experiences and imperatives rendered normative, and the concomitant gains and losses these changes have entailed for those finding themselves managing the “trade-offs” (such as they are) between scholarly independence and public utility.

The Politics of Results and Evidence in International Development: important new book

By Duncan Green

The results/value for money steamroller grinds on, with aid donors demanding more attention to measurement of impact. At first sight that’s a good thing – who could be against achieving results and knowing whether you’ve achieved them, right? Step forward Ros Eyben, Chris Roche, Irene Guijt and Cathy Shutt, who take a more sceptical look in a new book, The Politics of Results and Evidence in International Development, with a rather Delphic subtitle – ‘playing the game to change the rules?’

The book develops the themes of the ‘Big Push Forward’ conference in April 2014, and the topics covered in one of the best debates ever on this blog – Ros and Chris in the sceptics’ corner took on two gung-ho DFID bigwigs, Chris Whitty and Stefan Dercon.

The critics’ view is suggested by an opening poem, Counting Guts, written by P Lalitha Kumari after she attended a meeting about results in Bangalore; it includes the line ‘We need to break free of the python grip of mechanical measures.’

The book has chapters from assorted aid workers about the many negative practical and political consequences of implementing the results agenda, including one particularly harrowing account from a Palestinian Disabled People’s Organization that ‘became a stranger in our own project’ due to the demands of donors (the author’s Skype presentation was the highlight of the conference).

But what’s interesting is how the authors, and the book, have moved on from initial rejection to positive engagement. Maybe a snappier title would have been ‘Dancing with Pythons’. Irene Guijt’s concluding chapter sets out their thinking on "how those seeking to create or maintain space for transformational development can use the results and evidence agenda to better advantage, while minimising problematic consequences". Here’s how she summarizes the state of the debate:

"No one disputes the need to seek evidence and understand results. Everyone wants to see clear signs of less poverty, less inequity, less conflict and more sustainability, to understand what has made this possible. Development organizations increasingly seek to understand better what works for who and why – or why not. However, disputes arise around the power dynamics that determine who decides what gets measured, how and and why. The cases in this book bear witness to the experiences of development practitioners who have felt frustrated by the results and evidence protocols and practices that have constrained their ability to pursue transformational development. Such development seeks to change power relations and structures that create and reproduce inequality, injustice and the non-fulfillment of human rights.

Enforcing Accountability in Decision-Making

By Heather Lanthorn

A recent episode reminded us of why we began this series of posts, of which this is the last. We recently saw our guiding scenario for the series play out: a donor was funding a pilot project accompanied by a rigorous evaluation, which was intended to inform further funding decisions.

In this specific episode, a group of donors discussed an on-going pilot programme in Country X, part of which was evaluated using a randomized controlled trial. The full results and analyses were not yet in; the preliminary results, marginally significant, suggested that there ought to be a larger pilot, taking into account lessons learnt.

Along with X’s government, the donors decided to scale up. The donors secured a significant funding contribution from the Government of X — before the evaluation yielded results. Indeed, securing government funding for the scale-up and a few innovations in the operational model had already given this project a sort of superstar status in the eyes of both the donors and the government. It appeared the donors in question had committed to the government that the pilot would be scaled up before the results were in. Moreover, a little inquiry revealed that the donors did not have clear benchmarks or decision criteria going into the pilot about key impacts and magnitudes — that is, the types of evidence and results — that would inform whether to take the project forward.

There was evidence (at least, it was on the way) and there was a decision, but it was not clear how the two were linked or how one informed the other.
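To make the missing ingredient concrete: the ex-ante benchmarks the donors lacked could be as simple as a decision rule agreed and written down before any results arrive. The thresholds, categories and function below are invented for illustration, not drawn from the episode itself:

```python
# Hypothetical ex-ante decision rule, fixed before results are seen.

def scale_up_decision(effect: float, ci_lower: float,
                      min_effect: float = 0.10) -> str:
    """Scale up only if the estimated effect is at least the agreed
    minimum AND the confidence interval rules out smaller effects."""
    if ci_lower >= min_effect:
        return "scale up"
    if effect >= min_effect:
        return "extend pilot"  # promising but not yet conclusive
    return "do not scale"

# A marginally significant result like the one described would land in
# 'extend pilot' rather than trigger an immediate scale-up.
print(scale_up_decision(effect=0.12, ci_lower=0.01))
```

The specific numbers matter far less than the fact that the rule exists, and is public, before anyone sees the results.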

Allowing ‘Revisibility’ in Decision-Making

By Heather Lanthorn

Throughout this series of posts (1, 2, 3, 4), we have considered two main issues. First, how can evidence and evaluation be shaped to be more useful - that is, directly useable - in guiding decision-makers to initiate, modify, scale up or drop a program? Or, as recently pointed out by Jeff Hammer, how can we better evaluate opportunity costs between programs to aid decision-making? Second, given that evidence will always be only part of a policy or programmatic decision, how can we ensure that decisions are made (and are perceived to be made) fairly?

For such assurance, we rely primarily on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. If its four criteria are met, Daniels argues, the deliberative process gains legitimacy and, he further argues, decisions gain fairness and coherence over time.

The first two criteria set us up for the third: first, decision-makers agree ex ante to constrain themselves to relevant reasons (as determined by stakeholders) in deliberation and, second, to make public the grounds for a decision after the deliberation. These first two, we argue, can aid organizational learning and coherence in decision-making by setting and using precedent over time - an issue that has been bopping around the blogosphere this week.

Have Evidence, Will… Um, Erm (2 of 2)

By Heather Lanthorn

This is the second in a series of posts with Suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive its own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind remains unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the “quality” or “rigor” of evidence by considering how well it addresses the questions and purposes that matter for the decision at hand, using the most appropriate tools and timing for the particular political-economic moment, including the capacity of decision-makers to act on the evidence.

Giving the Poor What They Need, Not Just What We Have

By David Evans
Recently, this blog discussed a study on cinematic representations of development, highlighting notable films such as Slumdog Millionaire and City of God. Over the weekend, I was reminded that even forgettable films can underline key development lessons. In The Incredible Burt Wonderstone, a professional magician engages in international charity work. He explains, “I go to places where children have neither food nor clean water, and I give them magic,” as he passes out magic kits in an unidentified low-income rural community. A journalist asks, “Do you also give them food and clean water?” “Well, no. I’m a magician. I bring magic.” Later, his endeavor having failed, the magician returns to the United States and meets an old friend:

“What about the poor?”
“Turns out they didn’t want magic: They just wanted food and clean water.”
“Ugh. Fools!”
The Incredible Burt Wonderstone

What is the Evidence for Evidence-Based Policy Making? Pretty Thin, Actually

By Duncan Green

A recent conference in Nigeria considered the evidence that evidence-based policy-making actually, you know, exists. The conference report sets out its theory of change in a handy diagram – the major conference sessions are indicated in boxes.


‘There is a shortage of evidence on policy makers’ actual capacity to use research evidence and there is even less evidence on effective strategies to build policy makers’ capacity. Furthermore, many presentations highlighted the insidious effect of corruption on use of evidence in policy making processes.’

So What Do I Take Away from the Great Evidence Debate? Final Thoughts (for Now)

By Duncan Green

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog), is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and 20 pages of top-quality comments (I couldn’t face adding the Twitter traffic).

(For those of you that missed the wonk-war, we had an initial critique of the results agenda from Chris Roche and Rosalind Eyben, a take-no-prisoners response from Chris Whitty and Stefan Dercon, then a final salvo from Roche and Eyben + lots of comments and an online poll. Epic.)

On the debate itself, I had a strong sense that it was unhelpfully entrenched throughout – the two sides were largely talking past each other, accusing each other of ‘straw manism’ (with some justification) and lobbing in the odd cheap shot (my favourite, from Chris and Stefan: ‘Please complete the sentence “More biased research is better because…”’ – debaters take note). Commenter Marcus Jenal summed it up perfectly: