
Monitoring and Evaluation

Getting Evaluation Right: A Five Point Plan


Final (for now) evaluationtastic installment on Oxfam’s attempts to do public, warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation at the International Initiative for Impact Evaluation (3ie).

Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for rigor in their evidence collection, and worse, they hardly ever publish negative evaluations. The internal wrangling and pressure not to publish these must have been intense:

  • ‘What will our donors say? How will we justify poor results to our funders and contributors?’
  • ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
  • ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
  • ‘Our field managers will get upset, angry and discouraged when they read these.’
  • ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
  • ‘There are so many nuances on the ground. Detractors will misuse these scores and ignore these ground realities.’

The zeitgeist may indeed be transparency, but few organizations are actually doing it.

More on Indices: Evaluating the Evaluators


Building partly on a previous post on the value of indices, this week I'm highlighting a new edited volume published by Peter Lang Press, entitled Measures of Press Freedom and Media Contributions to Development: Evaluating the Evaluators. This rich and informative collection of essays, edited by Monroe Price, Susan Abbott and Libby Morgan, shines a spotlight on well-known indices of press freedom and media independence, raising valuable questions about what the indices measure, what they do not, and the link between assistance to independent media and democratization. I've contributed a chapter to this volume, as have expert colleagues such as Guobin Yang, Andrew Puddephat, Lee Becker and Tudor Vlad, Craig LaMay, fellow CommGAP blogger Silvio Waisbord, and many others.

Communication and the Results Agenda


The newly launched IEG Annual Review of Development Effectiveness 2009 credits the World Bank with a significant increase in development effectiveness from fiscal 2007 to fiscal 2008. After a somewhat disappointing result last year, 81% of the development projects that closed in fiscal 2008 were rated satisfactory on the extent to which each operation's major relevant objectives were achieved efficiently.

One crux remains: the measurement of impact. Monitoring and evaluation components in development projects are far less common than IEG would wish: two-thirds of the projects in 2008 had marginal or negligible M&E components. Isabel Guerrero, World Bank Vice President for the South Asia Region, listed several reasons at the launch of the IEG report this week: the lack of integrative indicators, the Bank's tradition of measuring outputs instead of outcomes, the lack of baseline assessments in most projects, and clients' reluctance to implement M&E in their projects.