Evaluating the World Bank’s ICT activities: What IEG got right and wrong and what can be done in the future

By David McKenzie

The World Bank Group provided $4.2 billion in support to the ICT (information and communications technology) sector over 2003-2010, including 410 non-lending activities for ICT sector reform and capacity building in 91 countries. The World Bank’s Independent Evaluation Group (IEG) had the unenviable task of trying to answer whether all this activity has been relevant and effective. The recently released report has attracted attention from several blogs for the finding that only 30% of projects designed to increase access to underserved areas met their objectives. But I’d like to take a step back and examine exactly how the report tries to assess impact, critique these efforts, and suggest how it could be done better.

What does the report do well?

It provides a decent overview of what the World Bank has done in this area over the 2003-10 period. This is a useful stocktaking exercise, and one of the values of a big sector-wide evaluation like this – there is a clear setting out of the World Bank’s strategy for working on ICT issues, the different ways the World Bank, IFC and MIGA have operated in this area, and how the portfolio has evolved over time. There is also a good review of trends and developments in the sector over the same period, highlighting the striking growth in mobile technologies and other such developments.

There is also some process evaluation, based on reviewing project reports and interviewing TTLs (the operational task team leaders in charge of implementing projects), which provides insights on why projects were thought to work on some occasions and not others, and on the types of factors that might contribute to project success. This is surely useful to those planning such projects.

But it doesn’t tell us much at all about impact

The report notes that there are multiple difficulties in trying to evaluate the performance of these projects, including the fact that many of the lending operations are multi-sector, and that recording of indicators and data has been patchy, particularly at the IFC. I should also note that i) assessing large infrastructure and reform projects is very difficult even under the best of circumstances, and ii) coming in ex post and trying to say whether what others have done has worked, without much primary evidence or data to work with, is certainly not the best of circumstances. As I’ve discussed here before, I’m suspicious of what one can hope to achieve by evaluating portfolios of projects, and would place more emphasis on single-project evaluations. However, that’s not what IEG is asked to do, so the question is whether, given the nature of the task, the report assesses impact credibly. Unfortunately, I think in the case of this report there are several serious problems.

Almost all of the discussion lacks any reference to a counterfactual. Particularly egregious in this respect are before-after comparisons made during a period in which the ICT industry has been developing rapidly of its own accord. For example, the report states that the Bank supported regulatory reforms in Armenia and that “these reforms led to very fast growth in ICT services – mobile penetration grew from 10 percent in 2005 to 85 percent in 2009”. Other similar cases of “success” are asserted. Michael Clemens and Gabriel Demombynes illustrate the problems with such approaches well in this blog post.
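To see how misleading a before-after comparison can be, here is a minimal numerical sketch. The Armenia penetration figures are the ones cited in the report; the comparison-country figures are entirely hypothetical, invented only to show how a secular industry trend inflates the naive estimate:

```python
# Hypothetical illustration of why before-after comparisons mislead.
# Armenia's mobile penetration (figures cited in the report):
treated_2005, treated_2009 = 10.0, 85.0

# Suppose similar countries WITHOUT Bank-supported reforms grew from
# 10% to 60% over the same period (invented numbers for illustration).
counterfactual_2005, counterfactual_2009 = 10.0, 60.0

# The before-after estimate credits the reforms with ALL observed growth.
before_after = treated_2009 - treated_2005  # 75 percentage points

# Netting out the growth that would likely have happened anyway
# tells a very different story.
adjusted = before_after - (counterfactual_2009 - counterfactual_2005)  # 25 pp

print(f"Before-after estimate: {before_after:.0f} pp")
print(f"Counterfactual-adjusted estimate: {adjusted:.0f} pp")
```

The point is not that 25 percentage points is the right answer – the comparison countries are made up – but that the attribution claim depends entirely on a counterfactual the report never states.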

Success or failure is judged on whether projects achieve the objectives stated in the World Bank loan documents – again with complete disregard for counterfactuals. The report never actually gives examples of what these objectives are, but my guess, based on other Bank project results agreements I’ve seen, is that they are things like “increase the proportion of households with access to mobile phones from 10% to 40% over the 5 years of the project” or “new regulations to open up competition are implemented”. Of course, external events could lead to these objectives being achieved without the Bank’s involvement, or to the Bank having sizeable positive effects but still not meeting them. IEG needs to actually discuss what these objectives are, and work with TTLs to ensure that what is measured is the effect that can be ascribed to the Bank’s actions, not just whether some arbitrary target can be ticked off as completed. (World Bank TTLs reading this might also want to note in the comments the perverse incentives that this method of judging project success creates for their behavior.)

There are a couple of cases where difference-in-differences appears to be used – but without sufficient explanation, and relegated to a footnote. The report claims that in countries where the World Bank or IFC supported the ICT sector, mobile telephone penetration grew faster than in countries without such support. This is based on a panel data regression which is never actually shown, only described in a footnote. There is no discussion of the endogeneity of which countries decide to work with the World Bank, of which ones decide to implement policies, or of whether the control group of countries provides a good counterfactual for those aided by the Bank. Since no table of results is shown, one cannot even assess the magnitude of these effects, or carefully inspect the results. I understand that the style of this report is non-technical, but for credible analysis it is essential that an appendix carefully detail the analysis actually done and address such issues. The same applies to the claim that countries with World Bank support increased competition faster than countries without it.
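For readers unfamiliar with the mechanics, the simplest version of the estimator the footnote seems to describe can be sketched in a few lines (the countries and penetration figures below are invented for illustration). In a balanced two-period setting, the difference-in-differences estimate is just a difference of cell means – and it carries a causal interpretation only under the parallel-trends and non-selection assumptions the report never discusses:

```python
from statistics import mean

# Toy panel: (country, bank_supported, period, mobile penetration in %).
# All values invented purely for illustration.
panel = [
    ("A", True,  "pre", 8),  ("A", True,  "post", 70),
    ("B", True,  "pre", 12), ("B", True,  "post", 80),
    ("C", False, "pre", 9),  ("C", False, "post", 55),
    ("D", False, "pre", 11), ("D", False, "post", 61),
]

def cell_mean(supported, period):
    """Average penetration for one support-group/period cell."""
    return mean(y for _, s, p, y in panel if s == supported and p == period)

# Change among supported countries, minus change among unsupported ones.
did = ((cell_mean(True, "post") - cell_mean(True, "pre"))
       - (cell_mean(False, "post") - cell_mean(False, "pre")))

print(f"Diff-in-diff estimate: {did:.1f} percentage points")
```

Even in this stripped-down form, the estimate is only as good as the assumption that supported and unsupported countries would have trended in parallel absent the Bank – which is exactly why the endogeneity of who receives support, and a full table of results, need to appear in an appendix.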

I understand that trying to assess impact ex post, when poor records have been kept and no impact assessment has been done along the way, is a really hard thing to have to do. But it is surprising that the report doesn’t also fault projects for not doing proper monitoring and evaluation. After all, over 8 years and $4.2 billion in investment, surely a couple of million dollars spent on proper evaluation would have been worth it. The external expert panel, which includes Jenny Aker (one of the few researchers to have done serious work on this topic), politely notes that “the Bank Group should also be reminded to give greater emphasis to M&E inter alia to recognize in a timely manner and learn from its successes and failures; this would involve the definition of suitable indicators to measure the impact of ICT projects”.

Overall then, the IEG report is a useful read if you want a description of what the World Bank has been doing in the sector, and a prescription for what the authors of the report and management think it should be doing. However, it falls short of meeting its stated objective of answering whether the World Bank’s activities have been relevant or effective.

Going forward, here are my unsolicited recommendations to IEG and others tasked with the thankless job of trying to do these multi-project large assessments:

·         Evaluation involves counterfactuals – take this seriously. It is going to be hard to form adequate counterfactuals for some of these projects, but there needs to be a discussion of the assumptions, theories, and data used to try to assess impact.

·         Show your work – if you have interesting results, we want to see how you got them. It is important to have technical appendices explaining the details of impact evaluation, even if your main text is designed for less technical readers.

·         Judging projects on whether they meet stated objectives is neither a necessary nor a sufficient condition for them being useful or having important impacts – it is worth thinking about whether it makes sense to move away from this method of evaluation, and toward incentivizing project managers to take more credible steps to monitor and evaluate their projects.

·         Make it clearer to the reader how hard your job is – in such reports, I think it is important to set out clearly the limits of what you can say, and to recommend more attention to prospective evaluations going forward.

Comments

Submitted by david phhillips on
bravo - some heartening wisdom here - the basic point that has to be understood without the endless complications of sampling, matching, interference, contamination etc. I.e. common sense and adaptation to circumstances

Submitted by Alexis Diamond on
David makes an insightful point when he says that "Judging projects on whether they meet stated objectives is neither a necessary or a sufficient condition for them being useful or having important impacts – it is worth thinking about whether it makes sense to move away from this method of evaluation, and incentivizing project managers for taking more credible steps to monitor and evaluate their projects." While it is true that a project that does not meet objectives may still achieve useful and/or important impacts, I would add that objectives do still have a unique and important role to play in the context of evaluation and assessment--after all, it's still useful and important to consider whether and to what extent projects can credibly claim to have achieved their objectives.

When considering whether and to what extent projects can credibly claim to have achieved their objectives, I agree with David that one should not just compare objectives to results and declare victory (Mission Accomplished!) without an attempt to discuss counterfactuals. I think that sometimes (perhaps often), when such a discussion is lacking, this is not due to lack of data or information or imagination--in many cases there is probably a counterfactual story that could be told and would hang together, and even enable some rough description of confidence intervals or bounds on the estimated causal effects.

In many such cases, I think the underlying problem is that counterfactual inference does not come naturally to people unless they have received specific training, and so training may be part of the answer. What does come more naturally to people is historical narrative description that uses facts and evidence to trace the processes of cause and effect in a detailed manner. At a minimum, to be credible, such a narrative should be enhanced with the kind of counterfactual story described above.
Indeed, this is what IFC requires in its Project Completion Reports (PCRs) for Advisory Services projects.