June 30 marks the end of the fiscal year at the World Bank, and an annual reminder of the stark irony of working in a bank that does not let you save – money is allocated to a particular fiscal year, and if not spent during this time, disappears into a vortex where it is reallocated elsewhere in the institution. This problem is not unique to the World Bank - last week’s Science news had an article reporting on the findings of a blue-ribbon panel of business leaders, university presidents, and prominent scientists who were asked to come up with a list of top steps to maintain research excellence. The headline is “More money, fewer rules are the way to stay ahead of the pack”, and refers to “excessive regulatory burdens that put a drag on the efficiency of all university research”. In that spirit, I thought I’d cry out to donors and administrators to note some of the ways that current research funding practices have negative impacts on how impact evaluations are done.
Before beginning, I want to make two points clear. First, I don’t want to seem ungrateful – as researchers we are deeply grateful for the money that various organizations put forward to fund research, and we understand that such funding is unlikely to come without strings attached. Second, I recognize that there were reasons that many of the rules I am about to complain about were put in place – but just as Doing Business aims to simplify excessive rules that make it difficult for firms to do their work properly, I think that many of these rules are either outdated, or were adopted without considering that their costs might outweigh any benefits they bring, and thus make it difficult for us to do our work as well as we would like. So with that noted, here’s my wish-list of rules that need reforming:
1. Inflexible dates and windows for spending money: Impact evaluations built around the implementation of policy are, in large part, at the mercy of the timetable of the policies being evaluated. When these policies are large government programs, they can be subject to multiple delays. Moreover, even when implementation does take place, there needs to be time for the policies to show effects before measuring them, and then potential subsequent delays in carrying out surveys and doing a thorough job of analyzing the data. Given these conditions, funding impact evaluations on 1-2 year cycles when projects might last 3-7 years entails many risks and problems. I can understand the need for progress reports and concerns about researchers holding onto funds for years for projects that never materialize, but the optimal solution is surely not inflexibility in the other direction. Deadlines can be useful commitment devices, but only when the ability to meet them is under the control of the actor the deadline is being imposed upon. Ending the fiscal year rules at the World Bank would be my number one request for internal change.
2. End the anti-capital-purchase fixation: Many donors, and especially the World Bank, use very blunt regulations to deal with concerns about buying capital equipment. Fieldwork for impact evaluations often requires specialized hardware (e.g. GPS units, measuring boards and scales for anthropometrics, handheld PDAs for electronic data collection, etc.). This equipment often depreciates quickly when exposed to field conditions and rapidly changing technology, and rental markets are quite thin. But trying to buy, say, $5,000 of hardware that in a year or two will be worth at most $1,500 is much, much harder bureaucratically than contracting out $100,000 of survey services. A little perspective please… More generally, don’t micromanage how we spend money – costs change all the time in the field, and researchers need flexibility to move money from one budget item to another without having to jump through hoops.
3. Tying funding so tightly to country, sector, and cross-cutting issue: Naturally donors and funders have interests in particular areas (e.g. malaria prevention, or microfinance, or growth), particular countries (increasingly the poorest countries in Africa and post-conflict countries), and particular cross-cutting issues (gender, the environment, etc.). Funding is one lever to encourage more work on these topics. But I am concerned about the number of funding opportunities which target funding very narrowly and try to hit all three at once. For example, if you want to learn about the effectiveness of policies which try to upgrade the technology of medium and large firms, the best opportunity to test this effectively might be a large middle-income country with thousands of firms – whereas saying we are only interested in seeing this in post-conflict countries for firms run by women may result in very few research opportunities, and definitely lower the quality of the pool of projects applying. While impact evaluations do generate a lot of country-specific knowledge, the hope is also that they are global public goods, teaching us something more general. Moreover, my view is that it is precisely middle-income countries where high-quality research and impact evaluations are among the most important products institutions like the World Bank can offer – aid funding is less important for such countries, and as they get closer to the world production frontier, fewer of the low-hanging fruit in terms of policy actions are available – most kids are getting vaccinated and going to school, there is a financial sector of sorts, etc. – and so testing out policies which allow the country to improve further becomes more valuable.
4. One-at-a-time project-specific funding (and reporting): Building evaluations around large-scale government policies (or even interventions by NGOs and researchers) is fraught with a large number of implementation issues. I think these evaluations should be thought of more like a venture capital model – with perhaps 2 or 3 out of 10 succeeding very well, another 2 or 3 experiencing problems but still delivering some lessons, and the remainder falling through completely. In such a case I would like to see more funding of block grants to researchers or consortia of researchers, where donors buy into a portfolio – with researchers then promising to deliver say 5 or 6 projects out of 10 proposed, having the ability to move funding around easily within this portfolio, and giving a single report on the portfolio rather than 10 separate reports.
5. Finally, let’s stop asking for final reports right after the funding window closes: The assumption that a project is complete once all the money is spent might serve well for some operational projects, but is a bad assumption for impact evaluations. These are often long-term projects, with multiple donors contributing funding to the evaluation over a longer time cycle than any single donor is willing to fund (see point 1). Moreover, even once all the data are collected, it takes time to write up the results, present them in seminars, disseminate the work to policymakers, publish it in journals, etc. So asking for the “impact” of impact evaluation funding too soon risks missing where most of the results and action are. Asking for a copy of a paper or summary note when a draft is available, and then following up 2 years after funding has ended, would yield better results.
Readers, if you could choose one regulation or rule that donors/funders put in place that you would like to change, what would it be? Donors, am I way off base here? Administrators in charge of arcane rules, any hope of change?