Performance budgeting (PB) has a deep and enduring appeal. What government would not want to allocate resources in a way that fosters efficiency, effectiveness, transparency, and accountability? However, such aspirations have proven poor predictors of how performance data are actually used.
The potential benefits of identifying and tracking the goals of public spending are undeniable, but have often justified a default adoption of overly complex systems of questionable use. Faith in PB is sustained by a willingness to forget past negative experiences and assume that this time it will be different. Without a significant re-evaluation, PB’s history of disappointment seems likely also to be its future.
A next-generation approach to PB should acknowledge that the transaction costs of such systems are significant and that they often produce a checklist mentality. A more targeted approach can both reduce administrative costs and make performance data more useful. It requires clear and realistic objectives for performance budgeting, and systematic differentiation between ministries and programs that merit a substantial performance focus and those where a lighter regime is appropriate. Attention also needs to shift from creating a new set of rules and formats for performance budgeting to instilling routines whereby performance information is used regularly as part of program management, thereby fostering a “performance culture” in government.
We developed these insights through a World Bank study responding to sustained interest among budget officials in the Eastern Europe and Central Asia (ECA) region in learning more about the experiences of OECD countries and other governments in implementing PB. Rather than learning about theory and best practice, budget officials wanted to know more about the practical challenges, and how countries had adapted their approaches to their own context. Many countries in the region have started the move to PB, but most have made only limited progress.
The countries selected as case studies (Australia, Estonia, France, the Netherlands, Poland, Russia, and the U.S.A.) have been using PB for varying lengths of time. Australia, France, the Netherlands, and the United States have revised their approaches over decades, whereas Poland, Estonia, and Russia are relatively recent adopters. The full study, including the individual case studies, can be downloaded for free here, but our key lessons for practice are set out below.
Measure key strategic goals, but not everything
A common tendency when countries introduce PB is to create a complex architecture of programs, subprograms, and activities, leading to a profusion of performance indicators. Countries with the most experience with PB have steadily reduced the number of programs and indicators over time. For example, both France and the Netherlands cut their number of performance indicators in the budget by about half. This reflects both the administrative burden of reporting and the limited time senior managers have to monitor performance.
Know why you are doing it
Because PB is not well-defined, clear objectives will help to guide its design, manage expectations, and increase the chances of success. The objectives for PB should take into account the administrative culture of the country and how civil servants are likely to interpret and respond to such initiatives. Our findings echo other research suggesting that performance data are rarely, if ever, the driving factor in resource allocation, and that program managers are the most likely users of the data.
Recognize the capacity required
It is easy for governments and donors to underestimate the transaction costs of PB for both the central budget authority and line ministries, so it is important to ask which parts of government, if any, have the capacity to absorb such costs. Budget analysts typically need new skills to deal with program structures, performance indicators, and the costing of programs. Analyzing performance data and conducting program evaluations also require dedicated capacity.
If capacity constraints are significant, governments should consider staged or partial approaches to PB. Examples include piloting in a few priority ministries or programs, excluding fixed and semi-fixed costs from the performance budget, and limiting the objective to a simple presentation of the budget in programmatic form, with no attempt to link performance data closely to budget allocations.
Focus on learning and cultural change
Performance budgets are typically built with a great deal of effort, only to be ignored until year-end. For performance data to become relevant, managers need to review and discuss performance reports periodically throughout the year so that necessary corrections can be made promptly. The budget preparation process may be too compressed a venue for discussing performance metrics.
Other tools are better suited to re-aligning expenditures and evaluating spending efficiency and effectiveness, among them program evaluation, expenditure reviews, and management learning forums. PB is more likely to succeed when it is part of a broad-based, long-term government effort to build a more performance-oriented culture, rather than an isolated reform promoted by the central budget authority.
Adapt over time
Governments are too quick to abandon rather than adapt past efforts, and too uncritical of the claimed success stories of others. An “adaptive” approach acknowledges when problems are occurring and avoids dogged implementation of a system that lacks buy-in from users. If PB is to establish credibility, some continuity is vital: changes in how public officials use performance data will not take root in the course of a single administration. The success of PB depends on winning the hearts and minds of public officials, not just changing how they report information. That requires both pragmatic persistence and a long-term perspective.
This blog post was first published on the IMF blog.