In two posts (post one and post two) over the last month, Adam has tried to get hold of the science – or “sciences” – of delivery. He boils them down to the idea that “the world has invested too much in what to deliver and too little in how to deliver it,” as a result of which millions of people are not reached and fail to benefit from development projects. Adam argues that this message is in fact consistent with “much of the everyday work of the World Bank.” We just need to do better at “managing, synthesizing and disseminating global and local knowledge,” and at building that knowledge into our operations. By contrast, the deliverology approach of Michael Barber does focus on what happens after the policy is written, but it fails to be systematic about problem-solving or self-evaluation.
It may be true that some people at the Bank have been discussing these issues for several years now, but we think Adam’s formulation may downplay the scope of the problem. As a result, it risks missing the real urgency of developing a “science of delivery.”
Over the last few decades hundreds of millions of people worldwide have been vaulted out of poverty, but the contribution of development practitioners to that progress is very uncertain. It’s simply not obvious that the sort of development assistance we are trained and paid to do is able to fix major development problems.
The problem is not specific to the Bank—it is faced by the entire machinery of development assistance. As a group, we have shown ourselves adept at times at fixing specific and local development problems. But aside from a few dramatic exceptions (the Marshall Plan, the Green Revolution, smallpox, Korea, HIV treatment), we don’t have a good record taking results to scale.
One way of putting it is that we have trouble joining the large and the small. We don’t know how to design individual projects or engagements in a way that leads to really large-scale results and, conversely, having come to consensus on large-scale solutions, we’re not great at identifying the specific interventions that would make a difference. This is, we think, the gap that the “science of delivery” proposes to fill. It may not really matter whether you use the term “science” in the modern sense of a systematic way to build and organize knowledge through testable explanations and predictions, or just in the earlier meaning of ‘a body of reliable knowledge.’
Either way, ‘science’ is what gets us from small to large and back again. The “science of delivery” is just a shorthand way of saying that, as development practitioners, we need national- and regional-level diagnostics that describe the roots of development problems in a tractable way, and we need knowledge about how to seize opportunities to do effective projects that contribute to big results.
What are we doing now, given that we don’t have the ‘science’ (or whatever it is)?
In the world of governance and public sector management, we have waged an unremitting campaign against standardized “best practices” (‘Do this and you will automatically see that result’). The well-rehearsed argument against them is that they ignore political, institutional, and cultural context and generally don’t end up working the way we think they will. Over the last few years we have swung 180 degrees away from so-called “best practices” towards more agnostic project designs that target specific local results and offer only tentative institutional recommendations, accompanied by a commitment to keep iterating the design until we find what works in any particular setting.
At the project level, this is a practical strategy for effective governance and public-sector work, one that enables us to get by amid considerable uncertainty. Even though we don’t really know what works when, where, and under what circumstances, we can still try to solve problems in increments.
Agnosticism about institutions allows us to focus on the context and gives us the chance to be iterative. We can keep an eye on the final target while trying various ways to get there. But agnosticism doesn’t help us with the fact that the knowledge base in development is not being deployed very effectively and is rarely available where we need it most: at the point of delivery. At the country level we know how development outcomes have changed over time, and we can likely suggest targets that are feasible but stretching – and indeed this is what we hope to be doing in relation to the new poverty and shared prosperity targets.
Our science-free discipline means that if we say country X should, given comparator countries, be able to achieve development outcome Y in period Z, then we may know this is true in principle, but we do not know how to advise the country on how to do it. In sum, we can iterate our way through at the micro level, but we don’t know how to scale up. We can make a big pitch at the macro, country level, but we don’t know what to advise at the level of the detailed inputs necessary to get there.
This disconnect between the big and the small is what the “science of delivery” is meant to fix.
Bigger than the Bank
So what is the “science of delivery”? At some level it is simply a convention. Practitioners must agree on what qualifies as “data”: what the essential categories of knowledge are, and thus how that knowledge can be captured and shared.
At another level, the science of delivery is a social network dedicated to the business of gathering observations, relating them, and in the process joining big and little, one and many. Of all the world’s development agencies, the Bank may be uniquely well suited to coordinate this political, social and intellectual enterprise – no other organization has the global reach needed.
We would propose three basic principles that distinguish a science of delivery from a simple commitment to improving implementation. First, “Every Delivery is Data.” This demands that we focus on the small. We need to collect information about local effectiveness and about the preconditions for success or failure. Second, “Learning Happens Through Collaboration.” This demands a focus on the large. We need to accumulate data from many individual settings and use them to construct standard taxonomies of project-level indicators – the equivalent of “diagnosis-related groups.” Third, “Evidence Can Be Deployed Urgently.” This demands that the large and the small be united, through rapid feedback and decision support. We want practitioners to have evidence of what makes for effectiveness and sustainability as quickly as possible.
These ideas have arisen at a time of significant change for the Bank – and those changes will, we all hope, lead to a Bank that acts more scientifically and hence is even more effective. But the significance of the endeavor is that it is much bigger than the Bank. If the “science of delivery” is equated with the changes in the Bank, then we are missing much of the point. The idea, we think, is to change how development gets done.