Having just returned from Dartmouth and meetings with the Center for Health Care Delivery Science, I’ve been thinking about the phrase “Delivery Science.” World Bank Group President Jim Yong Kim has used the term in recent speeches to mean applying evidence-based experimentation to improve health, education, water, and basic service outcomes for the poor in the developing world.
Reflecting on this, I think “science” and “delivery” are in many ways distinct and need to be understood as different but mutually reinforcing principles. So let’s break it down.
Building a stronger evidence base on implementation: the ‘science’
The science part is about getting empirical and transparent about what works, what fails, and the reasons behind both. Do we even understand why things work when they do, and can we consistently deliver basic goods and services to poor citizens in predictable ways at comparable costs? To do this, we need data: empirical, experiential, and lots of it.
The Dartmouth Atlas of Health Care comes to mind. Built on decades of disciplined, data-driven research, it shows how many doctors practice across the United States, the disease burden on the population in particular areas, and how much we spend to treat specific conditions. Once we have the data, we can ask important questions: Are improved health outcomes positively correlated with expenditure, the number of doctors, or something else? Are patient preferences considered in deciding which treatments are used? Do we have comparable data against which to test these questions? The short answer is often no, and without it we can’t develop a science to drive improvements in delivery.
Delivery is ‘art’ as much as science
But it’s not enough to develop know-how. The hardest problems in the world (water for the urban poor, jobs for youth, or mitigating climate change) are as much about do-how as science. Wicked problems are never purely technical and generally involve many moving pieces.
It reminds me of the challenge of improving performance in sport. We can assemble data and run the analytics, but what about coaching, practice, and continuous improvement? How much of greatness in delivery is about translating know-how into do-how and how much of that is inspiration and motivation combined with good data and disciplined practice?
Coaches, practice, and feedback loops
Michael Barber, who ran Tony Blair’s famous public service delivery unit, talks about focusing on a few things that matter, developing routines, collecting data, measuring results, learning fast, and iterating. Eric Ries, the well-known start-up entrepreneur, writes about a “lean start-up” method characterized by disciplined, data-driven experimentation, iteration, a strong focus on learning, and rapid cycle times. Lant Pritchett, Michael Woolcock, and Matt Andrews use the language of problem-driven iterative adaptation to make a similar point about experimentation, iteration, and learning from practice to solve problems.
Malcolm Gladwell describes talent as the desire to practice: master practitioners spend no less than 10,000 hours honing their skills, and professional athletes and musicians use coaches to keep improving. So how do we translate evidence, knowledge, and data into better results in the delivery of public services?
These insights come from very different places but are connected. They converge on the importance of running data-driven experiments with a clear hypothesis, a relentless focus on results, rapid feedback loops, and iteration. And no matter how good a technical solution may appear, execution is always harder than it looks; it requires adaptive leadership and the strengths of multi-skilled teams.
And enabling conditions also matter. The World Bank has a project cycle (scoping, project preparation, loan approval, and evaluation) that is not always aligned with the problems we’re trying to solve (call it the problem cycle). It can take us several years to prepare a project, several more to implement it, and a year or more to evaluate it. Meanwhile, the world has moved on: problems mutate, and practitioners need real-time data to learn as they do and to respond to shifting client priorities. There is value in conducting disciplined experiments, but also value in real-time learning and adaptive iteration. What would it take to do both?