
What Is Science and What Is Delivery?


Having just returned from Dartmouth and meetings with the Center for Health Care Delivery Science, I’ve been thinking about the phrase “Delivery Science.” World Bank Group President Jim Yong Kim’s use of the term in recent speeches relates to using evidence-based experimentation to improve poor outcomes in health, education, water, and other basic services in the developing world.

Reflecting on this, I think “science” and “delivery” are in many ways distinct and need to be understood as different but mutually reinforcing principles. So let’s break it down.

Building a stronger evidence base on implementation: the ‘science’

The science part is about getting empirical and transparent about what works, what fails, and the reasons behind both. Do we even understand why things work when they do, and can we consistently deliver basic goods and services to poor citizens in predictable ways at comparable costs? To do this, we need data: empirical, experiential, and lots of it.

The Dartmouth Atlas of Health Care comes to mind. Based on disciplined, data-driven measurement, we can see how many doctors practice across the United States, the disease burden on the population in particular areas, and how much we spend to treat specific conditions. Once we have the data, we can ask important questions such as: Are improved health outcomes positively correlated with expenditure, the number of doctors, or something else? Are patient preferences considered in deciding which treatments are used? Do we have comparable data against which to ask important questions? The short answer is often no, and without it we can’t develop a science to drive improvements in delivery.
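As a minimal sketch of the kind of question such comparable data lets us ask, consider the expenditure-outcomes correlation. The figures below are hypothetical region-level numbers invented purely for illustration, not real atlas data:

```python
import numpy as np

# Hypothetical region-level figures, invented purely for illustration:
# per-capita health spending (USD) and a composite outcome score.
spending = np.array([4200, 5100, 3800, 6000, 4700, 5500, 3900, 6200])
outcomes = np.array([71.0, 70.5, 72.1, 70.2, 71.8, 69.9, 72.4, 70.0])

# Pearson correlation: do regions that spend more see better outcomes?
r = np.corrcoef(spending, outcomes)[0, 1]
print(f"correlation between spending and outcomes: r = {r:.2f}")
# A weak or negative r would suggest that spending alone does not
# explain outcome differences, which is exactly the kind of question
# comparable, atlas-style data makes it possible to ask.
```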

Delivery is as much ‘art’ as ‘science’

But it’s not enough to develop know-how. The hardest problems in the world (water for the urban poor, jobs for youth, or mitigating climate change) are as much about do-how as science. Wicked problems are never purely technical and generally involve many moving pieces.

It reminds me of the challenge of improving performance in sport. We can assemble data and run the analytics, but what about coaching, practice, and continuous improvement? How much of greatness in delivery is about translating know-how into do-how and how much of that is inspiration and motivation combined with good data and disciplined practice?

Coaches, practice, and feedback loops

Michael Barber, from Tony Blair’s famous public service delivery unit, talks about focusing on a few things that matter, developing routines, collecting data, measuring results, learning fast, and iterating. Eric Ries, the well-known start-up entrepreneur, writes about a “lean start-up” method characterized by disciplined, data-driven experimentation, iteration, a strong focus on learning, and rapid cycle times. Lant Pritchett, Michael Woolcock, and Matt Andrews use the language of problem-driven iterative adaptation to convey a similar point about experimentation, iteration, and learning from practice to solve problems.

Malcolm Gladwell describes talent as the desire to practice. Master practitioners spend no less than 10,000 hours honing their skills. Professional athletes and musicians use coaches to keep improving and refining their skills. So how do we translate evidence, knowledge, and data into better results in the delivery of public services?

These insights come from very different places but are connected. They converge around the importance of running data-driven experiments with a clear hypothesis, relentless focus on results, rapid feedback loops, and iteration. And no matter how good a technical solution may appear, execution is always harder than it looks and requires adaptive leadership and drawing on the strengths of multi-skilled teams.

And enabling conditions also matter. The World Bank has a project cycle (scoping, project preparation, loan approval, and evaluation) which is not always aligned with the problems we’re trying to solve (call it the problem cycle). It can take us several years to prepare a project, several more to implement it, and then a year or more to evaluate it. Meanwhile, the world has moved on: problems mutate, and practitioners need real-time data to learn as they do and to respond to shifting client priorities. There is value in conducting disciplined experiments, but also value in real-time learning and adaptive iteration. What would it take to do both?

Comments

Submitted by Ed Campos

The scientific method is fundamentally about establishing a hypothesis about a given intervention and testing it through quantifiable means. One very good example comes from the High Value Health Care Collaborative in the US. The Collaborative, which comprises premier research hospitals and health care organizations, proposed to test the efficacy of shared decision making in health treatments: for any given treatment, the doctor would engage the patient in a deep discussion about what the patient really wants and they jointly arrive at a decision on the application of the treatment. The Collaborative has tested this decision making method on various types of treatments, e.g. knee replacement surgery. One indicator it has used to demonstrate the efficacy of the method in terms of bringing down costs is the reduction in the length of stay of patients in the hospital for a given treatment. For knee replacement surgery for instance, the number of days declined significantly across health care regions in the United States. Using this type of evidence, they have been able to argue that shared decision making, which is basically an approach to address the asymmetry of information, is an intervention that improves the delivery of health care.

Shared decision making is an implementation intervention as opposed to a policy intervention. Improving delivery means improving implementation. Implementation in turn requires experimenting with possible interventions, and to learn whether an implementation intervention works or not, one must have indicators and data. In the science of delivery, the “science” part is basically about rigorous hypothesis testing of implementation interventions.
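As a minimal sketch of what such hypothesis testing might look like, here is an illustration in Python. The patient figures and the choice of Welch's t-test on length of stay are assumptions made purely for illustration; the Collaborative's actual methodology is not described here.

```python
from scipy import stats

# Hypothetical lengths of stay (days) for knee replacement patients;
# a real study would need randomisation, case-mix controls, etc.
usual_care      = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 6.2, 5.3]
shared_decision = [4.2, 4.5, 3.9, 4.8, 4.1, 4.4, 3.8, 4.6]

# H0: mean length of stay is the same under both approaches.
# Welch's t-test avoids assuming equal variances between groups.
t_stat, p_value = stats.ttest_ind(usual_care, shared_decision, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the intervention is associated with shorter stays.")
else:
    print("No significant difference detected.")
```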

Submitted by Cecile Fruman

Great blog, Aleem. You have captured very well the ingredients of the Art and the Science of Delivery. The Knowledge and Solutions Change Team has been hard at work unpacking what it would take for the WBG to operate more along the lines of what you describe. One of the main insights is that we need more flexible instruments, moving away from the current project cycle, to allow us to engage with the client to diagnose problems, experiment, learn, course-correct, implement, and deliver multi-sectoral solutions. This will require incentives geared towards rewarding problem solving rather than taking projects to the Board, and a different way of engaging with management and the Board. If anyone reading this blog is interested in working with us to flesh out these ideas, please let me know!

Submitted by Aaron

Great post, Aleem. Two things stand out for me.
1) We need to focus on the 'do-how'.
2) The difference between the "problem cycle" and the "project cycle".

Do-how comes from experience and is difficult to transfer. We would benefit from spending more time doing, at all levels, and less time talking about it. Two concrete actions we could take are to involve more junior staff earlier in project preparation AND implementation, and to invest in 'experience transfer programs' drawing on ready-to-retire and recently retired senior Bank project leaders and managers. These are our coaches!

On the second point, a 3-5 year project comprises dozens of problems, each of which has to be addressed for the project to succeed. Project management, operational governance, and systems should account for this reality: design should involve more listening to and observation of the intended clients of a given service; procurement should be streamlined; and reporting should focus on "what didn't work?" and "what did?" according to the data, so that over the course of 3-5 years we are able to try and try and try again instead of investing in one big try that is as likely to fail as to succeed.

Submitted by Ralph

It seems to me that the World Bank is trying to address the same old problem (lack of adequate knowledge management, or of timely learning from operations) under a new name (the need to apply the science of delivery). Regardless of the name, it has clearly been a challenging undertaking, and it will likely continue to be so unless strong focus and resources are applied to implementation (practice) rather than to theories. Academia and the private sector are good sources of innovation in how organizations can tackle this challenge; innovation is generally the outcome of well-managed iterative adaptation. One clear driver has been the realization that it makes more sense (in both ethical and economic terms) to learn in a timely way and improve continuously throughout the project cycle, from its very beginning, than to rely on rigidly framed evaluations carried out once projects have been completed. Perhaps it's time for the World Bank Group, including the IFC (where very limited focus is placed on learning), to consider integrating a solid knowledge and learning management component into each operation. It would also need a mechanism to ensure that rigorous capture, validation, and dissemination of evidence-based practices happen in a timely manner.

Submitted by Quinti

Thinking about science and delivery leads us to consider the relationship between science and society (science and societal actors). A key instrument for this purpose is the theory of the socialisation of scientific and technological research (STR), with its applications and possible developments. One assumption from which this theory starts is that scientific research is currently perceived as not being fully integrated in society. This is probably connected with the difficulty STR actors have in facing up to some profound changes that have occurred in recent decades. Such changes concern:
- the societies (including LDCs), in which knowledge becomes a crucial factor and in which individuals and groups are gaining more and more weight;
- the ways in which science is produced, which in turn increases the demand for a better contextualisation of STR with regard to different human realities, as well as for a greater application of research results in terms of innovation;
- the rising importance of actors (public, private, and non-profit) external to the scientific "establishment", but who have an increasing role in orienting research and its products.
In this new context, which highly industrialised countries, emerging economies and developing countries all share to a certain extent, STR, despite its centrality for economic and social development, is often called into question and tends to be perceived as a sort of foreign body with respect to society. Hence we may argue that, faced with the challenge of better integration with society, STR is involved in two types of social processes already under way:
- adaptation, to the features, needs and expectations of society and of its members;
- identity, that is, STR acquiring greater control over itself and over the social dynamics (political, cultural, organisational, or communicative) increasingly embedded in research.
STR socialisation thus has to be regarded as a composite and multidirectional process. To study it, a constructionist approach has to be followed, identifying the areas in which actors involved in STR construct the relationship between science, technology and society, on both the "adaptation" side and the "identity" side. Seven areas have to be taken into account:
- Scientific practice (the dynamics of scientific groups in the strict sense);
- Scientific mediation (activities aimed at promoting and facilitating productive cooperation among researchers and other key actors inside and outside their research institutions);
- Scientific communication (as an instrument not only to inform or to foster dialogue, but also to build a higher and more widespread sense of responsibility for research among the different actors);
- Evaluation (practices, programmes and measures aimed at ensuring accountability in the research world, designing policies and coordinating the allocation of funds);
- Innovation (the interactions between research, economics and societal needs);
- Governance (a set of structures and processes for collective decision making, involving both governmental and non-governmental actors);
- Gender (science as an unfriendly environment for women, owing in part to a hidden structure of discrimination, the persistence of gender stereotypes identifying science and technology with masculinity, and the underrepresentation of women in scientific leadership).
Taken together, these areas represent a taxonomy of the domains in which STR socialisation processes can take place, as processes of construction of the relationship between science and society, making it possible to detect phenomena that would otherwise risk being overlooked or not grasped in all their relevance. First, we can identify "structural" phenomena, related to the existing social structures actors cope with (such as social norms, behavioural models, social roles, and values), which can either hinder or facilitate their action. But we can also identify "agential" phenomena, referring to the actors and their agency, i.e. their orientation to modify reality, which can be translated into specific practices.

Submitted by Philippa

Owen Barder, in his talk on Development and Complexity (http://international.cgdev.org/doc/CGDPresentations/complexity/player.html), provides a great example of the link between science and art in delivery, and of the power of iterative adaptation when facing complex problems:
Steve Jones, a now-famous evolutionary biologist, was asked to improve the shape of a nozzle used for making soap powder, as Unilever’s scientists did not seem to be getting very far with attempting to calculate the optimal shape from the theory of non-linear fluid dynamics.
Steve Jones tried an empirical approach: he produced 10 randomly distorted copies of the existing nozzle, tested them, and then produced another 10 randomly distorted copies of the one that had proven most efficient. The nozzle resulting from 45 generations of randomly distorted copies was, according to Barder, “hundreds of times more efficient” than the original. The example is in “Section 3, making a nozzle” of the video mentioned above.
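The procedure Barder describes is essentially a simple evolutionary search: randomly mutate the current best design, keep the fittest of each batch, and repeat. A minimal sketch in Python, with a made-up fitness function standing in for the physical nozzle test:

```python
import random

def fitness(design):
    """Hypothetical stand-in for physically testing a nozzle design.
    In the Unilever story this step was a real-world experiment."""
    return -sum((x - 0.7) ** 2 for x in design)  # arbitrary optimum

def mutate(design, scale=0.1):
    """Return a randomly distorted copy of a design."""
    return [x + random.gauss(0, scale) for x in design]

# Start from the existing design; each generation produces 10 random
# variants of the current best and keeps the top performer, for 45
# generations, mirroring the procedure Barder describes.
best = [0.5] * 8  # arbitrary parameter vector describing the shape
for generation in range(45):
    variants = [mutate(best) for _ in range(10)]
    best = max(variants, key=fitness)

print(f"final fitness: {fitness(best):.4f}")
```

No theory of fluid dynamics appears anywhere in the loop: selection plus variation does the work, which is the point of the example.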
My question would be whether the WB could embrace such mechanisms by moving away from a fixed project setting towards an adaptive approach. Two major issues seem to arise: 1) measurement: if projects are adaptive, with respect to what are they evaluated? 2) incentives: if there is no fixed objective, how can staff be incentivised to deliver effectively?
Maybe I am wrong, but it does not seem theoretically impossible to tackle these challenges. Following on from the ideas set out above, it seems that 1) adapting what the WB measures (moving towards indicators focused on problem solving rather than on achieving a fixed outcome, and towards real-time feedback rather than retrospective evaluation) and 2) changing how indicators are measured (making use of what Kenneth Neil Cukier and Viktor Mayer-Schoenberger call 'datafication', "the ability to render into data many aspects of the world that have never been quantified before", http://www.foreignaffairs.com/articles/139104/kenneth-neil-cukier-and-viktor-mayer-schoenberger/the-rise-of-big-data) could help overcome these issues.
Disclaimer: This is the view of an uninformed outsider and I could be very mistaken about the facts. Secondly, I am aware of the difficulties with translating this into reality but there does not seem to be any harm in contemplating. I would be interested to hear an opinion from inside on this topic.

Submitted by Dione

Great story; I didn’t know a lot of that. I think delivery is the key to science, and you have written about it in a very accessible way. People should understand what science is, and with the help of delivery it becomes approachable for everybody. Of course, the way of delivery is very important too.

Submitted by Kirsten Joelle Spainhower

While data-driven decision making is valuable in delivering quality development packages, we must be careful not to overlook key drivers of behavior like culture or locality. "Soft" science often does not fit neatly into the parameters of the scientific method. I would urge the development community not to get so high on its data that it misses critical elements of the human condition: heart, hope, spirit, or a higher calling that defies measurement altogether.

Submitted by Pete Vowles

Aleem, I have come to this blog several months too late, but given the reform agenda announced this week it seems very relevant. In the UK, we are looking at our programme cycle to see if there is a way to restructure and rebalance our programme rules and tools to allow greater iteration and adaptation within our investments, for all the reasons you and others have set out. We are clear that processes are part of the picture, but the wider incentive environment is vital. The question is how to empower front-line staff to use their judgement while ensuring we get quality and impact. It would be great to discuss further.

Submitted by ABON

With reference to Mr. Quinti's and Mr. Vowles's comments on the incentive environment and the theory of the socialisation of scientific and technological research (STR), the WBG could draw on its fascinating work "Culture and Public Action" as one of the key entry points for improving development outcomes and addressing adaptive iteration in the design and implementation of development programs.
