
Learning from Data-Driven Delivery

Aleem Walji

Given confusion around the phrase “science of delivery,” it’s important to state that delivery science is not a “one-size-fits-all” prescription based on the premise that what works somewhere can work anywhere. Nor does it claim that research and evidence guarantee a certain outcome.
 
A few weeks ago, the World Bank and the Korea Development Institute convened a global conference on the science of delivery. Several development institutions attended, including the Gates Foundation, the Grameen Foundation, UNICEF, the Dartmouth Center for Health Care Delivery Science, and the mHealth Alliance. We discussed the opportunities and challenges of development work focused on the extremely poor, including experiments in health care, ways technology is reducing costs and increasing effectiveness, and the difficulty of moving from successful pilots to delivery at scale.
 
The consensus in Seoul was that a science of delivery underscores the importance of a data-driven and rigorous process to understand what works, under what conditions, why, and how. Too often in international development, we jump to conclusions without understanding counterfactuals and assume we can replicate success without understanding its constituent elements.

Our two units (World Bank Institute and World Bank’s ICT Unit) focus on integrating technology and innovative processes into World Bank-supported programs to tackle global challenges — from youth employment, food security, and climate change to water, sanitation, health, and education. This has made it clear to us that there are no blueprint solutions. Even when we know what works in one context, we need to adapt and adopt solutions and pay close attention to implementation as a science (some call it an art).
 
But let us spell out some principles that emerged in Seoul, which we hope development practitioners will welcome.
 
1. The importance of framing the right problem cannot be overstated. Too often we are hammers looking for nails and don’t address the actual causes in our interventions. If we’re working on symptoms rather than underlying causes, we’re unlikely to succeed and likely to frustrate our partners and beneficiaries.
 
2. End-user preferences matter. We shouldn’t assume we know what “users” want or need (whether they are patients, students, or water users) without asking them and allowing them to make informed choices. Too often we over-supply what people don’t need (e.g. prostate surgeries in the United States) and under-supply what they want (a better quality of life).
 
3. Complex problems don’t fit neatly into sector-specific boxes or correspond to the way experts and expert-led institutions are organized. Places like the World Bank have practices dedicated to health, education, water, and transport, for example. But the challenge of youth unemployment is about education, labor markets, private sector development, and access to finance. It doesn’t fit neatly into any one box and requires multi-disciplinary and multi-stakeholder teams to identify binding constraints and leverage strengths from government, the private sector, and civil society to ‘unblock’ them.
 
4. All delivery is data. Technology, and mobile phones in particular, has reduced the cost of data collection, analytics, and visualization. We can leverage this information to make better decisions faster and create feedback loops with our ultimate beneficiaries. The World Bank has always been a data-centric organization, but these new tools for collecting and making sense of data can allow us to become a data-driven organization.
 
5. Implementation matters and requires adaptive execution. Leadership teams must be able to change direction when necessary and use “learning by doing” and “doing by learning” approaches to fail fast, fail small, and fail forward. Development challenges require technical and adaptive skills. Technical specialists tend to under-value process expertise, iteration, and the challenges of implementation. Successful implementation is about turning art into science by building practitioners’ capacity to learn faster, iterate, and change course when needed.
 
Traditional development models capture knowledge when it is too late to apply lessons to live projects. We need ways to develop and challenge our hypotheses while we execute, and to worry less about the accuracy of our original hypotheses. We need to try many things, look for positive deviants where they exist, and better understand why some results are better than others despite similar circumstances. What are we not seeing, and what could we learn if only we admitted we might be working with misinformed assumptions? Can we be rigorous without being rigid, and open without being undisciplined? This requires evolving the “knowledge agenda” into what conference participants called the “do agenda.”
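To make the search for positive deviants concrete, here is a minimal sketch of one way to screen for them, assuming site-level records with a comparable outcome measure. The data, field names, and z-score cutoff are hypothetical illustrations, not an established method.

```python
# A minimal sketch of positive-deviance screening, assuming we have
# site-level records with comparable inputs and a measured outcome.
# All names, values, and thresholds here are hypothetical.
from statistics import mean, stdev

sites = [
    {"site": "A", "outcome": 0.62},
    {"site": "B", "outcome": 0.58},
    {"site": "C", "outcome": 0.91},  # performs far above its peers
    {"site": "D", "outcome": 0.60},
    {"site": "E", "outcome": 0.55},
]

def positive_deviants(records, key="outcome", z_cutoff=1.5):
    """Flag records whose outcome sits well above the group mean.

    z_cutoff is an arbitrary screening threshold, not a standard."""
    values = [r[key] for r in records]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [r for r in records if (r[key] - mu) / sigma > z_cutoff]

for site in positive_deviants(sites):
    # Candidates for qualitative follow-up: why do they outperform
    # peers facing similar circumstances?
    print(site["site"], site["outcome"])
```

The point is not the statistics but the workflow: flag unusual performers cheaply, then investigate them qualitatively to understand what they are doing differently.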
 
Development needs a much more robust learning-by-doing approach to solve vexing challenges with (rather than for) clients. This requires humility and the courage to admit what has not worked. What if we could publish an annual learning-from-failure report and use failure as an opportunity to iterate and improve outcomes? What if our data were open for all to see throughout project implementation, recognizing that solutions can emerge from unexpected places? Why can’t we assemble multi-disciplinary teams with expertise in areas including behavioral science, technology, and impact evaluation to work with sector experts on the hardest delivery challenges?
 
We need to create a community and a movement to address the most extreme poverty on the planet. The World Bank cannot do it alone, and so we invite partners to join us in a more open, rigorous, and client-centric effort to solve the world’s hardest problems. Are you up for it?

Comments

Submitted by Richard Holloway

Dear Aleem,

Good to see you use the term "positive deviants," an important concept too little employed.

While I agree with you that we need to learn as we execute, it is also important to plan for the amount of time needed for learning. At PRAN we did a pilot, and moved on to the next big thing before systematically learning from what we had achieved.

Richard

Submitted by Chen Hong

Hello colleagues, I would like to echo your opinions. It is true that one size does not fit all, and I fully agree that some technical experts need to better appreciate the value of process expertise, iteration, and the challenges of implementation. Stakeholder analysis and a client-oriented strategy are the key elements of successful practice. We also need to analyze why one practice succeeds while another does not.

I do hope there is an open platform for this kind of information sharing and exchange.

Dear Aleem,

We worked on a small pilot project using mobile phones to survey, each month, beneficiaries who are part of a DFID-funded extreme poverty livelihoods programme in Bangladesh. The pilot was a success and we eventually rolled it out across the programme - a smartphone-based monitoring system that surveys over 100,000 beneficiary households every month at census level! More information can be found here: http://www.shiree.org/extreme-poverty-monitor/cms-2-monthly-snapshot/

The data we receive from the field is processed in real time and the results are made available on a visualisation dashboard. The really unique part is that the visualisation not only allows the user to see trends across the country, but has been designed as a MANAGEMENT tool - so you can identify outlier households who are 'failing' and then give targeted support. A so-called learning-by-doing approach.

Whilst not a perfect system, it shows that it is possible to go from a pilot to full scale in a short space of time!
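To illustrate the kind of rule such a management dashboard might apply, here is a minimal sketch in Python. It is not the shiree system itself; the field names, scoring, and three-round decline rule are invented for the example.

```python
# A hypothetical sketch of screening monthly survey rounds to flag
# households whose indicator has fallen for several consecutive
# rounds. Data shape, scores, and the window rule are all invented.
from typing import Dict, List

# household_id -> monthly scores, oldest first (e.g. a well-being index)
monthly_scores: Dict[str, List[float]] = {
    "HH-0001": [3.1, 3.4, 3.6, 3.8],   # improving
    "HH-0002": [3.0, 2.7, 2.4, 2.1],   # steadily declining -> flag
    "HH-0003": [2.9, 3.0, 2.8, 2.9],   # roughly stable
}

def failing_households(scores: Dict[str, List[float]], window: int = 3):
    """Return households whose score dropped in each of the last
    `window` month-on-month comparisons."""
    flagged = []
    for hh, series in scores.items():
        if len(series) < window + 1:
            continue  # not enough survey rounds yet
        recent = series[-(window + 1):]
        if all(later < earlier for earlier, later in zip(recent, recent[1:])):
            flagged.append(hh)
    return flagged

# Field staff could then target support to the flagged households.
print(failing_households(monthly_scores))  # -> ['HH-0002']
```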

Confirming the principles reported by Aleem and Chris.

Programs have to accommodate frequent changes of plans and decisions by independent groups. It becomes clear that only those support organizations that attempt to institutionalize learning mechanisms have a chance of reaching their goals, which may themselves change in the process.

Public service agents had particular difficulty accepting the concepts of reflection and self-reflection, generally due to their top-down training and work in hierarchies. They tend to:

• be inclined to emphasize their status, finding it difficult to accept the poor as equal partners in development,

• be trained to convince participants (or worse, to use authority), often without seeing how ineffective these methods have been, and

• have difficulty listening to stakeholders instead of promoting their public organization’s solutions.

Consequently, they are unable to see the need to transition to a facilitating role, one which would enable participants to find their own solutions to problems, assisted by the civil services where needed.
