Ed’s note: This guest blog is by Ben Durbin, Head of International Education for the National Foundation for Educational Research (NFER).
In September this year, the International Initiative for Impact Evaluation (3ie) published an impressive new review of education programmes in low- and middle-income countries. It is a rich resource, which stands out for its sheer scope, covering studies that investigate a diverse set of interventions and educational outcomes.
The review is packed full of insights, worthy of the attention of anyone with an interest in education and development. For me, however, the most important insight became clear in the section on building new schools. Stepping outside the technical world of research for a moment, and into the world of common sense, some rather obvious points arise.
A new school building will probably only make a difference to learning if new school buildings are needed where you build them. It will only make a difference if students show up to learn. And it will only make a difference if there are well-trained, well-resourced teachers available to teach in it.
Asking about the impact of a new school building is like asking whether a microwave dinner is going to help deal with someone’s hunger. It might. But only if they have a microwave oven to cook it in. And giving someone a microwave oven is only going to make sense if they also have a reliable supply of microwave meals.
This may seem like a rather inane example, but it illustrates an important point that is evident throughout the review. As its ‘conceptual map’ shows, education systems are just that: systems, with complex and interacting parts that all have to work in harmony in order to achieve their desired impact. This explains a key conclusion made by the authors: “program effectiveness is often dependent on programme design and implementation, and the local context”.
This is why impact evaluations, whether adopting a randomised controlled trial or quasi-experimental design, are of very little value in isolation. It is not enough to ask only: “Is this programme effective?”, or even to ask the more accurate question: “Was this effective previously?”
The most helpful question to ask is: “Could this intervention be effective in my context, and if so how?”
Lessons to keep in mind
In order to provide the evidence necessary to answer this question, there are some important lessons. These lessons apply equally to anyone considering the evidence on programmes in the low- and middle-income countries reviewed by 3ie, and to those in high-income countries (for example, the randomised controlled trials NFER conducts in the UK on behalf of the Education Endowment Foundation).
For those commissioning and delivering evaluations:
- Studies must capture the context of the programme and report it in a clear and transparent manner (ideally adopting a consistent approach across studies). This should include the condition of the other parts of the local and national system into which the programme was introduced, and the specific issue the programme was intended to address.
- A thorough process evaluation should accompany the research in order to provide evidence on why the programme did or didn’t succeed, what the challenges to implementation were, and how these could be addressed in future.
- Again, it is essential first to have a clear understanding of the context. What are the pre-existing strengths and weaknesses of the system? What are the barriers to improvement? Is there a single point of failure or bottleneck in the system that could be addressed through focused intervention? Or are there multiple issues which need to be addressed simultaneously in order to achieve any real improvement?
- When examining previous studies, consider what it was that led to their success or failure, and what implications this might have for your own implementation. And don’t focus only on the successful programmes – often just as much can be learned from the unsuccessful ones (which is why initiatives such as AllTrials, intended to address bias against publication of null or negative results, are so important).
Policymakers tempted to adopt the policies of countries performing best in PISA and TIMSS this year should ask themselves: Could the same policy be effective in a different national context? (What is the existing condition of my country’s education system, and is it in any way comparable to the country where the policy originates?) And how could the policy be successfully implemented? (For example, what resources were required, were all stakeholders successfully engaged, and did multiple elements need to be coordinated?)
These issues were explored in greater detail in a Special Issue of NFER’s journal Educational Research earlier this year. The editorial made the point that “Applying lessons learned from other contexts can, and should, be a powerful tool… Like all tools, though, it is misuse that leads to damage and the fault lies with the users rather than the tool”.
Indeed, without an appropriate level of diligence, we risk nations around the world investing in the national equivalents of shiny new school buildings empty of teachers and students.
Find out more about World Bank Group education on Twitter and Flipboard.