Interview by Kathy Chen, Consultant at the World Bank's Strategic Impact Evaluation Fund (SIEF).
What happens when a study of a program’s impact finds that the intended effects are negligible or even negative? When the Government of Cambodia, supported by the World Bank, launched a project in 2008 to expand preschools and other preprimary education programs, Deon Filmer and his research colleagues tracked the impact on children from low-income households. The study found that problems with the program’s rollout and a lack of demand from parents resulted in the program having little, and sometimes even negative, impact on children’s cognitive and social-emotional development. Instead of giving up, the research team worked with the Cambodian government and World Bank operational colleagues to design a new program that incorporated lessons learned from the impact evaluation. In an interview, Filmer explains what happened:
The first time around, your team set out to measure the impact of three types of preprimary programs in Cambodia. What was the genesis of your research project?
A lot of research shows the benefits of early childhood stimulation and of investing early in education and in life. But the question is how to implement interventions that deliver that kind of beneficial stimulation in a low-income environment. We’re still learning, so the idea was to generate knowledge on how to deliver such programs.
The Ministry of Education had one track: embedding preschools into existing primary schools. Non-governmental organizations favored community-based preschools. And a third approach was to train key women in communities (known as “core mothers”) on early childhood development and then have them train other mothers. No one knew which approach to take. The costs differed, and it wasn’t clear how effective each would be. So we designed a prospective evaluation of the programs that the government planned to put in place.
The program ran into a number of implementation problems. How soon did you get an idea that things weren’t proceeding as you would have liked?
What we saw wasn’t unusual. It took a long time to get formal preschool construction going; it involved contracts, government procurement, and so on. For the community preschools, because the teachers were to receive a stipend from a different ministry, it took some time to get them paid, so a lot of them quit. The home-based model required a lot of coordination to train core mothers and for core mothers to gather people in the community. A lot of things were supposed to happen, but they didn’t.
About 12 months after the program was launched, things weren’t looking that promising. But we still collected the endline data and analyzed it. The first round of analysis showed that the program wasn’t having the positive impact we had expected, and that one clear reason was that it hadn’t been implemented as planned.
So you actually found negative impacts in some cases?
That was eye-opening. And it highlighted the fact that it wasn’t just the implementation problems that were at play. There was a lot of underage enrollment in primary school before the government began the push for preprimary education. We found that, at endline, some children’s skills were worse than those of children in the control group. What happened was that instead of attending primary school like many in the control group, preschool-age children in the treatment groups didn’t enroll in any school at all, or they enrolled in preschool instead of primary school. Enrolling in preschool instead of primary school may have put them at a disadvantage if preschool teachers were not as prepared and motivated as primary school teachers, or if the preschool curriculum was geared towards the youngest children in the classroom.
Your findings highlighted an outcome that researchers, and governments, might want to avoid discussing: that sometimes programs aren’t implemented correctly, or that we simply don’t see the positive impact we expected.
The results weren’t what we or the government had expected or hoped for. Certainly, there are a lot of impact evaluations where the baseline is collected and then, before the endline data are collected, the intervention goes awry for one reason or another. In many cases, people just drop the evaluation or don’t publicize the results. People think, what are the incentives to complete an evaluation that isn’t going to show a good result? Or why publicize failure? But I believe, and this evaluation showed, that there are still lessons to be learned from programs that don’t work as expected.
What was the Cambodian government’s response to your findings?
Previously, we had worked with the government to evaluate a secondary school scholarship program. That evaluation showed pretty big impacts on keeping kids, particularly girls, in school. The trust we had built over time meant that our Cambodian government counterparts knew we weren’t out to “get” them or to shame them, but to help understand what was happening. We kept them informed of what we were finding, and there was lots of discussion about what it meant.
How did your impact evaluation influence the design of the new project that Cambodia is now implementing to improve access to and quality of preschools?
We identified two things that we thought contributed to the outcomes we found. One was the somewhat poor quality of service provision (the supply side): it took a long time to build preschools, to staff them, and to get materials into both the formal and community preschools. The second was low demand for services (the demand side): parents didn’t seem to value the new preschools. So the new project has tried to address both issues: when we deploy services, let’s work hard to ensure they are of high quality, and let’s make sure the community is aware of the value of preschool.
What is the right way to think about evaluations that don’t show impact?
There are always local lessons and global lessons. Locally, we’re engaging so that the best programs are implemented to get the best outcomes for the largest number of people. It’s about using context-specific evidence to improve the welfare of a population. Globally, you line up what you find in Cambodia with what people find in other countries, and you try to tease out lessons that might guide the next intervention in a different context. It’s not about running a few experiments here and there and drawing a global conclusion, but about working with a country to help it do the best it can for its people.