Is your education program benefiting the most vulnerable students?


Just about every article or report on education that we read these days – and some that we’ve written – bemoans the quality of education in low- and middle-income countries. The World Bank’s World Development Report 2018 devoted an entire, well-documented chapter to “the many faces of the learning crisis.” Recent reports on education in Latin America and in Africa make the same point.

But within low- and middle-income countries, not all education is created equal, and not all students face the same challenges. As Aaron Benavot highlights, “policies found to be effective in addressing the challenges facing ‘average’ or typical learners” will not necessarily be effective in addressing those “faced by learners from marginalized groups.”

Indeed, we know that within a given classroom, there can be massive variation in learning across students. As you can see in the figure below, from a group of students in New Delhi, India, students in a single 9th grade class are reading anywhere from the 6th to the 8th grade level, and performing in math at the 3rd to the 5th grade level. So if an intervention increases average performance, are we helping those students who were already ahead or those who are furthest behind? (In this case, no one’s really ahead, since even the top performers are way behind grade level. But the students in the bottom 25th percentile are doubly disadvantaged – behind in learning in a low-performing school system.)

Source: World Development Report 2018, using data from Muralidharan, Singh, and Ganimian (2017).

How can education systems help the students who need help the most? A first, crucial step is to see whether innovations in education are indeed benefiting those students. As Lord Kelvin said, “To measure is to know.” For a short article in that same book, we analyzed 281 impact evaluations of learning interventions. We found that only about 1 in 10 reported outcomes separately for low-income students, fewer than 1 in 4 reported outcomes for students with initially low learning levels, and just 1 in 3 reported separately for girls. We’ll never know if education innovations are helping the students who need them the most if we don’t measure the benefits – or lack of benefits – for those students.

Source: Evans and Yuan (2018)

There’s another saying that’s relevant here: “Weighing a pig doesn’t fatten it.” Measuring impacts for the most disadvantaged or lowest-performing students is essential, but it isn’t enough. That’s why interventions that help teachers to “teach at the right level” are so valuable. Teaching at the right level means teaching students where they are, rather than adhering strictly to a curriculum that will leave most students behind, especially the lowest performing. It can take many forms: reading camps carried out during school holidays (in India), remedial teaching and learning materials (in India), or grouping students by ability rather than age (in Kenya), whether for an hour a day (in India) or some other portion of the day or year (in Zambia).

In many countries, the majority of students aren’t learning all that they should be. But there are students who are struggling to learn anything at all, and this feeling can contribute to school dropout. Students in Kenya who had dropped out gave statements like this one: “I failed, no matter how hard I tried.” So yes, let’s work to improve the quality of education overall. But we cannot afford to forget the students who need help the most, both by targeting interventions specifically to help them and by measuring whether they are in fact learning.

Bonus reading

This post is the second of two posts on how much impact evaluations tell us about improving life outcomes for the most vulnerable. Post 1 – by Markus Goldstein and Aletheia Donald – was published last Wednesday. 


David Evans

Senior Fellow, Center for Global Development

Owen Ozier
September 24, 2018

Breaking Down the Bar Graph.
Dear Dave and Fei,

Thought-provoking post!

I wanted to follow up on the bar graph at the end.  In case anyone is looking further into whether studies show treatment effects on these subpopulations, I thought there were a few points worth exploring.

One is the likelihood of reporting such heterogeneous effect estimates (in relation to gender, SES, learning levels) conditional on doing any heterogeneity analysis at all.  After all, some journals or papers are pretty tightly space-constrained.  (Though not in the online appendix!)

Another is whether the likelihood of reporting such effects is in any way predicted by the statistical power of the main effects: when you split your sample, estimation becomes less precise, but this is less of an issue when you start with high precision in the first place.
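To make that precision point concrete, here is a minimal illustrative sketch (not from the post or the underlying studies; the sample sizes are made up for illustration) of how a subgroup split inflates the standard error of a difference-in-means estimate:

```python
import math

def se_diff_in_means(sigma: float, n_treat: int, n_control: int) -> float:
    """Standard error of a difference-in-means treatment effect estimate,
    assuming a common outcome standard deviation sigma across arms."""
    return sigma * math.sqrt(1 / n_treat + 1 / n_control)

# Full-sample estimate: hypothetically 1,000 students per arm
se_full = se_diff_in_means(sigma=1.0, n_treat=1000, n_control=1000)

# Subgroup estimate (say, girls only): roughly half the sample per arm
se_girls = se_diff_in_means(sigma=1.0, n_treat=500, n_control=500)

# Halving the sample inflates the standard error by sqrt(2) ~ 1.41,
# so the subgroup needs a substantially larger true effect to be
# detected at the same level of statistical significance.
print(round(se_girls / se_full, 2))  # → 1.41
```

This is why a study powered comfortably for its main effect may still be underpowered for subgroup effects, and why reporting them may look less attractive to authors when baseline precision is low.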

Yet another is that, for SES and baseline learning levels, it would be interesting to ask how many of the studies that didn't report such heterogeneous effects couldn't do so because they never collected the data (a call to collect more data when possible, though that has a potentially substantial cost), and how many collected the data but simply didn't report the analysis (lower-hanging fruit, then: that data could be re-analyzed now).

In any event, thank you for your post!