What is driving the learning crisis? Clues from the initial rollout of the Global Education Policy Dashboard

A classroom in a public primary school in Ianjanina, in rural Madagascar. Indicators from the Global Education Policy Dashboard show that teachers’ skills and student preparedness need improvement across all countries, regardless of income level. Copyright: Mohammad Al-Arief/World Bank

With an estimated 70% of 10-year-olds unable to read a simple text, education systems are in a learning crisis. There is an urgent need to invest in education, but given strained budgets, that investment ought to be strategic.

This is where the Global Education Policy Dashboard (GEPD)—a low-cost, nimble, but comprehensive tool that measures key drivers of learning throughout an education system—comes in. GEPD is a critical initiative funded by the Foundational Learning Compact, a World Bank multi-donor umbrella trust fund for early childhood, primary and secondary education.

The GEPD has so far been implemented in six countries—Peru (2019), Jordan (2019), Rwanda (2020), Ethiopia (2020 & 2021), Madagascar (2021), and Sierra Leone (2022)—typically for less than $150k per country. For an example of how the dashboard has been applied, see our previous post.

What are some common obstacles to learning?

While the challenges faced by each country are unique, it is possible to identify some general patterns by pulling together detailed GEPD data from multiple countries. When looking at all the school-level indicators, it becomes clear that aspects such as teachers’ skills and student preparedness need improvement across all countries, regardless of income level.

Yet, certain aspects, like basic inputs and infrastructure, as well as teacher presence, are performing well in middle-income countries like Peru and Jordan but continue to need improvement in lower-income settings like Sierra Leone and Madagascar.

Even for aspects that show a need for improvement across all systems, there are significant differences in the depth of that need. Students and teachers are struggling in every system we have surveyed, but the disparities across countries are large. In Peru, only around 33% of students reached the proficiency level (around 80% of the items correct) on the fourth-grade student assessment, compared to less than 4% in Ethiopia, Madagascar, Rwanda, and Sierra Leone.

For teacher pedagogical skills, as measured by the TEACH classroom observation instrument, scores ranged from 19% in Ethiopia to 68% in Jordan. Likewise, teachers often lack the necessary content knowledge. In Peru, where teachers scored the highest, only 38.4% are proficient in the content they teach. And in Madagascar, which received the lowest scores, the share is just 0.3%. On student readiness, as measured by a first-grade oral assessment, the percentage of students who could get 80% of the items correct ranged from 2.4% in Madagascar to 53% in Peru.

Another consistent finding is that rural areas tend to struggle even more than the national totals suggest. In most cases, teachers and students in rural areas perform worse than those from urban areas. For instance, in Peru, children in urban areas answered around 79% of the items correctly on the fourth-grade assessment, while children in rural areas answered around 66% of the items correctly. The disparities extend to other areas of the education system as well. In school inputs and infrastructure, rural areas are typically much worse off than urban areas. For example, in Peru, while around 65% of urban schools have a working internet connection, only around 8% of rural schools have one. Sixty-three percent of children in urban schools in Ethiopia have access to a working tablet or computer, but only around 14% have access in rural areas. In Madagascar, 27% of urban schools have adequate access (at least 80% of students) to pens, pencils, textbooks, and exercise books—a low rate—but the rural share is even worse, at only around 10%.

The de jure and de facto divide

One factor that drives conditions and behaviors is the quality of the policy environment, which the GEPD measures in two ways. First, it evaluates the de jure policy environment—the policies on the books. Second, it measures the de facto policy environment, meaning how the policies are implemented by the teachers and principals in schools. It turns out that the de jure and de facto policy environments often differ markedly. The figure below shows the de jure policy scores in yellow and the de facto scores in red. Both sets of scores are on the same 1-5 scale, with 5 as the highest score and 1 as the lowest score. Overall, the de jure scores are higher than the de facto scores, meaning that although laws supporting learning are on the books, the implementation in the schools is often lacking. For example, while Jordan has a policy on the books encouraging professional development, only 28% of teachers reported participating in a professional development program in the past 12 months.

Conversely, there are some cases where a country has an incomplete de jure policy environment, but teachers and principals have created informal systems to fill in the policy gaps. In such cases, a country can score better on the de facto measure than the de jure. In the case of Ethiopia, although no policy assigns responsibility for monitoring teacher performance, nearly 80% of teachers report receiving evaluations in the past 12 months.

Analyzing the practices, policies and politics of learning at the system level

A key advantage of the GEPD is that it allows analysis of drivers of learning across layers of the education system. In a linear regression with the school-level average of the fourth-grade student achievement examination as the outcome variable, school-level practice indicators explain around 51% of the variation in student learning. When de facto policy indicators are added, this rises to 60% of the variation explained. Finally, the percentage explained rises to around 63% when we include politics indicators. The GEPD indicators thus explain a sizeable share of the between-school variation. Within schools, there is variation in student performance as well, driven by factors like the student’s home environment and baseline performance. In the GEPD data, around 50% of the variation in student performance is within schools and 50% is between schools for most countries.
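The nested-regression logic described above can be illustrated with a minimal sketch in Python. The data here are entirely synthetic, and the indicator groupings and coefficients are illustrative stand-ins, not actual GEPD fields; the point is only to show how the explained share of variation (R²) grows as successive blocks of indicators are added to the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of schools

# Synthetic school-level indicator blocks (names are illustrative only)
practices = rng.normal(size=(n, 3))  # e.g., teacher skills, presence, inputs
policies = rng.normal(size=(n, 2))   # e.g., de facto policy scores
politics = rng.normal(size=(n, 1))   # e.g., a political-economy indicator

# Outcome: school-average 4th-grade score, driven mostly by practices
score = (practices @ [0.8, 0.5, 0.3] + policies @ [0.3, 0.2]
         + politics @ [0.2] + rng.normal(scale=0.8, size=n))

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Add indicator blocks one at a time, mirroring the nested models in the text
r2_practices = r_squared(practices, score)
r2_policies = r_squared(np.hstack([practices, policies]), score)
r2_all = r_squared(np.hstack([practices, policies, politics]), score)

print(f"practices only:        R^2 = {r2_practices:.2f}")
print(f"+ de facto policies:   R^2 = {r2_policies:.2f}")
print(f"+ politics indicators: R^2 = {r2_all:.2f}")
```

Because the models are nested, each added block can only raise (or leave unchanged) the share of variation explained, which is why the 51% → 60% → 63% progression in the text is monotone.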

Variation in 4th-grade student achievement explained by school practices, policies, and politics indicators


Identifying trends and opportunities

We have presented a few examples of the types of insights that can be drawn from cross-country comparisons, which can help countries contextualize their performance and identify peers to learn from and collaborate with. There are also important lessons for development partners and international actors. The GEPD data can help identify trends, emerging needs, and opportunities for regional programs, and can help tailor global efforts to specific country needs.

To improve learning outcomes, countries will need to be deliberate and careful about how scarce education resources are allocated and how priorities are set. A growing number of countries are benefiting from the GEPD data, and as the number increases, so does our ability to leverage it for impactful cross-country analysis. In our next blog, we will showcase how countries are already using this data to translate insights into actual policy changes.



Brian Stacy

Data Scientist, Development Data Group, World Bank

Sergio Venegas Marin

Young Professional, World Bank Group

Adrien Ciret

Research Analyst, Education Global Practice, World Bank

Halsey Rogers

Lead Economist, Education Global Practice
