Annually, governments around the world spend substantial portions of their budgets to design and implement in-service teacher professional development (TPD) programs.
Monitoring and evaluation (M&E) systems play an integral part in identifying what is working or where support is needed. When designed well, M&E systems help guide a TPD program toward its objectives of improved teaching practice, better quality student-teacher interactions, and ultimately, improved student learning outcomes.
In practice, however, governments often face challenges in designing and using M&E systems effectively. These challenges may include, but are not limited to, ineffective use of data for decision-making, local technical capacity constraints, and limited financial resources.
A recent note by the World Bank’s Coach program provides guidance on how to address some of these challenges when designing, implementing, and using a TPD M&E system. Here are five key takeaways we’ve gleaned from research literature and implementation experience:
Takeaway 1. Build feedback loops between the classroom and the broader education system – and listen to them. Effective M&E systems require tight feedback loops that iteratively direct information into decision-making processes. These tight loops can help agencies and implementers learn, innovate, and improve the design and implementation of TPD programs.
For example, Tusome, a successful national program in Kenya to improve early grade literacy, built in tight feedback loops to monitor teachers’ progress toward desired outcomes and to tailor feedback to teachers as appropriate. At each visit, the Curriculum Support Officer (CSO) recorded whether the teacher employed, during the lesson, specific techniques they had been trained on, and provided feedback to the teacher accordingly. Similarly, data on the number of classroom visits to each school were used to ensure CSOs conducted their allocated visits and were tied to CSOs’ travel reimbursement.
Takeaway 2. What gets measured gets done – don’t forget about your education outcome indicators. M&E plans should include clear and measurable indicators that go beyond input indicators to measure outcomes. Intermediate outcome indicators can suggest whether the program is on track to meet its ultimate learning objectives. For example, focusing on proximal indicators (like teaching practices) can provide early information about progress toward longer-term, distal outcomes (like student learning). Prioritizing outcome and intermediate outcome indicators can also help draw the attention of policymakers and managers to results, as opposed to inputs.
To incentivize progress toward results, indicators may also be linked to financing. For example, improvement in teaching practices (based on data from classroom observation tools such as Teach), or improvement in student learning, could be linked to financing. However, with such results-based financing programs, details matter, and these programs need to be designed carefully to ensure that they do not lead to unintended consequences.
Takeaway 3. Collect education data judiciously—more isn’t always better! For the M&E system to be a useful management tool, it needs to be manageable and not overloaded with indicators. If too many indicators are chosen, too much time will be spent managing the M&E system itself rather than using the data to manage the TPD program. Moreover, with too many indicators, decision-makers can lose track of what is most important for learning. It’s critical that the M&E system does not devolve into feeding long checklists of indicators to higher authorities without using these data for analysis.
It is important to define roles and responsibilities for collecting and using data, and to ensure transparent processes for how data is accessed and used for decision-making and by whom. For example, under Peru’s Acompañamiento Pedagogico Multigrado program, coaches periodically observed classroom sessions and assessed teachers on a broad range of instructional practices. These practices included lesson planning, time management, student engagement, feedback, and management of classroom environment. Coaches then discussed progress with the teachers and developed a plan for improvement. Coaches also shared monthly and quarterly reports on teachers’ progress and areas for improvement with the local education authority and school principals.
Takeaway 4. Partner up to build implementation capacity. Collaborations with NGOs, development partners, or other actors can facilitate data collection and be used to monitor implementation fidelity of TPD programs and progress toward desired outcomes. At the same time, it is vital that these partnerships ensure that the skills and capacity of the public bureaucracy and governmental actors are strengthened, as this has implications for uptake and long-term sustainability of the program.
For example, the World Bank established a multilayered partnership with the government of Punjab, Pakistan to implement a classroom observation tool (based on Teach) to assess and evaluate teaching practices. The staggered rollout ensured early opportunities to integrate lessons learned into program design and implementation. Consultative sessions with government stakeholders, teachers, and district managers along with a process of obtaining continuous feedback from the school leaders helped document lessons learned and facilitated ongoing improvements in the program. The system conducts around 7,000 such observation-and-mentoring sessions on a typical school day, and each of the 160,000 primary school teachers is provided this support at least once per month.
Takeaway 5. Political commitment matters — make the M&E system work for policy actors and for learning. Often, getting the politics of M&E right is more difficult than getting the technical pieces right. While M&E systems provide policy actors with crucial information about the progress of the TPD program and can help build support for it, M&E results can also pose political challenges around transparency and accountability. A well-functioning M&E system requires strong political support and buy-in from key stakeholders who are willing to use the system to make evidence-based decisions. After all, an M&E system is only as good as the data collected and the capacity and willingness of governments and key actors to act on them. You can read more about evidence-based and cost-effective education programs in the new Global Education Advisory Panel report.
The Monitoring and Evaluation for In-Service Teacher Professional Development Programs guidance package contains a technical guidance note, an indicator-selection sheet, and a summary PPT. As the Coach team prepares the final version of the note for publication, we invite your comments and feedback to ensure that the guidance is comprehensive, clear, and useful for policymakers and other stakeholders in a range of contexts.
Please reach out to us at firstname.lastname@example.org to share your feedback by March 18 and stay tuned for more opportunities to engage with our work.