The COVID-19 pandemic accelerated interest in scaling up interactive online education. However, the experience of many pre-pandemic massive open online courses (MOOCs) has been one of very low completion rates, with fewer than 10-20 percent of those who start a course finishing it. Online platforms like Coursera, edX, and Khan Academy use a mix of student-level reminders, reward badges, gamification, and other behavioral nudges to try to engage students. But such nudges may matter less once online education moves from voluntary learners to being part of compulsory schooling, where student participation could also depend on the actions of teachers and on system-level management by the Education Ministry.
In a new paper published this week in the Proceedings of the National Academy of Sciences (PNAS), I worked with a large group of co-authors (Igor Asanov, Anastasiya-Mariya Asanov, Thomas Åstebro, Guido Buenstorf, Bruno Crépon, Francisco Pablo Flores, Mona Mensmann, and Mathis Schulte) to test the impact of rapid behavioral science interventions at the student, teacher, and system levels on getting high school students to take part in online education. We find the largest impacts come from a system-level intervention: using centralized management of schools rather than decentralized self-management.
Context
Our team had developed online course modules in entrepreneurial education, statistics and scientific thinking, and Spanish and English language, designed for students in the final years of high school in Ecuador. Collaborating with the Ministry of Education, we had started offering these through computer labs in schools in one educational Zone of the country in September 2019. When the COVID-19 pandemic hit, the Ministry asked if we could scale this quickly nationwide, so that students could use the material from their homes. We ended up covering 1,151 schools and more than 45,000 students, rolling the program out in phases since the educational calendar differs across parts of the country. We were concerned about whether students would actually use the material, so we tested a series of light-touch interventions that could be delivered rapidly at scale.
Interventions
We implemented interventions at the student, teacher, and system level.
Student level:
· Financial incentives: students in these groups were given lottery tickets for monetary prizes each time they finished a lesson, and in a second version, also for scoring well on tasks within the lesson.
· Encouragement messages: these were designed to help overcome internal constraints by aiming to convince students they could still finish the course despite the challenges of the pandemic.
· Study plans: these aimed to induce self-regulated learning by asking students to form plans for how they would study and to share them with others in their household.
· Team up with a peer: this encouraged students to team up remotely with a peer to work together, with the idea that they might also hold each other accountable.
Teacher level:
· Benchmarking: treated teachers received a weekly email showing the performance and progress of their class, compared to that of other teachers' classes of the same type. This can be viewed as a social comparison nudge.
· Reminder nudges: simple administrative SMS messages to teachers to remind them to ensure their class finishes their lessons on time.
· Encouragement emails: teachers were sent a video showcasing the experiences of teachers and students who had previously finished the course and thanking them for their efforts to help students finish – which may serve as a social information nudge.
System level:
· We developed a real-time online management system and randomized schools to either centralized management, in which Ministry of Education personnel had access and received weekly reports about each school, or self-management, in which only teachers received information from the management system about their class.
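To make the system-level contrast concrete, here is a minimal, purely hypothetical sketch of the kind of weekly aggregation such a management system might run. The data, column names, and report layout are my assumptions, not the actual system: school-level summaries of the sort Ministry staff would see under centralized management, and a class-level view of the sort a teacher would see under self-management.

```python
# Hypothetical sketch only: the schema and report layout are illustrative
# assumptions, not the actual system used in the study.
import pandas as pd

# Simulated per-student records: school, class, lessons completed (of 27),
# and minutes spent on the platform this week.
records = pd.DataFrame({
    "school_id": ["A", "A", "B", "B", "B"],
    "class_id": ["A1", "A1", "B1", "B1", "B2"],
    "lessons_completed": [24, 18, 27, 10, 15],
    "minutes_this_week": [210, 95, 260, 40, 120],
})

ASSIGNED_LESSONS = 27  # total lessons in the course

def school_report(df: pd.DataFrame) -> pd.DataFrame:
    """School-level weekly summary, as seen by Ministry staff under centralized management."""
    return (
        df.groupby("school_id")
        .agg(
            students=("lessons_completed", "size"),
            avg_lessons=("lessons_completed", "mean"),
            pct_finished=("lessons_completed",
                          lambda s: 100 * (s == ASSIGNED_LESSONS).mean()),
            avg_minutes=("minutes_this_week", "mean"),
        )
        .sort_values("avg_lessons")  # lagging schools appear first
    )

def class_report(df: pd.DataFrame, class_id: str) -> pd.DataFrame:
    """Class-level view: the only report a teacher sees under self-management."""
    cols = ["lessons_completed", "minutes_this_week"]
    return df.loc[df["class_id"] == class_id, cols].describe()

print(school_report(records))
print(class_report(records, "B1"))
```

In this sketch the only difference between the two arms is who receives the school-level table; the underlying student data are identical.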
Impacts
The good news is that even in the control conditions, sustained usage of the online learning platform was quite high, with the average student completing 23.6 of the assigned 27 lessons and spending just over 29 hours on the platform. However, there was still room for improvement, both in usage (55-82 percent of students completed all modules, depending on the wave) and in how much students learned (average scores on subject knowledge tests were below 50 percent). We then measured impacts on two main outcomes: how much time students spent using the platform, and how much they learned on it.
Among the student-level interventions, only the lottery ticket financial incentives had a significant impact on study time, with a 0.08 to 0.10 S.D. increase, or 76-91 more minutes spent on the platform. The other three treatments had smaller impacts on usage time that were not statistically significant. None of the student-level treatments had a significant impact on learning, with point estimates of 0.025 S.D. or lower. None of the teacher-level interventions significantly improved either study time or knowledge acquisition on average – although there is some evidence that benchmarking has heterogeneous effects, with positive impacts for low-performing teachers (who learn they are behind their peers) but offsetting negative impacts on initially high-performing teachers (who learn they are ahead of the progress of other teachers).
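For readers wanting to translate between the effect sizes in S.D. units and the minutes figures, a quick back-of-the-envelope calculation is sketched below; the standard deviation of study time is inferred by pairing the numbers above, not stated directly here.

```python
# Back-of-envelope conversion between S.D. effects and minutes.
# The implied study-time S.D. is inferred from the reported figures,
# not stated directly in the paper.
effect_sd = [0.08, 0.10]
effect_minutes = [76, 91]

implied_sd_minutes = [m / e for e, m in zip(effect_sd, effect_minutes)]
print(implied_sd_minutes)  # [950.0, 910.0] minutes, i.e. roughly 15-16 hours
```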
The strongest impacts come from our system-level intervention of using centralized management rather than decentralized self-management. After 8 weeks of the program, students in schools assigned to centralized management had completed 2.1 more lessons, or 125 more minutes, on the platform. In response to this gap, the Ministry of Education re-exerted its central authority over all schools, sending a strongly worded formal letter in week 9 urging schools to get teachers and students to complete the course, which led the lagging schools to spend more time in catch-up. Nevertheless, students in centrally managed schools learned more, scoring 0.126 S.D. (S.E. 0.056) higher on a knowledge test. A machine-learning policy tree heterogeneity analysis finds that self-management does no worse than centralized management in schools that were performing above average on the national exams, but does poorly in schools that had below-average test scores to begin with.
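For readers curious what a policy-tree-style heterogeneity analysis looks like in practice, below is a generic illustration, not the authors' actual code or data: it fits a shallow decision tree to simulated per-school treatment-effect scores for centralized versus self-management (e.g. doubly robust scores), splitting on baseline covariates such as prior national exam performance, to decide which schools to assign to which regime.

```python
# Illustrative sketch of a policy-tree-style heterogeneity analysis.
# NOT the authors' pipeline: covariates and effect scores are simulated.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n_schools = 500

# Hypothetical baseline covariates.
baseline_exam_score = rng.normal(0, 1, n_schools)  # standardized national exam score
rural = rng.integers(0, 2, n_schools)

# Hypothetical per-school effect scores for centralized management,
# constructed so the effect is larger in initially low-performing schools.
effect_score = 0.25 - 0.2 * baseline_exam_score + rng.normal(0, 0.5, n_schools)

X = np.column_stack([baseline_exam_score, rural])
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, effect_score)

# Leaves with a positive predicted effect are schools where centralized
# management would be the recommended assignment; the rest can self-manage.
print(export_text(tree, feature_names=["baseline_exam_score", "rural"]))
```

Applied to the paper's actual finding, such a tree would recommend self-management only for schools that were above average on the national exams.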
Lessons
As noted, the first lesson was a positive one: online education can achieve high usage in a developing country setting. Early in the pandemic we ran a rapid-response survey on time use and online access to learn whether students had the technology to access content online, finding that internet access was feasible for the majority of students. Coupled with making the course part of school activities, this meant take-up and sustained usage were much higher than in most MOOCs.
Each of the interventions we tried was motivated by theory, and often by some empirical evidence, and yet we find that many of these light-touch interventions have small and insignificant impacts. This shows the importance of testing, but it may also reflect the general phenomenon of nudge-type interventions having smaller effects when implemented at scale and with less in-person contact. In contrast, the online centralized management approach was highly cost-effective in improving usage and learning. It provides almost real-time data on student effort and performance, and personnel from the Ministry of Education had both the willingness to use the data for monitoring and tools they could use to hold teachers accountable – for example, by reminding teachers that this course was considered part of the required activities for which their salaries are paid. Our findings suggest the need to move beyond student-level interventions alone and understand how the system can be used to maximize take-up and learning.