Assessing outside of the “classroom box” while schools are closed: The potential of phone-based formative assessments to support learning continuity

In low-resource contexts, however, limited internet connectivity and low access to "smart" devices have prompted organizations and policymakers to explore implementing phone-based assessments.

The COVID-19 pandemic shocked education systems worldwide, affecting where and how students learn, how they interact with teachers and peers, and how their learning is monitored and supported. At the same time, this shock has provided an opportunity to think outside the "classroom box" and to explore innovative approaches to support learning remotely. While the provision of remote learning content has been necessary to support learning continuity, simply making content available is not enough: it is important to know whether and how students are in fact learning from the remote resources. Assessing students to determine what they know, understand, and can do is the only way to identify where students need support and to plan actions to address learning needs.

Learning assessments have typically been implemented in person; however, ensuring safety during the COVID-19 pandemic made such implementation challenging. In places where internet connectivity and access to computers and other devices are widespread, learning management systems and online platforms have facilitated remote administration of learning assessments. In low-resource contexts, however, limited internet connectivity and low access to "smart" devices have prompted organizations and policymakers to explore implementing phone-based assessments through direct phone calls, short message services (SMS), and/or interactive voice response (IVR) technologies.

Such phone-based assessment solutions can be used to conduct low-stakes formative assessments to, for example, understand how much content students have absorbed, identify any misconceptions in students' understanding, provide constructive feedback to students, and offer additional resources to support learning at home. Such solutions can also be used to conduct impact evaluations for interventions, such as those introduced in response to the pandemic. For either purpose, as policymakers and other stakeholders design and implement phone-based assessments, they should consider educational measurement principles and standards to help prevent potential biases in the assessment results and their interpretation. These principles and standards are related to the key concepts of validity and reliability.

Validity concerns the correct use and interpretation of assessment results. In adapting assessments for administration over the phone, several key questions related to validity must be considered:

  • What are the assessment's objectives and intended uses? A clear statement identifying the purpose of the assessment can help determine what additional information has to be documented to ensure that the phone-based assessment can produce valid results. For example, an assessment based on television or radio learning content could be administered over the phone with the objective of determining whether students are indeed learning through these modalities.
  • Is the assessment content delivered by phone relevant and related to the learning content to be measured? Any learning assessment should be aligned with specific learning content in order to produce meaningful information about what students know and can do. This question presupposes a comprehensive definition of the learning content students are expected to study while not physically in school, and that this content can be measured remotely.
  • Is there any learning content that will be omitted from the assessment if it cannot be assessed by phone? If key learning content cannot be assessed by phone, this must be clearly documented to facilitate the interpretation of assessment results. For example, when a paper-based assessment includes plots, graphs, figures, manipulation of objects, or long reading passages that cannot be included in a phone-based assessment, these omitted elements may alter the interpretation of what students know and can do.
  • Is the use of phones making it harder for students to understand and answer assessment questions? Because phones have only recently, and on a limited scale, been used to deliver remote learning assessments, it is critical to ensure that this assessment modality does not add complexity for students in demonstrating what they know and can do.
  • What thinking processes are students using to answer the questions over the phone? Assessment developers may need to conduct a small pilot study in which students explain how they solved each question posed to them over the phone. Students are expected to describe, in their own words, the specific thinking processes they used to solve the tasks included in the assessment.

In addition to validity, test scores must characterize student achievement reliably. When it comes to reliability, the key questions to consider are:

  • Are phone-based assessment scores accurate? It is important to keep in mind that administering assessments over the phone may add new sources of bias to the measurement of learning. Thus, depending on how the assessment study is conducted, one or more statistical analyses may inform whether the assessment results are an accurate representation of students' knowledge and skills.
  • Are item scores consistent among themselves when administered over the phone? Once assessment data is gathered, this type of reliability is captured by statistical techniques that determine whether the items show a consistent scoring pattern (i.e., students who answer question 1 correctly also tend to answer other, similar questions in the same test correctly).
  • Finally, with respect to enumerator accuracy, are assessment scores produced by different enumerators administering the assessment over the phone consistent with one another? Enumerators must be trained on scoring phone-based assessments to ensure precision and consistency. Scoring rehearsals and simulations can be used as part of the enumerator training. Teams developing and administering phone-based assessments can also increase the reliability of phone-based assessment results by developing standardized assessment administration protocols and training materials for enumerators to maximize equivalence of results.
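To make the reliability checks above concrete: item-score consistency is commonly quantified with Cronbach's alpha, and agreement between enumerators with Cohen's kappa. The sketch below is illustrative only; the student scores and enumerator ratings are hypothetical data invented for the example, not drawn from any real phone-based assessment.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Internal-consistency reliability: do items show a consistent
    scoring pattern across students?  item_scores has one row per
    student and one column per assessment item."""
    k = len(item_scores[0])
    item_vars = [variance(col) for col in zip(*item_scores)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def cohen_kappa(rater_a, rater_b):
    """Inter-rater reliability for two enumerators scoring the same
    responses (1 = correct, 0 = incorrect): observed agreement
    corrected for agreement expected by chance."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a1 = sum(rater_a) / n          # rate at which A scores "correct"
    p_b1 = sum(rater_b) / n          # rate at which B scores "correct"
    p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical data: 5 students x 4 phone-administered items.
scores = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 0, 1, 0],
          [0, 0, 0, 0],
          [0, 0, 1, 0]]
print(round(cronbach_alpha(scores), 2))   # 0.8

# Hypothetical data: two enumerators scoring the same 10 oral responses.
rater_a = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
rater_b = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
print(round(cohen_kappa(rater_a, rater_b), 2))   # 0.6
```

Higher values of either statistic (closer to 1) indicate more consistent scoring; low values would flag the kinds of phone-specific bias the bullets above warn about.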

As the COVID-19 pandemic has prompted education leaders to explore and introduce innovations to support learning continuity, it is critical that these innovations, especially those that support the learning process outside of school walls, are in line with the established standards in educational assessment. While student learning can only be assessed remotely in many places, basing assessment findings, whether from in-person, computer-based, or phone-based assessments, on valid and reliable evidence makes them more valuable to support students in their learning process. These considerations for phone-based assessment should motivate improvements in learning assessment tools and strengthen resilient assessment practices that can cope with future shocks to education systems.
