The first time a World Bank education team tried classroom observations in Brazil, it nearly provoked a state-wide teachers’ strike. It was October 2009 in the northeast state of Pernambuco and two members of the team, Barbara Bruns and Madalena Dos Santos, had handed out stopwatches to school supervisors newly trained in using the Stallings “classroom snapshot” method to measure teacher activities.
Two days later, the stopwatches were on the front page of Pernambuco’s leading newspaper: the teachers’ union called for a state-wide strike to protest an evaluation tool they dubbed the “Stalin method.”
“I thought the grant money we had used to train observers was down the drain,” recalled Bruns, a World Bank retiree now a visiting Fellow at the Center for Global Development. “But the governor, Eduardo Campos, was unfazed. He publicly declared: ‘No one is going to stop me and my secretariat from going into public schools to figure out how to make them better.’ The union backed down and the fieldwork went ahead.”
For all of the recent explosion in data related to learning -- driven by standardized tests and the like -- remarkably little is known at scale about what exactly happens in classrooms around the world, and outside of them, when it comes to learning, and what impact this has.
This isn't to say that we know nothing, of course:
The World Bank (to cite an example from within my own institution) has been using standardized classroom observation techniques to help document what is happening in many classrooms around the world (see, for example, reports based on modified Stallings Method classroom observations across Latin America, which seek to identify how much time is actually spent on instruction during school hours; in many cases, the resulting data are rather appalling).
Common sense holds various tenets dear when it comes to education, and to learning; many educators profess to know intuitively what works, based on their individual (and hard won) experience, even in the absence of rigorously gathered, statistically significant 'hard' data; the impact of various socioeconomic factors is increasingly acknowledged (even if many policymakers remain impervious to them); and cognitive neuroscience is providing many interesting insights.
But in many important ways, education policymaking and processes of teaching and learning are constrained by the fact that we don't have sufficient, useful, actionable data about what is actually happening with learners at a large scale across an education system -- and what impact this might have. Without data, as Andreas Schleicher likes to say, you are just another person with an opinion. (Of course, with data you might be a person with an ill-considered or poorly argued opinion, but that’s another issue.)
|side observation: Echoing many teachers (but, in contrast to teaching professionals, usually with little or no formal teaching experience themselves), I find that many parents and politicians also profess to know intuitively ‘what works’ when it comes to teaching. When it comes to education, most everyone is an ‘expert’, because, well, after all, everyone was at one time a student. While not seeking to denigrate the ‘wisdom of the crowd’, or downplay the value of common sense, I do find it interesting that many leaders profess to have ready prescriptions at hand for what ‘ails education’ in ways that differ markedly from the ways in which they approach making decisions when it comes to healthcare policy, for example, or finance – even though they themselves have also been patients and make spending decisions in their daily lives.|
One of the great attractions of educational technologies for many people is their potential to help open up and peer inside this so-called black box. For example:
- When teachers talk in front of a class, there are only imperfect records of what transpired (teacher and student notes, memories of participants, what's left on the blackboard -- until that's erased). When lectures are recorded, on the other hand, there is a data trail that can be examined and potentially mined for related insights.
- When students are asked to read in their paper textbook, there is no record of whether the book was actually opened, let alone whether it was opened to the correct page, how long a page was viewed, etc. Not so when using e-readers or reading on the web.
- Facts, figures and questions scribbled on the blackboard disappear once the class bell rings; when this information is entered into, say, Blackboard™ (or any other digital learning management system, for that matter), it can potentially live on forever.
|A few years ago I worked on a large project where a government was planning to introduce lots of new technologies into classrooms across its education system. Policymakers were not primarily seeking to do this in order to ‘transform teaching and learning’ (although of course the project was marketed this way), but rather so that they could better understand what was actually happening in classrooms. If students were scoring poorly on their national end-of-year assessments, policymakers were wondering: Is this because the quality of instruction was insufficient? Because the learning materials used were inadequate? Or might it be because the teachers never got to that part of the syllabus, and so students were being assessed on things they hadn’t been taught? If technology use was mandated, at least they might get some sense about what material was being covered in schools – and what wasn’t. Or so the thinking went ....|
Yes, such digital trails are admittedly incomplete, and can obscure as much as they illuminate, especially if the limitations of such data are poorly understood and data are investigated and analyzed incompletely, poorly, or with bias (or malicious intent). They also carry with them all sorts of very important and thorny considerations related to privacy, security, intellectual property and many other issues.
That said, used well, these additional data points hold out the tantalizing promise of new and/or deeper insights than have so far been possible within 'analogue' classrooms.
But there is another 'black box of education' worth considering.
In many countries, there have been serious and expansive efforts underway to compel governments to make available more ‘open data’ about what is happening in their societies, and to utilize more ‘open educational resources’ for learning – including in schools. Many international donor and aid agencies support related efforts in key ways. The World Bank is a big promoter of many of these so-called ‘open data’ initiatives, for example. UNESCO has long been a big proponent of ‘open educational resources’ (OERs). To some degree, pretty much all international donor agencies are involved in such activities in some way.
There is no doubt that increased ‘openness’ of various sorts can help make many processes and decisions in the education sector more transparent, as well as have other benefits (by allowing the re-use and ‘re-mixing’ of OERs, teachers and students can themselves help create new teaching and learning materials; civil society groups and private firms can utilize open data to help build new products and services; etc.).
What happens when governments promote the use of open education data and open education resources but, at the same time, refuse to make openly available the algorithms (formulas) that are utilized to draw insights from, and make key decisions based on, these open data and resources?
- Are we in danger of opening up one black box, only to place another, more inscrutable black box inside of it?
It’s a classic conundrum that governments grapple with: Which projects are most beneficial in the long term? How do large, expensive projects affect debt dynamics and macroeconomic stability? While there is a clear need for large infrastructure investment in the developing world, it is often difficult for governments to determine which projects are the most beneficial.
One of the many baffling aspects of the post-2015/Sustainable Development Goal process is how little research there has been on the impact of their predecessors, the Millennium Development Goals. That may sound odd, given how often we hear that ‘the MDGs are on/off track’ on poverty, health, education, etc., but saying ‘the MDG for poverty reduction has been achieved five years ahead of schedule’ is not at all the same as saying ‘the MDGs caused that poverty reduction’ – a classic case of confusing correlation with causation.
So I gave heartfelt thanks when Columbia University’s Elham Seyedsayamdost got in touch after a previous whinge on this topic, and sent me her draft paper for UNDP which, as far as I know, is the first systematic attempt to look at the impact of the MDGs on national government policy. Here’s the abstract, with my commentary in brackets/italics. The full paper is here: MDG Assessment_ES, and Elham would welcome any feedback (es548[at]columbia[dot]edu):
"This study reviews post‐2005 national development strategies of fifty countries from diverse income groups, geographical locations, human development tiers, and ODA (official aid) levels to assess the extent to which national plans have tailored the Millennium Development Goals to their local contexts. Reviewing PRSPs and non‐PRSP national strategies, it presents a mixed picture." [so it’s about plans and policies, rather than what actually happened in terms of implementation, but it’s still way ahead of anything else I’ve seen]
One of the fascinating benefits of working at a place like the World Bank is the exposure it offers to interesting people doing interesting things in interesting places that many other folks know little about. Small countries like Uruguay and Portugal, for example, are beginning to attract the attention of educational reform communities from around the world due to their ambitious plans for the use of educational technologies. Much is happening in other parts of the world as well, of course, especially in many countries of Eastern Europe and Central Asia. The largest stand-alone World Bank education project to date that focused on educational technologies, for example, was the Russia E-Learning Support Project. Macedonia gained renown in many corners as the first 'wireless country', with all of that Balkan country's primary and secondary schools online since the middle of the last decade -- although other countries, like Estonia and the tiny Pacific island nation of Niue, also lay claim to versions of this title. (If you are looking for more information on the Macedonian experience, you can find it here and here [pdf]). Much less well known, however, is the related experience of the small country of Georgia, located at the crossroads of Eastern Europe and Western Asia, where small laptops are being distributed to primary school students and where school leaving exams are now conducted via online computer-adaptive testing.