Open data, closed algorithms, and the Black Box of Education


[Image: a darkened classroom at the Hogwarts School of Witchcraft and Wizardry -- "hey, what's going on in there?"]
Education is a ‘black box’ -- or so goes a prevailing view among many education policymakers and researchers.

For all the recent explosion in data related to learning -- as a result of standardized tests and the like -- remarkably little is known at scale about what exactly happens in classrooms around the world (and outside of them) when it comes to learning, and what impact this has.

This isn't to say that we know nothing, of course:

The World Bank (to cite an example from within my own institution) has been using standardized classroom observation techniques to help document what is happening in many classrooms around the world (see, for example, reports based on modified Stallings Method classroom observations across Latin America, which seek to identify how much time is actually spent on instruction during school hours; in many cases, the resulting data are rather appalling).

Common sense holds various tenets dear when it comes to education, and to learning; many educators profess to know intuitively what works, based on their individual (and hard won) experience, even in the absence of rigorously gathered, statistically significant 'hard' data; the impact of various socioeconomic factors is increasingly acknowledged (even if many policymakers remain impervious to them); and cognitive neuroscience is providing many interesting insights.

But in many important ways, education policymaking and processes of teaching and learning are constrained by the fact that we don't have sufficient, useful, actionable data about what is actually happening with learners at a large scale across an education system -- and what impact this might have. Without data, as Andreas Schleicher likes to say, you are just another person with an opinion. (Of course, with data you might be a person with an ill-considered or poorly argued opinion, but that’s another issue.)
 
side observation: Echoing many teachers (but, in contrast to teaching professionals, usually with little or no formal teaching experience themselves), I find that many parents and politicians also profess to know intuitively ‘what works’ when it comes to teaching. When it comes to education, most everyone is an ‘expert’, because, well, after all, everyone was at one time a student. While not seeking to denigrate the ‘wisdom of the crowd’, or downplay the value of common sense, I do find it interesting that many leaders profess to have ready prescriptions at hand for what ‘ails education’ in ways that differ markedly from the ways in which they approach making decisions when it comes to healthcare policy, for example, or finance – even though they themselves have also been patients and make spending decisions in their daily lives.

One of the great attractions of educational technologies for many people is their potential to help open up and peer inside this so-called black box. For example:
  • When teachers talk in front of a class, there are only imperfect records of what transpired (teacher and student notes, memories of participants, what's left on the blackboard -- until that's erased). When lectures are recorded, on the other hand, there is a data trail that can be examined and potentially mined for related insights.
  • When students are asked to read in their paper textbook, there is no record of whether the book was actually opened, let alone whether it was opened to the correct page, how long a page was viewed, etc. Not so when using e-readers or reading on the web.
  • Facts, figures and questions scribbled on the blackboard disappear once the class bell rings; when this information is entered into, say, Blackboard™ (or any other digital learning management system, for that matter), it can potentially live on forever.
And because these data are, at their essence, just a collection of ones and zeroes, it is easy to share them quickly and widely using the various connected technology devices we increasingly have at our disposal.
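To make this concrete, here is a minimal sketch of the kind of event an e-reader or learning platform might log each time a student views a page. It is illustrative only -- the field names and structure are invented for this post, not drawn from any particular vendor's schema.

```python
# A minimal sketch of a 'data trail' record from a hypothetical
# digital reading platform; all field names are invented.
import json
import time

def log_page_view(student_id: str, book_id: str, page: int,
                  seconds_on_page: float) -> str:
    """Serialize one page-view event as JSON for later analysis."""
    event = {
        "student_id": student_id,
        "book_id": book_id,
        "page": page,
        "seconds_on_page": seconds_on_page,
        "logged_at": time.time(),  # when the event was recorded
    }
    return json.dumps(event)

# Example: a student spends 42 seconds on page 17 of a textbook.
print(log_page_view("s-001", "algebra-1", 17, 42.0))
```

Multiply records like this across every student, page and school day, and both the analytical promise and the privacy questions discussed below come quickly into focus.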
 
A few years ago I worked on a large project where a government was planning to introduce lots of new technologies into classrooms across its education system. Policymakers were not primarily seeking to do this in order to ‘transform teaching and learning’ (although of course the project was marketed this way), but rather so that they could better understand what was actually happening in classrooms. If students were scoring poorly on their national end-of-year assessments, policymakers were wondering: Is this because the quality of instruction was insufficient? Because the learning materials used were inadequate? Or might it be because the teachers never got to that part of the syllabus, and so students were being assessed on things they hadn’t been taught? If technology use was mandated, at least they might get some sense about what material was being covered in schools – and what wasn’t. Or so the thinking went ....

Yes, such digital trails are admittedly incomplete, and can obscure as much as they illuminate, especially if the limitations of such data are poorly understood and data are investigated and analyzed incompletely, poorly, or with bias (or malicious intent). They also carry with them all sorts of very important and thorny considerations related to privacy, security, intellectual property and many other issues.

That said, used well, these additional data points hold out the tantalizing promise of new and/or deeper insights than have so far been possible within 'analogue' classrooms.

But there is another 'black box of education' worth considering.

In many countries, serious and expansive efforts have been underway to compel governments to make available more ‘open data’ about what is happening in their societies, and to utilize more ‘open educational resources’ for learning -- including in schools. Many international donor and aid agencies support related efforts in key ways. The World Bank is a big promoter of many of these so-called ‘open data’ initiatives, for example. UNESCO has long been a big proponent of ‘open educational resources’ (OERs). To some degree, pretty much all international donor agencies are involved in such activities in some way.

There is no doubt that increased ‘openness’ of various sorts can help make many processes and decisions in the education sector more transparent, as well as have other benefits (by allowing the re-use and ‘re-mixing’ of OERs, teachers and students can themselves help create new teaching and learning materials; civil society groups and private firms can utilize open data to help build new products and services; etc.).

That said:
  • What happens when governments promote the use of open education data and open education resources but, at the same time, refuse to make openly available the algorithms (formulas) that are utilized to draw insights from, and make key decisions based on, these open data and resources?
     
  • Are we in danger of opening up one black box, only to place another, more inscrutable black box inside of it?

Not sure what an ‘algorithm’ is? Think of it as a recipe that provides step-by-step instructions for how to do something, i.e. taking a bunch of ingredients and transforming them into a (metaphorical) meal of some sort. In our context here, we are talking about the formulas that are used to manipulate data to some end. To take an example from education: In computer adaptive testing, one algorithm could be used to determine which questions are presented to a student, based on her answers to prior questions, and another algorithm could be used to help provide a 'score' at the end of the test. Some countries are finding it difficult to use adaptive tests for university entrance, because such tests aren't auditable in the same way as exams where every student takes the same test and answers to individual questions can be easily audited and analyzed. When students take ‘different’ tests, however, how are we to know that those tests are, on some fundamental level and in a manner that is fair, ‘equivalent’?
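For readers who would like to see such a 'recipe' written out, below is a deliberately toy sketch of the item-selection step in a computer adaptive test. Real engines rest on psychometric models (more on this below); this version simply nudges a running ability estimate up or down after each answer and serves the closest-difficulty question next. The item bank, step size and stopping rule are all invented for illustration.

```python
# A toy caricature of computer adaptive testing: pick the question
# closest in difficulty to the current ability estimate, then adjust
# the estimate based on whether the answer was correct.

def next_question(items, ability):
    """Pick the unanswered item whose difficulty is closest to the
    current ability estimate."""
    return min(items, key=lambda item: abs(item["difficulty"] - ability))

def run_test(items, answer_fn, ability=0.0, step=0.5, n_questions=5):
    items = list(items)  # copy so we can remove served items
    for _ in range(min(n_questions, len(items))):
        item = next_question(items, ability)
        items.remove(item)
        # Harder follow-up after a right answer, easier after a wrong one.
        ability += step if answer_fn(item) else -step
    return ability  # the basis for the final 'score'

# Example: a small item bank, and a student who answers correctly
# whenever an item's difficulty is below 1.0.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([-2, -1, 0, 1, 2])]
print(run_test(bank, lambda item: item["difficulty"] < 1.0))
```

Even in this toy version, the auditing problem is visible: two students can finish having seen entirely different questions.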

As more administrative data become available about students, and as more is known about how students spend their time in schools, there are opportunities to target various services and approaches to better meet the needs of individual learners. For many folks, this is the ideal of more ‘personalized learning’, which on its face sounds like a wonderful thing. And, where/when it ‘works’, no doubt it is. More open data can help make available more ‘big data’, which can then be mined for all sorts of useful insights about teachers and learners. Exciting stuff, to be sure! (‘Utopia is just around the corner’, one vendor of a ‘cutting edge big data for education learning analytics solution’ prophesied to me last year.)

But:
  • As we move increasingly into an era where big data and learning analytics inform decision making in education, what new issues might arise related to transparency and bias?
  • As certain types of data become more and more 'open' (i.e. publicly available for download, use and analysis), and as tools to enable related analyses proliferate and become less and less expensive, an argument can be made that this will help to boost transparency around various decisions and decision making processes -- and that this boost will be, on the whole, something positive. But how about the algorithms that are utilized along the way to help make sense of these data? How should we be thinking about them?
  • If you make datasets of exam results available as open data, or even just share them semi-publicly with key stakeholder groups (students, parents, teachers, schools), do you publish the underlying algorithms used as well?
[Image: traffic sign -- "it's all Greek to me"]
There is no shortage of hype and excitement (and even some action) in some education systems related to the use of open educational resources, but comparatively little attention is paid to the potential use and utility of open educational assessment engines. As OER efforts leave the impression among some policymakers that ‘content should be free’ (i.e. available at no cost), many publishers are increasingly looking to generate revenue and profits by expanding into the assessment and testing business, which in many cases utilizes, and indeed seeks competitive advantage through the use of, proprietary algorithms. The hope (at least for the firms pursuing this strategy) is that this ‘pivot’ offers the opportunity to make up for some of the income lost as a result of OER use -- especially where proprietary testing platforms help increase the likelihood of ‘vendor lock-in’ (and thus greater profits).

This is not to say that there aren't significant biases built into all sorts of traditional and conventional education-related practices and analyses, whether the data at hand are ‘open’ or ‘closed’. There certainly are. But in many places around the world, there is rich experience and improving expertise in identifying such biases, and so some understanding can, at some level and to some extent, be achieved. (Whether anything can or will be done as a result of this understanding is something else entirely, and is often more a consequence of politics, or prejudice, or money, than of reasoned, science-based discourse.)

Within the education sector, discussion of this sort of thing has traditionally been most prominent when it comes to testing (or assessment), and indeed, within ministries of education, it is the assessment folks who are most clued into e.g. the psychometric models (and the algorithms that flow from them) that underlie given test offerings. Where new technologies enable the collection and analysis of new, large data sets across an education system, it is often the ‘technology folks’ constructing the algorithms who are most involved in related discussions about openness and transparency -- to the extent that such discussions are happening at all.
  • If algorithms are used to assign a student a score on a test, as a way to assess her understanding of a given topic or concept, and the student next to her receives a different test, how are we to compare the related grades that they receive? 
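Psychometricians typically answer this question with item response theory: if every item's difficulty has been calibrated in advance, each student's ability can be estimated on one common scale, whatever mix of items she happened to see. Below is a hedged sketch of that idea using the simple Rasch model and a crude grid-search estimate; the item difficulties and response patterns are made up.

```python
# Sketch: comparing students who took 'different' tests by estimating
# ability on a common scale under the Rasch model (item response theory).
import math

def rasch_p(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses):
    """Maximum-likelihood ability estimate via a simple grid search.
    `responses` is a list of (item_difficulty, answered_correctly)."""
    def log_lik(theta):
        return sum(math.log(rasch_p(theta, d) if correct
                            else 1.0 - rasch_p(theta, d))
                   for d, correct in responses)
    grid = [x / 10.0 for x in range(-40, 41)]  # candidate abilities
    return max(grid, key=log_lik)

# Two students, two different item sets -- but one comparable scale,
# *provided* the item difficulties were calibrated correctly.
student_a = [(-1.0, True), (0.0, True), (1.0, False)]
student_b = [(-0.5, True), (0.5, True), (1.5, False)]
print(estimate_ability(student_a), estimate_ability(student_b))
```

The fairness question then shifts to the calibration itself -- who did it, on what population, and can it be audited?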
Thinking a bit more expansively:
  • If algorithms are used, for example, to predict, and thus in many ways determine, the optimal educational path for your child (what courses she takes, what school she should attend, what teachers and classmates she should have, etc.), how can we understand them, let alone attempt to understand if they are fair or accurate (whatever those terms might mean)?
The point here is not to argue for or against any of these practices. Rather, it is to note that these sorts of things will in all probability increasingly occur.

Tim O’Reilly has opined that the great question of the 21st century will be
  • Whose black box do you trust?
Increasingly, this will be a relevant question for education policymakers to consider -- let alone students, parents, teachers and other groups involved in or impacted by what happens in our schools. 

And once this question is answered, or in the course of trying to answer it, an inevitable follow-up might be:
  • How will we know?
Algorithms aren't biased themselves (they are just a bunch of numbers and weird symbols and perhaps arcane computer code, after all), but they are constructed by humans, and so may well reflect the biases of their creators. In addition, as things like machine learning and neural networks are used to construct and deploy algorithms to help analyze various data sets and make related recommendations and decisions, some of the algorithms that result may be increasingly difficult for anyone to truly understand (making them potential "weapons of math destruction", to adopt the clever coinage of mathematician and data scientist Cathy O’Neil).
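The transparency gap is easy to illustrate with a toy comparison: a published linear scoring rule whose every weight can be inspected and recomputed, versus a proprietary model that exposes only its output. The feature names and weights below are invented for the sake of the example.

```python
# An auditable 'open' scoring rule versus an opaque 'closed' one.
# All feature names and weights are hypothetical.

PUBLISHED_WEIGHTS = {            # open: anyone can recompute the score
    "homework_completion": 0.4,
    "quiz_average": 0.5,
    "attendance_rate": 0.1,
}

def open_score(features: dict) -> float:
    """Auditable: the contribution of every input is visible."""
    return sum(PUBLISHED_WEIGHTS[name] * value
               for name, value in features.items())

def closed_score(features: dict) -> float:
    """Stand-in for a proprietary model: callers see only the output."""
    ...  # weights, inputs used, and training data all undisclosed

student = {"homework_completion": 0.9, "quiz_average": 0.8,
           "attendance_rate": 0.95}
print(open_score(student))  # 0.855 -- and we can see exactly why
```

With the open rule, a parent who disputes a score can trace it input by input; with the closed one, there is nothing to trace.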
  • What is the net impact on transparency within an education system when we advocate for open data but then analyze these data (and make related decisions) with the aid of 'closed' algorithms?
  • To what extent do education systems (policymakers, educators) understand the algorithms that power the technology tools that vendors are selling them?
  • Where the primary rationale for more ‘openness’ is related to the potential for cost savings, to what extent should we also be concerned about new costs that are introduced as a result of proprietary tools, platforms and analyses that are enabled by the proliferation of more ‘open’ education data and educational resources?
  • How can policymakers and government leaders engage in public discussions around fairness and transparency in education where the use of increasingly complex algorithms makes the audit trail generated in the course of making various decisions increasingly complex and convoluted?   
Conversations related to ‘algorithmic transparency’ are increasingly common in public policy debates in many sectors, like health. When I raise related sets of questions with education folks interested in exploring frontier uses of new technologies in many of the places around the world where I work, however, I am often met with polite but blank stares, and in some instances protestations that 'education is different'. And no doubt it is. But this doesn't mean that some of the fundamental questions that we ask need be.
 

Note: The image used at the top of this blog post of a darkened classroom at the Hogwarts School of Witchcraft and Wizardry ("hey, what's going on in there?") comes from the Wikipedian Freddo via Wikimedia Commons and is used according to the terms of its Creative Commons Attribution-Share Alike 4.0 International license. The image of the traffic sign used later in the blog post was originally uploaded to Flickr by aprillynn77; it comes via Wikimedia Commons and is used according to the terms of its Creative Commons Attribution 2.0 Generic license.
 
 

Authors

Michael Trucano

Global Lead for Innovation in Education, Sr. Education & Technology Policy Specialist

Comments

Nick Kind
November 22, 2016

Mike, as you know this is a hobby horse of mine. IMHO you are asking all the right questions. One potential and partial solution to think about is what might be thought of as the "mobile device method". Let us assume that we can agree that students (or their parents for minors) own the data they create (this is not straightforward in itself as many vendors will claim they own the data that is analysed by their systems - there may be a way forward here in identifying "primary" and "secondary" data, "secondary" data being that produced after analysis and algorithmic processing). Systems can then give students visibility, or ideally choice, about what algorithms are applied to their data. A mobile device operating system asks you "app x wants to use your location. Is this OK?". We can do the same in education. We can alert students to a) which data is going to be processed (for example, the results of their end-of-unit tests, their age, their gender) and b) what results are going to be given to whom (recommendations for them to learn what they have not mastered, indications of individual and class-level weaknesses to teachers, aggregated test results to government). At the very least, this might encourage people to think about how their data is being processed and force some degree of transparency on all those analysing educational data.
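To make the idea concrete, here is a rough sketch of what such a prompt might look like in code; all names and wording are hypothetical.

```python
# A rough sketch of the 'mobile device method' for educational data:
# state what will be processed and who receives the result, then ask.
# All names and wording here are hypothetical.

def request_processing_consent(student: str, data_used: list,
                               result: str, recipient: str) -> bool:
    """Mimic the 'app x wants to use your location. Is this OK?' prompt."""
    prompt = (f"{recipient} wants to compute '{result}' for {student} "
              f"using: {', '.join(data_used)}. Allow? [y/n] ")
    return input(prompt).strip().lower() == "y"

# Example: consent before an analytics engine processes test results.
if request_processing_consent(
        student="s-001",
        data_used=["end-of-unit test results", "age", "gender"],
        result="recommendations for unmastered topics",
        recipient="the learning analytics engine"):
    print("Consent granted; processing may proceed.")
else:
    print("Consent withheld; data stays unprocessed.")
```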

Michael Trucano
November 22, 2016

Thanks for your comment, Nick.

For those who might not be familiar with it: Nick publishes an excellent blog that touches on many of the sorts of topics raised in the post above (to wit: https://goo.gl/fccrf9).