Recent headlines from places as diverse as Kenya ("6,000 primary schools picked for free laptop project") and California ("Los Angeles plans to give 640,000 students free iPads") are just two announcements among many which highlight the increasing speed and scale at which portable computing devices (laptops, tablets) are being rolled out in school systems all over the world. Based on costs alone -- and the costs can be very large! -- such headlines suggest that discussions of technology use in schools are becoming much more central to educational policies and planning processes in scores of countries, rich and poor, across all continents.
Are these sorts of projects good ideas? It depends. The devil is often in the details (and the cost-benefit analysis), I find. Whether or not they are good ideas, there is no denying that they are occurring, for better and/or for worse, with greater frequency and at greater scale. More practically, then:
What do we know about what works,
and what doesn't (and how, and why?),
when planning for and implementing such projects,
what the related costs and benefits might be,
and where might we look as we try to find answers to such questions?
There are, broadly speaking, two strands of concurrent thinking that dominate discussions around the use of new technologies in education around the world. At one end of the continuum, talk is dominated by words like 'transformation'. The (excellent) National Education Technology Plan of the United States (Transforming American Education: Learning Powered by Technology), for example, calls for "applying the advanced technologies used in our daily personal and professional lives to our entire education system to improve student learning, accelerate and scale up the adoption of effective practices, and use data and information for continuous improvement."
This is, if you will, a largely 'developed' country sort of discourse, where new technologies and approaches are layered upon older approaches and technologies in systems that largely 'work', at least from a global perspective. While the citizens of such countries may talk about a 'crisis' in their education systems (and may indeed have been talking about such a crisis for more than a generation), citizens of many other, much 'less developed' countries would happily switch places.
If you want to see a true crisis in education, come have a look at our schools, they might (and do!) say -- or at least at the remote ones, where a young teacher in an isolated village who has received only a tenth grade education tries to teach 60+ children in a dilapidated, multigrade classroom where books are scarce and many of the students (and even more of their parents) are functionally illiterate.
Like so many things in life, it all depends on your perspective. One country's education crisis situation may be (for better or for worse) another country's aspiration. While talk in some places may be about how new technologies can help transform education, in other places it is about how such tools can help education systems function at a basic level.
The potential uses of information and communication technologies -- ICTs -- are increasingly part of considerations around education planning in both sorts of places. One challenge for educational policymakers and planners in the remote, low income scenario is that most models (and expertise, and research) related to ICT use come from high-income contexts and environments (typically urban, or at least peri-urban). One consequence is that technology-enabled 'solutions' are imported and (sort of) 'made to fit' into more challenging environments. When they don't work, this is taken as 'evidence' that ICT use in education in such places is irrelevant (and some folks go so far as to state that related discussions are irresponsible as a result).
There is, thankfully, some emerging thinking coalescing around various types of principles and approaches that may be useful to help guide the planning and implementation of ICT in education initiatives in such environments. As part of my duties at the World Bank, I have been discussing a set of such principles and approaches with a number of groups recently, and thought I'd share them here, in case they might be of wider interest or utility to anyone else. Are they universally applicable or relevant? Probably not. But the hope is that they might be useful to organizations considering using ICTs in the education sector in very challenging environments -- especially where introducing these principles and approaches into planning discussions may cause such groups to challenge assumptions and conventional wisdom about what 'works', and how best to proceed.
Not a week goes by where I don't receive an unsolicited email from a company touting the benefits of its new 'educational videogame'. Indeed, just last week I opened my inbox to find two separate emails proclaiming how two different mobile gaming apps were destined to "transform learning!!!" Now, in a lot of cases, I must confess that I am not always sure why something is an 'educational game', and not just a 'game' (although if I am in a difficult mood, I might offer that in too many instances an 'educational game' is 'a game that really isn't much fun'). That said, there is no denying that videogames are big business around the world. So -- increasingly -- is education. Even most people who fear the potential negative effects of some (or even most) videogames on young people would, at the same time, acknowledge the promise and potential for videogames to offer enriching learning experiences. The history of the introduction of educational technologies is in many ways long on promise and potential, however, and short on actual evidence of how such technologies impact learning in tangible and fundamental ways.
Much is made of the potential for ICTs to be used to promote more personalized learning experiences through the introduction of various types of ICT-enabled assessment systems. For me, it has long seemed like the most powerful real-time learning assessment engines have been found in videogames, where actions (or inactions) are often met with near instantaneous responses, to which the player is then challenged to respond in turn. This feedback loop -- taking an action, being presented with information as a result, having to synthesize and analyze this information and doing something as a result -- might meet some people's definition of 'learning'. A good videogame engages its users so strongly that they are willing to fail, and fail, and fail again, until they learn enough from this failure that they can proceed with the game. Even where educational software is not explicitly labeled as a 'game', designers are increasingly introducing game-like elements (badges, achievement bonuses, scoring systems) as a way to promote user (learner? player?) engagement as part of a process known as 'gamification'.
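To make the feedback loop described above concrete, here is a toy sketch in Python. All names, rules and thresholds here are invented for illustration; no actual game or product works exactly this way. It shows the cycle of acting, receiving immediate feedback, failing and retrying until the player can proceed, along with one simple 'gamification' element (a badge rewarding persistence):

```python
def attempt(action, solution):
    """Return immediate feedback for a single player action."""
    return "proceed" if action == solution else "try again"

def play_level(guesses, solution):
    """Run the fail-and-retry loop, counting attempts until success.

    The player keeps acting on feedback until one action succeeds --
    the 'willing to fail, and fail, and fail again' dynamic.
    """
    attempts = 0
    for guess in guesses:
        attempts += 1
        if attempt(guess, solution) == "proceed":
            break
    # A hypothetical 'gamification' element: reward persistence
    # through repeated failure with a badge.
    badge = "persistence badge" if attempts > 2 else None
    return attempts, badge

print(play_level(["a", "b", "c"], "c"))  # → (3, 'persistence badge')
print(play_level(["x"], "x"))           # → (1, None)
```

The point of the sketch is only the loop's shape: each action produces an instantaneous response, and the response shapes the next action -- which is what makes such engines resemble real-time learning assessment.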
The use of videogames for educational purposes, or at least in educational contexts, is far from an OECD or U.S. phenomenon. Whether I am visiting a school computer lab after hours in central Russia or an Internet cafe filled with students in Indonesia, or standing behind some schoolgirls carrying phones between classes in Tanzania, 'educational' videogames seem to be nearly everywhere. Past posts on the EduTech blog have profiled things like the use of video games on mobile phones to promote literacy in rural India and EVOKE, an online game for students across Africa which the World Bank helped sponsor a few years ago. When I speak with young software entrepreneurs in Nairobi or Accra or Manila, they often talk excitedly about the latest educational game they are developing (for markets local and distant).
Do educational games 'work'? Are they 'effective'?
And if so: How can they be used in schools?
Questions such as these are of increasing interest to scholars. Given both their potential for learning, and how aggressively videogames are being marketed to many education systems, they should be of increasing interest to educational policymakers as well. Some recent research brings us a little closer to a time when we can answer some of them.
While I have no data to cite here (perhaps this is an idea that could be explored by an enterprising PhD student?), it is my strong suspicion, based on years of observation and work with groups introducing new technologies into education systems and communities in poor and middle income countries, that a 'Matthew Effect in Educational Technology' is observable -- and worth considering.
Just what is a 'Matthew Effect' -- and why should we care about it?
Almost a half-century ago, the sociologist Robert Merton observed (here's the original paper [pdf]) that famous scientists often get more credit for a research finding than a lesser (or un)known scientist does, even where the work of both scientists is very similar. He labeled this phenomenon the 'Matthew Effect', after a verse in the Bible (Book of Matthew, 25:29) which roughly translates as 'the rich get richer'. In the words of the sociologist Daniel Rigney, who wrote a book on the subject:
"Matthew effects tend to confer further advantages on the already-advantaged, other things equal. Of course, other things are never entirely equal. Multiple interacting factors are at play in a complex and connected world. Nonetheless, more than forty years of research findings suggest that Matthew effects are real and potentially powerful determinants of social outcomes in their own right, and especially when they are not countervailed. We simply cannot understand the dynamics of social inequalities in the world today without taking Matthew effects seriously into account."
Following Merton, Keith Stanovich described a Matthew Effect in an educational context, noting that early successes in developing reading skills usually lead to greater successes with reading -- and thus with learning other new skills that build on good reading skills going forward.
It can be deceptively easy to propose a solution to a problem when you don't really understand the problem (especially if you think you do!). The 'failure' of many projects to introduce new technologies in education can, to some degree, be traced back to this simple truism. If you are pointed in the wrong direction, technology can help you move in that direction more quickly. To paraphrase the technologist Bruce Schneier (who was himself paraphrasing someone else): If you think technology can solve your education problems, then you don't understand the problems and you don't understand the technology. The solution lies in process and systems -- and people. Technology can help in all of these areas -- but first we need to make sure we understand what it really is that we need to do.
When I was back in school, and long before I had come across names like Wilbur Schramm or Manuel Castells, I remember learning about the power of new information and communication technologies to help change societies. Even from the (perhaps rather limited) perspective of someone growing up in a prairie state in the American Midwest, whether it was the role of pamphlets in the American Revolution or the more contemporary examples of audiocassettes in the Iranian revolution or photocopiers helping to spread samizdat culture and messages in the countries of the former Soviet Bloc, it was clear that the emergence, adaptation and innovative uses of new 'ICTs' could help committed groups of people upend the existing status quo.
(Whether such 'upending' is a good thing or not depends, I guess, on your perspective, and the specific circumstances and context. Flip through the pages of UNESCO's Community radio handbook, for example, and you may well be inspired, but read a recent paper from a researcher at Harvard about the role of RTLM radio in the Rwandan genocide and you will be chilled to the bone. Technology is a magnifier of human intent and capacity, as my friend Kentaro Toyama likes to say.)
More recently, the events of the 'Arab Spring' have been popularly attributed, in part, to the use of new ICTs and ICT tools like Twitter and SMS. Whether or not one agrees with this attribution (and about this there is much scholarly debate), there is no denying that rhetoric around 'ICTs' and the Arab Spring has increasingly marked and colored discussions about the use of educational technologies in many Arab countries. In announcing a recent report documenting technology use in education in the region, for example, the UNESCO Institute for Statistics (UIS) begins by noting that, "Against the backdrop of the Arab Spring, arguably the most significant ICT-assisted “learning” phenomena of the recent past, data from five countries provide a snapshot of ICT integration in education." It continues:
"Great strides have been made in the last decade to harness the power of Information and Communication Technologies (ICT) to help meet many development challenges, including those related to education. However, evidence shows that some countries in the Arab States continue to lag behind in fully implementing ICT in their education systems.
According to a UIS data analysis, which was based on a data collection process sponsored and conducted by the UNESCO Communication and Information Sector and the Talal Abu-Ghazaleh Organization (TAG-Org), policies for the implementation and use of ICT in primary and secondary education systems have not necessarily translated into practice. This is revealed in the newly released data from five participating countries."
Almost a decade ago, delegates from over 175 countries gathered in Geneva for the first 'World Summit on the Information Society', a two-part conference (the second stage followed two years later in Tunis) sponsored by the United Nations meant to serve as a platform for global discussion about how new information and communication technologies were impacting and changing economic, political and cultural activities and developments around the world. Particular attention was paid to issues related to the so-called 'digital divide' -- the (growing) gap (and thus growing inequality) between groups who were benefiting from the diffusion and use of ICTs, and those who were not. One of the challenges that inhibited discussions at the event was that, while a whole variety of inequalities were readily apparent to pretty much everyone, these inequalities were very difficult to quantify, given that only incomplete data existed with which to describe them. The Partnership on Measuring ICT for Development, an international, multi-stakeholder initiative to improve the availability and quality of ICT data and indicators, was formed as a result, and constituent members of this partnership set out to try to bridge data gaps in a variety of sectors. The UNESCO Institute for Statistics (UIS) took the lead on doing this in the education sector, convening and chairing a Working Group on ICT Statistics in Education (in which the World Bank participates, as part of its SABER-ICT initiative) to help address related challenges. At the start, two basic questions confronted the UIS, the World Bank, the IDB, OECD, ECLAC, UNESCO, KERIS and many other like-minded participating members of the working group (out of whose acronyms a near-complete alphabet could be built):
What type of data should be collected related to ICT use in education?
Not to mention:
What type of data could be collected,
given that so little of it was being rigorously gathered
across the world as a whole,
relevant to rich and poor countries alike,
in ways that permitted comparisons across regions and countries?
Comparing ICT use in education across all countries was quite difficult back then. In 2003/2004, the single most common question related to the use of ICTs in education I was asked when meeting with ministers of education was: What should be our target student:computer ratio? Now, one can certainly argue with the premise that this should have been the most commonly posed question (the answer from many groups and people soon became -- rather famously, in fact -- '1-to-1', i.e. 'one laptop per child'). That said, the fact that we were unable to offer globally comparable data in response to such a seemingly basic question did little to enhance the credibility of those who argued this was, in many ways, the wrong question to be asking. Comparing ICT use in education across all countries remains difficult today -- but in many regards, this task is becoming much easier.
One consistent theme that I hear quite often from policymakers with an interest in, and/or responsibility for, the use of ICTs in their country's education system is that they want to 'learn from the best'. Oftentimes, 'best' is used in ways that are synonymous with 'most advanced', and 'most advanced' essentially is meant to describe places that have 'lots of technology'. Conventional wisdom in many parts of the world holds that, if you want to 'learn from the best', you would do well to look at what is happening in places like the United States, Canada, Australia, the United Kingdom, South Korea and Singapore. (Great internal 'digital divides' of various sorts persist within some of these places, of course, but such inconvenient truths challenge generalizations of these sorts in ways that are, well, inconvenient.) Policymakers 'in the know' broaden their frame of reference a bit, taking in a wider set of countries, like those in Scandinavia, as well as some middle income countries like Malaysia and Uruguay that also have 'lots of technology' in their schools. Whether or not these are indeed the 'best' places to look for salient examples of relevance to the particular contexts at hand in other countries is of course a matter of some debate (and indeed, the concept of 'best' is highly problematic -- although that of 'worst' is perhaps less so). There is no question, however, that these aren't the only countries with lots of ICTs in place (if not always in use) in their education systems.
What do we know about what is happening across Europe
related to the use of ICTs in schools?
Technology use in schools at reasonably large scale began in earnest in many OECD countries in the 1980s and then accelerated greatly in the 1990s, as the Internet and falling hardware prices helped convince education policymakers that the time was right to make large investments in ICTs. In most middle and low income countries, these processes began a little later, and have (until recently) proceeded more slowly. As a result, it was only about ten years ago, as education systems began to adopt and use ICTs in significant amounts (or planned to do so), that efforts to catalog and analyze what was happening in these sets of countries began in earnest. UNESCO-Bangkok's Meta-survey on the Use of Technologies in Education in Asia and the Pacific, published in 2003, was the first notable effort in this regard. A trio of subsequent efforts supported by infoDev (Africa in 2007; the Caribbean in 2009; and South Asia in 2010) helped to map out for the first time what was happening in other regions of the world related to the use of ICTs in education. While the information in such regional reports can rather quickly become dated, given the pace of technological change, they still provide useful points of departure for further inquiry. In some other parts of the world, even less has been published and made available for global audiences about how ICTs are being used in education.
Information about developments in many of the countries of the former Soviet Union, for example, has not, for the most part, been widely disseminated outside the region (nor, indeed, for many within the region as well!). The Moscow-based UNESCO Institute for Information Technologies in Education (IITE) has been perhaps the best 'one-stop shop' for information about ICT use in the region. Recent work by the Asian Development Bank has gone much further, helping to fill in one of the most apparent 'blind spots' in our collective global understanding of how countries are using ICTs to help meet a variety of objectives within their formal education systems. ICT in Education in Central and West Asia [executive summary, PDF] summarizes research conducted over five years (2006-2011) in Azerbaijan, Kazakhstan, the Kyrgyz Republic, Tajikistan, and Uzbekistan, with shorter studies on Afghanistan, Armenia, Georgia, and Pakistan.
Over the past month, the EduTech Debate site has been featuring posts and comments from authors exploring various issues and opportunities presented by the phenomenon of so-called Massive Open Online Courses (MOOCs). While perhaps it hasn't been a 'debate' per se, it has featured responses and reactions from the authors to each other's posts, and I thought I'd quickly highlight the conversation that has been occurring over there, in case you missed it and might find it useful.