As climate change intensifies, catastrophic, record-setting natural disasters look increasingly like the “new normal” – from Hurricane Matthew, which killed at least 1,300 people in September, to Typhoon Lionrock the previous month, whose flooding left 138 dead and more than 100,000 homeless in North Korea.
What steps can we take to limit the destruction caused by natural disasters? One possible answer is using data to improve relief operations.
Let’s look at the aftermath of the April 2015 Gorkha earthquake, the worst to hit Nepal in over 80 years. Nearly 9,000 people were killed, some 22,000 injured, hundreds of thousands were rendered homeless and entire villages were flattened.
Yet for all the destruction, the toll could have been far worse.
Without in any way minimising the horrible disaster that hit Nepal that day, I want to make the case that data — and, in particular, a new type of social responsibility — helped Nepal avoid a worse calamity. It may offer lessons for other disasters around the world.
In the wake of the Nepal disaster, a wide variety of actors – from government, civil society and the private sector alike – rushed in to address the humanitarian crisis. One notable player was Ncell, Nepal’s largest mobile network operator. Shortly after the earthquake, Ncell decided to share its mobile data (in an aggregated, de-identified form) with Flowminder, a Swedish non-profit organisation.
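A toy sketch of what analysis of such data can look like (every name and number below is hypothetical, and this illustrates only the basic aggregate-counting idea, not Flowminder's actual methodology): comparing snapshots of which cell-tower area each anonymised subscriber was last seen in, before and after a disaster, gives a rough picture of where people have moved.

```python
from collections import Counter

# Hypothetical, de-identified records: (anonymised_subscriber_id, tower_area).
pre_quake = [("a1", "Kathmandu"), ("a2", "Kathmandu"), ("a3", "Gorkha")]
post_quake = [("a1", "Kathmandu"), ("a2", "Pokhara"), ("a3", "Pokhara")]

def area_counts(records):
    """Number of distinct subscribers last observed in each tower area."""
    return Counter(area for _, area in records)

def net_displacement(before, after):
    """Net change in subscriber counts per area between two snapshots."""
    b, a = area_counts(before), area_counts(after)
    return {area: a.get(area, 0) - b.get(area, 0) for area in b.keys() | a.keys()}

print(net_displacement(pre_quake, post_quake))
# Areas with positive values gained people after the quake -- a hint about
# where displaced residents (and hence relief supplies) may be headed.
```

Note that only aggregate counts per area ever need to leave the operator; this is what "aggregated, de-identified" sharing means in practice, and it is what makes such collaborations tenable from a privacy standpoint.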
The city of La Paz in Bolivia is piloting a new tool called Barrio Digital—or Digital Neighborhood—to communicate more effectively and efficiently with citizens living in areas that fall within Barrios de Verdad, or PBCV, an urban upgrading program that provides better services and living conditions to people in poor neighborhoods.
The goals of Barrio Digital are to:
Increase citizen participation for evidence-based decision-making,
Reduce the cost of submitting a claim and shorten the amount of time it takes for the municipality to respond, and
Strengthen the technical skills and capacity within the municipality to use ICT tools for citizen engagement.
Around the world, there is no shortage of rhetoric related to the potential for the use of new information and communication technologies (ICTs) to 'transform teaching and learning'. Indeed, related pronouncements often serve as the rallying cry around, and justification for, the purchase of lots of educational technology hardware, software, and related goods and services. Where 'business as usual' is not thought to be working, some governments are increasingly open to considering 'business unusual' -- something that often involves the use of new technologies in some significant manner.
One challenge that many countries face along the way is that their procurement procedures are misaligned with what industry is able to provide, and with how industry is able to provide it. Technology changes quickly, and procurement guidelines originally designed to meet the needs of 20th century schooling (with a focus on school construction, for example, and the procurement of textbooks) may be inadequate when trying to operate in today's fast-changing technology environments. Indeed, in education as in other sectors, technological innovations typically far outpace the ability of policymakers to keep up.
Faced with new, 'innovative' tools and approaches that haven't been tried before at any large scale in their countries' schools, education policymakers may reflexively turn to precedent and 'old' practices to guide their decisions, especially when it comes to procurement. This is usually seen within government ministries as a prudent course of action, given that such an approach is consistent with the status quo, and that related safeguards are (hopefully) in place. As a result, however, they may end up driving forward into the future primarily by looking in the rear-view mirror.
When considering the scope for introducing various types of technology-enabled 'innovations' (however one might like to define that term) into their education systems, many governments face some fundamental challenges:
They don't know exactly what they want.
And even where they do:
They don't have the in-house experience or expertise to determine if what they want is practical, or even feasible, nor do they know what everything should cost.
One common mechanism utilized in many countries is the establishment of a special 'innovation fund', designed to support the exploration of lots of 'new stuff' in the education sector. Such efforts can be quite valuable, and they often end up supporting lots of worthwhile, innovative small scale projects. (The World Bank supports many 'innovation funds' related to the education sector around the world, for what that might be worth, and the EduTech blog exists in part to help document and explore some of what is learned along the way.) There is nothing wrong with small scale, innovative pilot projects, of course. In fact, one can argue that we need many more of them -- or at least more of them with certain characteristics. That said, introducing and making something work at a very small scale is a much different task than exploring how innovations can be implemented at scale across an entire education system.
In such circumstances:
What is a ministry of education to do?
How can it explore innovative approaches to the procurement of 'innovative' large scale educational technology programs in ways that are practical, appropriate, cost-effective, likely to yield good results, informed by research and international 'good practice', and transparent?
Education is a ‘black box’ -- or so a prevailing view among many education policymakers and researchers goes.
For all of the recent explosion in data related to learning -- as a result of standardized tests, etc. -- remarkably little is known at scale about what exactly happens in classrooms around the world, and outside of them, when it comes to learning, and what impact this has.
This isn't to say that we know nothing, of course:
The World Bank (to cite an example from within my own institution) has been using standardized classroom observation techniques to help document what is happening in many classrooms around the world (see, for example, reports based on modified Stallings Method classroom observations across Latin America which seek to identify how much time is actually spent on instruction during school hours; in many cases, the resulting data generated are rather appalling).
Common sense holds various tenets dear when it comes to education, and to learning; many educators profess to know intuitively what works, based on their individual (and hard won) experience, even in the absence of rigorously gathered, statistically significant 'hard' data; the impact of various socioeconomic factors is increasingly acknowledged (even if many policymakers remain impervious to them); and cognitive neuroscience is providing many interesting insights.
But in many important ways, education policymaking and processes of teaching and learning are constrained by the fact that we don't have sufficient, useful, actionable data about what is actually happening with learners at a large scale across an education system -- and what impact this might have. Without data, as Andreas Schleicher likes to say, you are just another person with an opinion. (Of course, with data you might be a person with an ill-considered or poorly argued opinion, but that’s another issue.)
side observation: Echoing many teachers (but, in contrast to teaching professionals, usually with little or no formal teaching experience themselves), I find that many parents and politicians also profess to know intuitively ‘what works’ when it comes to teaching. When it comes to education, most everyone is an ‘expert’, because, well, after all, everyone was at one time a student. While not seeking to denigrate the ‘wisdom of the crowd’, or downplay the value of common sense, I do find it interesting that many leaders profess to have ready prescriptions at hand for what ‘ails education’ in ways that differ markedly from the ways in which they approach making decisions when it comes to healthcare policy, for example, or finance – even though they themselves have also been patients and make spending decisions in their daily lives.
One of the great attractions of educational technologies for many people is their potential to help open up and peer inside this so-called black box. For example:
When teachers talk in front of a class, there are only imperfect records of what transpired (teacher and student notes, memories of participants, what's left on the blackboard -- until that's erased). When lectures are recorded, on the other hand, there is a data trail that can be examined and potentially mined for related insights.
When students are asked to read in their paper textbooks, there is no record of whether the book was actually opened, let alone whether it was opened to the correct page, how long a page was viewed, etc. Not so when using e-readers or reading on the web.
Facts, figures and questions scribbled on a blackboard disappear once the class bell rings; when the same information is entered into, say, Blackboard™ (or any other digital learning management system, for that matter), it can potentially live on forever.
And because these data are, at their essence, just a collection of ones and zeroes, it is easy to share them quickly and widely using the various connected technology devices we increasingly have at our disposal.
A few years ago I worked on a large project where a government was planning to introduce lots of new technologies into classrooms across its education system. Policymakers were not primarily seeking to do this in order to ‘transform teaching and learning’ (although of course the project was marketed this way), but rather so that they could better understand what was actually happening in classrooms. If students were scoring poorly on their national end-of-year assessments, policymakers were wondering: Is this because the quality of instruction was insufficient? Because the learning materials used were inadequate? Or might it be because the teachers never got to that part of the syllabus, and so students were being assessed on things they hadn’t been taught? If technology use was mandated, at least they might get some sense about what material was being covered in schools – and what wasn’t. Or so the thinking went ....
Yes, such digital trails are admittedly incomplete, and can obscure as much as they illuminate, especially if the limitations of such data are poorly understood and data are investigated and analyzed incompletely, poorly, or with bias (or malicious intent). They also carry with them all sorts of very important and thorny considerations related to privacy, security, intellectual property and many other issues.
That said, used well, these additional data points hold out the tantalizing promise of new and/or deeper insights than have so far been possible in 'analogue' classrooms.
But there is another 'black box of education' worth considering.
In many countries, there have been serious and expansive efforts underway to compel governments to make available more ‘open data’ about what is happening in their societies, and to utilize more ‘open educational resources’ for learning – including in schools. Many international donor and aid agencies support related efforts in key ways. The World Bank is a big promoter of many of these so-called ‘open data’ initiatives, for example. UNESCO has long been a big proponent of ‘open educational resources’ (OERs). To some degree, pretty much all international donor agencies are involved in such activities in some way.
There is no doubt that increased ‘openness’ of various sorts can help make many processes and decisions in the education sector more transparent, as well as have other benefits (by allowing the re-use and ‘re-mixing’ of OERs, teachers and students can themselves help create new teaching and learning materials; civil society groups and private firms can utilize open data to help build new products and services; etc.).
What happens when governments promote the use of open education data and open education resources but, at the same time, refuse to make openly available the algorithms (formulas) that are utilized to draw insights from, and make key decisions based on, these open data and resources?
Are we in danger of opening up one black box, only to place another, more inscrutable black box inside of it?
A good number of African governments have shown how technologically forward-thinking they are by announcing one-tablet-per-child initiatives in their countries. President John Mahama recently announced that tablets for Ghana’s schoolchildren were at the center of his campaign to improve academic standards. Last year, President Kenyatta of Kenya abandoned a laptop project for tablets.
Facebook recently announced the public release of unprecedentedly high-resolution population maps for Ghana, Haiti, Malawi, South Africa, and Sri Lanka. These maps were produced jointly by the Facebook Connectivity Lab and the Center for International Earth Science Information Network (CIESIN), and provide data on the distribution of human populations at 30-meter spatial resolution. Facebook conducted this research to inform the development of wireless communication technologies and platforms to bring the Internet to the globally unconnected as part of the internet.org initiative.
Figure 1 conveys the spatial resolution of the Facebook dataset, unmatched in its ability to identify settlements. We are looking at approximately a 1 km2 area covering a rural village in Malawi. Previous efforts to map population would have represented this area with only a single grid cell (LandScan), or 100 cells (WorldPop), but Facebook has achieved the highest level of spatial refinement yet, with 900 cells. The blue areas identify the populated pixels in Facebook’s impressive map of the Warm Heart of Africa.
Facebook’s computer vision approach is a very fast method to produce spatially-explicit country-wide population estimates. Using their method, Facebook successfully generated at-scale, high-resolution insights on the distribution of buildings, unmatched by any other remote sensing effort to date. These maps demonstrate the value of artificial intelligence for filling data gaps and creating new datasets, and they could provide a promising complement to household surveys and censuses.
Beginning in March 2016, we started collaborating with Facebook to assess the precision of the maps and explore their potential uses in development efforts. Here, we describe the analyses undertaken to date by the Living Standards Measurement Study (LSMS) team at the World Bank to compare the high-resolution population projections against the ground truth data. Among the countries that were part of the initial release, Malawi was of particular interest for the validation exercise given the range of data at our disposal.
One of the early, decidedly modest goals for this event was simply to bring together key decisionmakers from across Asia (and a few other parts of the world -- it would become more global with each passing year) in an attempt to help figure out what was actually going on with technology use in education in a cross-section of middle and low income countries, and to help policymakers make personal, working level connections with leading practitioners -- and with each other. Many countries were announcing ambitious new technology-related education initiatives, but it was often difficult to separate hope from hype, as well as to figure out how lofty policy pronouncements might actually translate to things happening at the level of teachers and learners 'on-the-ground'.
As the first country to move from being a recipient of World Bank donor assistance to become a full-fledged donor itself, Korea presented in many ways an ideal host for the event. (Still is!) The Korean story of economic development over the past half century has been the envy of policymakers in many other places, who see in that country's recent past many similarities to their own current situations. Known for its technological prowess (home to Samsung and many other high-tech companies) and famous in education circles for the performance of its students on international assessments like PISA, Korea sits squarely at the intersection of the two components of a Venn diagram of 'Brand Korea' where educational technology issues are found.
Since that first global symposium, over 1,400 policymakers from (at least by my quick count) 65 countries have travelled to Korea for the annual event to see and learn first-hand from Korean experiences with the use of information and communication technologies (ICTs) in education, to be exposed to some of the latest related research around the world, and to share information with each other about what was working -- and what wasn't -- and what might be worth trying in the future (and what to avoid). Along the way, Korea has come to be seen as a global hub for related information and knowledge, and KERIS itself is increasingly regarded by many countries as a useful organizational model to help guide their own efforts to implement large-scale educational technology initiatives.
While international events bringing together policymakers to discuss policy issues related to the use of new technologies in education are increasingly common these days, across Asia and around the world, back in 2007 the Global Symposium on ICT Use in Education represented the first regularly scheduled annual event of its type (at least to my knowledge; there were many one-off regional events, of course, many of the good ones organized by UNESCO) bringing together policymakers from highly developed, middle and low income countries.
Participating in the event for each of the past ten years has offered me a front-row seat to observe how comparative policy discussions have evolved over the past decade in a way that is, I think, somewhat unique. What follows is a quick attempt to describe some of what has changed over the years. (The indefatigable Jongwon Seo at KERIS is, I think, the only other person to have participated in all ten global symposia. As such, he is a sort of spiritual co-author of these reflections -- or at least the ones which may offer any useful insights. I'm solely responsible for any of the banal, boring or inaccurate comments that follow.)
It is conventional wisdom in many quarters -- indeed, for some people it approaches the level of 'incontrovertible fact' -- that young people are 'digital natives', possessed of some sort of innate ability to understand and utilize digital devices and applications merely because of their youth, because they have 'grown up surrounded by technology', in ways that older folks can't -- and perhaps never will. Anecdotes from amazed and proud parents and grandparents detailing how adept little Johnny (or Gianni, or Krishna, or Yidan, or Fatima, or Omar, or Maria) is at manipulating his (or her) parents' mobile phone or tablet "even though s/he doesn't even know how to read yet!" are commonly heard in conversations around the world.
In a very influential essay that appeared about 15 years ago ("Digital Natives, Digital Immigrants" [pdf]), Mark Prensky coined the term 'digital natives', asserting that "students today are all “native speakers” of the digital language of computers, video games and the Internet" and that, as a result, "today's students think and process information fundamentally differently from their predecessors". In contrast, "[t]hose of us who were not born into the digital world but have, at some later point in our lives, become fascinated by and adopted many or most aspects of the new technology are, and always will be compared to them, Digital Immigrants." While Prensky's views on this topic have evolved over the years and become more nuanced (those interested in his particular views may wish to visit his web site), this original definition and delineation of what it means to be a digital native and a digital immigrant remains quite potent for many people.
At the same time, and for over a decade, this assertion has come under consistent challenge and criticism from many academics, who contest various aspects of the 'digital natives myth', as well as the policy and design implications that often flow from them. The observable differences at the heart of the digital native narrative relate more to culture, or to geography, to socio-economic status or even just to personal preferences than they do to age, critics argue. No doubt some of these folks may glance at this post and ask: 'Digital natives', haven't we moved on from that stuff? When it comes to related academic discourse, the answer to this question is probably a qualified 'yes, at least in some circles'.
That said, in my experience, the digital natives hypothesis remains alive and well in many educational policymaking circles (as it does with many parents -- and grandparents, and marketers, and with many kids themselves), especially in places around the world that are just now beginning to roll-out or consider the use of educational technologies at a wide scale. Indeed, while meeting with education ministries on three different continents over the course of the last month, I've had very senior education officials in three different governments explain to me how the concept of 'digital natives' was central for their vision for education going forward. These recent conversations -- and many others -- prompted me to write this quick blog post (as well as one that will follow).
New developments and curiosities from a changing global media landscape: People, Spaces, Deliberation brings trends and events to your attention that illustrate that tomorrow's media environment will look very different from today's, and will have little resemblance to yesterday's.
“I believe television is going to be the test of the modern world, and that in this new opportunity to see beyond the range of our vision, we shall discover a new and unbearable disturbance of the modern peace, or a saving radiance in the sky. We shall stand or fall by television - of that I am quite sure.” E.B. White
Television has an enormous influence on people, bringing news and entertainment to communities all over the world. To recognize the impact of television, the United Nations General Assembly proclaimed 21 November as World Television Day in 1996. On Monday, 21 November 2016, United Nations TV will host an open day at its studios for talks and interactive dialogues on its programming in observance of the day.
In a fast-changing global media environment, with modern information and communication technologies (ICTs) such as computers, the Internet, mobile phones, tablets and wearables on the rise, television continues to be a resilient communication tool. However, the television industry needs to adapt to the changing landscape in order to remain relevant. One of the most dramatic changes in this industry is the growth in the number of connected TV sets worldwide. Internet-connected TVs provide interactive features such as online browsing, video-on-demand, video streaming and social networking. With the mixture of new and old viewing habits, connected TVs are drawing larger audiences.
According to Digital TV Research, the number of connected TVs worldwide will reach a new high of 759 million by 2018, more than double the 2013 figure (307.4 million).
In a new paper published by Global Partners Digital, Stefaan G. Verhulst argues: “In recent years, multistakeholderism has become something of a catchphrase in discussions of Internet governance. This follows decades of attempts to identify a system of governance that would be sufficiently flexible, yet at the same time effective enough to manage the decentralized, non-hierarchical global network that is today used by more than 3 billion people. In the early years of the Internet, the prevailing view was that government should stay out of governance; market forces and self-regulation, it was believed, would suffice to create order and enforce standards of behavior. This view was memorably captured by John Perry Barlow’s 1996 “A Declaration of the Independence of Cyberspace,” which dramatically announced: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather”.
However, the shortcomings of this view have become apparent as the Internet has grown in scale and complexity, and as it has increasingly entered the course of everyday life. There is now a growing sense—perhaps even an emerging consensus—that markets and self-policing cannot address some of the important challenges confronting the Internet, including the need to protect privacy, ensure security, and limit fragmentation on a diverse and multi-faceted network. As the number of users has grown, so have calls for the protection of important public and consumer interests.