Comparing ICT use in education across countries

At a fundamental level, attempts to answer many of the pressing policy questions we have about the use of ICTs in educational settings around the world -- and the impact of such use -- are complicated by the fact that we still do not have reliable, globally comparable data in this area.  As hard as it may be to believe -- especially given the large investments being made in this area and the increasing strategic importance of this topic in many countries -- many basic questions about the use of technology in schools around the world remain largely unanswered.  Such questions include:

  • How many schools are connected to the Internet (and what is the quality of that connection)?
  • How many teachers have been trained to use ICTs?
  • How many schools have access to sufficient reliable power?
  • How many computers are being used for learning purposes in schools?
  • In what subjects are computers meant to be used, and to what extent?

 
This is about to change.

In Montevideo (Uruguay) this week, the UNESCO Institute for Statistics (UIS) convened representatives of national statistical bodies from around the world to review the results of a 25-country pilot project exploring just what types of national-level data related to ICT use in education can be reliably collected.

Recent World Bank technical assistance related to ICT use in education has highlighted the fact that internationally comparable data related to ICT use in education do not exist -- and that this absence is a problem.

The full set of indicators proposed, considered and tested as part of the UIS-led pilot process is available in a very useful Guide to Measuring Information and Communication Technologies (ICT) in Education that was released by UIS on Wednesday. These indicators include both a 'core' set that will make its way into regular global statistical collection processes and additional 'extended' indicators that countries may choose to collect, based on their specific needs.

Building on the consensus reached at this meeting, it is expected that cross-national data related to ICT use in education will begin to be collected in late 2010 as part of the general statistical gathering that UIS coordinates with all countries in the world.

No doubt there will be criticism of just what data are being collected -- and what are not.  It is of course *much* easier to criticize individual indicators and related questionnaire items (or the absence of such indicators and items) than to suggest viable options in their place.  One participant voiced the (widely shared) opinion that "It is a mistake to separate out technology infrastructure from pedagogical practices."  Fair enough.  But getting data on the former is (relatively) easy; getting data on the latter is quite difficult. The UIS-led process should be seen as a very important first step toward truly global data collection on this topic, one that will no doubt be refined and expanded in the coming years.

Achieving consensus around a proposed data collection effort of this magnitude is a herculean task.  Discussions around definitions (and translations of those definitions) of individual indicators and questionnaire items, the viability and cost of data collection, the potential utility of such data for decision-making, etc. were fundamental to the Montevideo meetings, and indeed to the five-year global consultative process that UIS has led in this area.

Participation in a workshop for statisticians of course brings with it its own peculiar joys. I found the (often rather technical) discussions this week of specific proposed questionnaire items to be particularly insightful, as they highlighted both the diversity of individual country contexts and the greatly varied institutional (reporting & operational) arrangements.

Here's just one example:

Many countries recommend an average number of hours per week for the delivery of classes using ICT.  At first glance, it might appear that the more hours of computer use a country recommends, the more 'advanced' that country is in its use of ICTs in schools.  In fact, the opposite is often the case.  In countries considered 'advanced' in ICT use, especially in 1-to-1 computing environments (like Uruguay, for example), laptops are (essentially) always available, but their use is not officially prescribed or recommended for a specific period of time.  Rather, it is left to teachers to decide what is useful and appropriate, and what is not.  In Malaysia, another middle-income country seen as a leader in the use of technology in education, computers are meant to be used by teachers of mathematics, science and English during every class period, but this use typically happens for only about 20% of the class time. In countries just embarking on wide-scale use of computers in schools, strong recommendations are often made about the specific number of minutes that computers are meant to be used each week. Such recommendations are meant to help with the integration of ICTs into the normal teaching and learning process.  The result, in many cases, is that less developed countries where ICT use is relatively new may well report more recommended ICT use than more 'advanced' countries where ICTs are mainstreamed in education.

(Just to complicate things further: In many countries, both rich and poor, where ICTs are used for educational purposes, this use predominantly happens *outside* of school!)

This does not mean that data in this area should not be collected.  Rather, it highlights the fact that simple conclusions drawn from such data can be quite dangerous.  No doubt some enterprising professor somewhere will attempt to build a global comparative 'index' of ICT use in schools in countries around the world based on the UIS data, with some countries ranked 'high' and others 'low'.  In many areas, such lists and rankings are quite popular, and can appear at first glance to provide valuable insights in very simple, easy-to-understand ways.  (Think of the rankings of universities published by Shanghai Jiao Tong University globally, or by U.S. News & World Report in the United States -- or even, in another sector, the popular Doing Business rankings published by the World Bank.) That said, building a universal index related to ICT use in education is especially problematic, given the number of assumptions and value judgments that would need to be made about the importance or weight of individual indicators -- and the fact that cross-national data collection in this area is still in its infancy.  Let's hope this impulse can be avoided.
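To make the weighting concern concrete, here is a minimal sketch in Python. The countries, indicator values and weights below are entirely invented for illustration; the point is simply that two equally defensible weighting schemes can flip a composite ranking.

```python
# Hypothetical illustration: composite 'ICT in education' rankings are
# sensitive to the weights assigned to individual indicators.
# All data below are invented for this sketch.

indicators = {  # (connectivity, trained_teachers, electricity), each on a 0-1 scale
    "Country A": (0.90, 0.30, 0.95),  # well connected, fewer trained teachers
    "Country B": (0.40, 0.85, 0.60),  # less connected, strong teacher training
}

def composite_score(values, weights):
    """Weighted average of normalized indicator values."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Two plausible value judgments about what matters most:
weightings = {
    "infrastructure-heavy": (0.6, 0.2, 0.2),
    "pedagogy-heavy":       (0.2, 0.6, 0.2),
}

for label, weights in weightings.items():
    ranking = sorted(indicators,
                     key=lambda c: composite_score(indicators[c], weights),
                     reverse=True)
    print(f"{label}: {ranking}")

# Output:
#   infrastructure-heavy: ['Country A', 'Country B']
#   pedagogy-heavy:       ['Country B', 'Country A']
```

Same underlying data, opposite rankings -- which is exactly why a universal index built on these indicators would embed value judgments that the data themselves cannot settle.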

National data collection systems are typically slow to adapt or change, *but* the fast-changing nature of technology requires regular adaptation and change.  How to capture this and remain relevant -- especially when educational and societal contexts are changing at the same time -- was a question much discussed during the week.  The explosion of mobile phone use in many countries (to cite just one example) raises questions about the usefulness of only collecting data around the use of 'computers' (and indeed about what a 'computer' is in an educational context).  There are no easy answers to such questions.  If the past is any guide, we will no doubt need to keep re-orienting ourselves to make sense of the data we currently have (while highlighting the data we still don't have).  As we do so, the fact that the UIS will be collecting basic data on where things stand today in all countries in the world will greatly contribute to our collective ability to track developments and changes in this increasingly vital and strategic area of investment for governments and societies around the world.


Authors

Michael Trucano

Visiting Fellow, Brookings, and Global Lead for Innovation in Education, World Bank
