Few would argue against the notion that the One Laptop Per Child project (OLPC, originally referred to by many as the '$100 laptop project') has been the most high-profile educational technology initiative for developing countries over the past half-decade or so. It has garnered more media attention, and incited more passions (pro and con), than any other program of its kind. What was 'new' when OLPC was announced back in 2005 has become part of mainstream discussions in many places today (although it is perhaps interesting to note that, to some extent, the media attention around the Khan Academy is crowding into the space in the popular consciousness that OLPC used to occupy), and debates around its model have animated policymakers, educators, academics, and the general public in a way that perhaps no other educational technology initiative has ever done. Given that there is no shortage of places to find information and debate about OLPC, this blog has discussed it only a few times, usually in the context of talking about Plan Ceibal in Uruguay, where the small green-and-white OLPC XO laptops are potent symbols of the ambitious program that has made that small South American country a destination for many around the world seeking insight into how to roll out so-called 1-to-1 computing initiatives in schools very quickly, and to see what the results of such ambition might be.
The largest OLPC program to date, however, has not been in Uruguay, but rather in Peru, and many OLPC supporters have argued that the true test of the OLPC approach is to be found there, given the Peruvian program's greater fealty to the underlying pedagogical philosophies at the heart of OLPC and its focus on rural, less advantaged communities. Close to a million laptops are meant to have been distributed to students there to date (902,000 is the commonly reported figure, although I am not sure if this includes the tens of thousands of laptops that were destroyed in the recent fire at a Ministry of Education warehouse). What do we know about the impact of this ambitious program?
Last month the Inter-American Development Bank (IDB) released a long-awaited working paper detailing findings from its evaluation of the OLPC program in Peru. While OLPC has been the subject of much research interest (some of decent quality, some decidedly less so; the OLPC wiki maintains a very useful list of this research), Technology and Child Development: Evidence from the One Laptop per Child Program in Peru is meant to be the first large-scale evaluation of the program's impact using randomized control trials (considered by many in the evaluation community to be the 'gold standard' for this sort of thing).
In a blog post announcing the release of the paper (And the jury is back: One Laptop per Child is not enough), the IDB's Pablo Ibarrarán quickly summarizes the results of this research:
- the program dramatically increased access to computers
- no evidence that the program increased learning in Math or Language
- some benefits on cognitive skills
This working paper is a follow-up to "Experimental Assessment of the Program 'One Laptop Per Child' in Peru", the initial set of findings from IDB research in this area released in late 2010, and a continuation of the IDB's interest in 1-to-1 computing initiatives in Latin America.
Technology and Child Development: Evidence from the One Laptop per Child Program in Peru
Abstract: "Although many countries are aggressively implementing the One Laptop per Child (OLPC) program, there is a lack of empirical evidence on its effects. This paper presents the impact of the first large-scale randomized evaluation of the OLPC program, using data collected after 15 months of implementation in 319 primary schools in rural Peru. The results indicate that the program increased the ratio of computers per student from 0.12 to 1.18 in treatment schools. This expansion in access translated into substantial increases in use both at school and at home. No evidence is found of effects on enrollment and test scores in Math and Language. Some positive effects are found, however, in general cognitive skills as measured by Raven’s Progressive Matrices, a verbal fluency test and a Coding test."
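For readers less familiar with what a randomized evaluation of this sort actually estimates, here is a minimal sketch in Python. The numbers are entirely made up; this is not the Peru data, and the paper's actual specification accounts for things (such as school-level clustering of the randomization and covariate adjustment) that this toy version deliberately ignores.

```python
# A minimal sketch of the basic estimator behind a randomized evaluation:
# when schools are randomly assigned to treatment and control, a simple
# difference in mean outcomes is an unbiased estimate of the average
# treatment effect. All numbers below are invented for illustration.
import math
import random

random.seed(42)

# Hypothetical test scores for students in control and treatment schools.
control = [random.gauss(500, 100) for _ in range(1000)]
treatment = [random.gauss(505, 100) for _ in range(1000)]  # small assumed effect

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Difference in means = estimated average treatment effect.
effect = mean(treatment) - mean(control)

# Standard error of the difference (a real evaluation would also have to
# account for clustering at the school level, which this ignores).
se = math.sqrt(variance(treatment) / len(treatment) + variance(control) / len(control))

print(f"Estimated effect: {effect:.1f} points (SE {se:.1f})")
print(f"Approx. 95% CI: [{effect - 1.96 * se:.1f}, {effect + 1.96 * se:.1f}]")
```

The appeal of randomization is visible even in this toy version: because assignment is random, the confidence interval around the simple difference in means can be interpreted causally, rather than as a correlation contaminated by which schools chose to participate.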
As with most things related to the OLPC project, this working paper has kicked off a great deal of discussion. Some of this is happening over on our (sort-of) sister site, the EduTech Debate, which has devoted itself this month to exploring issues being raised as a result of the publication of the IDB report. In addition to reproducing findings from the IDB report itself, ETD notably features commentary from (among others) Oscar Becerra, who was responsible for overseeing the design and implementation of "Una Laptop por Niño" at the Ministry of Education in Peru (and who was involved in a fascinating and very open discussion with Christoph Derndorfer in the comments section of a post on the ETD site back in October 2010).
Given the vibrancy of the debate on the ETD site, and on other blogs where this is being discussed, my aim here isn't to attempt to analyze and dissect the IDB working paper. (For that, I would refer you to the ETD site -- after you've read the paper itself, of course!) That said, one thing that has struck me about many of the conversations happening as a result of the IDB paper, both online and in other forums, is how quickly the conversation can become about other things. For the hard-core evaluation people, the (often quite detailed and exacting) conversation is about methods and methodologies. Some critics of the OLPC approach see the IDB report as a validation of sorts of some of the criticisms they have long voiced about the program. More generally (and interestingly), however, I hear two common responses directed at the authors regarding the IDB report's findings about the lack of compelling evidence of impact: What are you testing for -- is it really what's important? And: Are you testing for this in the right way?
These are certainly important questions to ask, and they touch on a common challenge faced by folks seeking to do rigorous impact evaluations of educational technology projects, especially evaluations designed to provide insight to policymakers who are interested in the 'impact on test scores'. Whether or not you agree with this interest (and you are certainly free not to do so), there is no denying that this is a question asked regularly by many policymakers. The question then becomes: Which test scores? Broadly speaking, we can divide 'tests' into two types:
[#1] Standardized tests in common use within a formal education system (such as the sorts of high stakes school leaving exams that characterize many education systems)
[#2] Tests developed by experts to assess the impact of a particular intervention
(Let me be clear: I am not saying that these are the only criteria against which you can, or should, evaluate an educational technology project -- a topic discussed regularly on the EduTech blog -- nor that they are necessarily good ones. Rather, I am saying that, for better or for worse, policymakers often seek to evaluate impact using such measures.)
The 'results' from these different types of tests of the impact of the same initiative might well be different -- and this difference might be very important at a practical level. You might see a marked positive impact on standardized test scores (#1), for example ... but what if these are bad tests to begin with? Where standardized tests measure recall of specific facts, for example, computers can be used quite effectively as turbo-charged flashcards, helping to cram lots of specific facts or simple procedures into the heads of students (the decades-old, and not very affectionate, term in many education communities for this sort of use of educational technologies is 'drill and kill'). Let's postulate that, from a learning perspective, #2 measures what is actually important. That is all well and good, but, like it or not, #1 is usually what drives actual behavior at the school or classroom level. This tension -- should we measure the impact on what the system is currently attempting to measure, or on what emerging consensus holds is actually most important for learning? -- is not exclusive to discussions of the use of educational technologies, of course, but it can often be particularly acute in this area.
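To make that tension concrete, here is a purely illustrative sketch (all numbers invented, not drawn from any real study) of how the same intervention could show a sizable standardized effect size on a recall-heavy test (#1) and essentially none on an expert-designed assessment (#2). It uses the common Cohen's d formula: the difference in group means divided by the pooled standard deviation.

```python
# Purely illustrative: the 'impact' of the same intervention can look very
# different depending on which test you measure it with. All numbers are
# invented; the point is the arithmetic of a standardized effect size.
import math

def cohens_d(treat_mean, ctrl_mean, treat_sd, ctrl_sd):
    # Cohen's d: difference in means divided by the pooled standard deviation.
    pooled_sd = math.sqrt((treat_sd ** 2 + ctrl_sd ** 2) / 2)
    return (treat_mean - ctrl_mean) / pooled_sd

# Test type #1: a recall-heavy standardized exam, where drill-style
# computer use might plausibly move scores noticeably.
d_standardized = cohens_d(treat_mean=62, ctrl_mean=55, treat_sd=15, ctrl_sd=15)

# Test type #2: an expert-designed assessment of deeper skills, where the
# same intervention might show little movement.
d_expert = cohens_d(treat_mean=48, ctrl_mean=47, treat_sd=12, ctrl_sd=12)

print(f"Effect size on standardized test (#1): d = {d_standardized:.2f}")
print(f"Effect size on expert-designed test (#2): d = {d_expert:.2f}")
```

In this hypothetical, the same program registers d = 0.47 on the first measure and d = 0.08 on the second -- a 'success' or a 'failure' depending entirely on which yardstick the evaluator (or the policymaker) chooses.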
---
However you feel about the OLPC project (in its Peruvian incarnation, or more generally), or about the IDB's attempt to assess the impact of the OLPC project in Peru, there are, generally speaking, five potential explanations for the fact that no (or little) impact is found when evaluating an education technology initiative (please note that this list is adapted from a talk by the IDB's Eugenio Severin):
1. The use of technology in education in general makes no difference; or
2. It's too early to tell (our time horizon is too short); or
3. The evaluators tested the wrong things and/or used the wrong methods; or
4. The idea was good, but the particular implementation was bad; or
5. Change doesn't come unless you make real changes (and often, no real, fundamental changes are made except for the addition of technology).
It is this last potential explanation that has frustrated many people in the educational technology community for a long time. Long-term, sustained positive change (in the education sector, if not more broadly), whether as a result of an explicit reform process or slower, evolutionary changes in behavior, typically does not happen as the result of a single discrete intervention. Dump hardware in schools, hope for magic to happen -- this is for me the "classic worst practice in ICT use in education". I am not saying that this is an accurate characterization of what has happened in Peru (I have no first-hand knowledge of the project there), but this is something that one sees repeated time and time again, in countries rich and poor, 'advanced' and 'developing'. Around the world, expecting the introduction of ICTs alone -- no matter whether the shiny devices are lined up in computer rooms added on to schools or (to borrow a particularly colorful metaphor) dropped from helicopters into remote communities -- to help bring about transformative, cost-effective improvements in student learning while at the same time continuing with a business-as-usual approach to other aspects of the educational experience usually proves to be, well, a less than optimal way of going about things.

In the specific case of OLPC in Peru, the IDB's suggestion "to combine the provision of laptops with a pedagogical model targeted toward increased achievement by students" sounds like an eminently reasonable recommendation, and one which is presumably relevant to educational technology initiatives of various sorts in other places as well. That said, given the history of educational technology programs showing little substantive impact in place after place, one can perhaps question whether it goes far enough. Given the outsized ambition that characterizes massive investments in ICTs in education in country after country around the world, it may seem foolish to ask whether many of these sorts of programs are indeed being ambitious enough. But are they?
In speaking about ICT use in education, a number of well-respected commentators have noted that, "if you are already going down the wrong road, technology will only help get you there faster" [pdf]. For many, the promise of technology use in education has been that it will help to blaze new trails, while in practice its use has often looked more like "tinkering toward utopia" (to borrow an evocative phrase from David Tyack and Larry Cuban, who used it in a slightly different context). Whether or not you agree with the IDB's findings on the OLPC project in Peru, the way its conclusions were arrived at, or indeed the nature of the inquiry altogether, reading the study and talking with its authors leads me to ask whether we are being bold enough in the way we think about the potential relevance of technology use in education. Perhaps it is unrealistic to think that truly new approaches to education are possible, at least within many existing education systems in many places, given well-entrenched interests, pressing immediate needs, insufficient policy-relevant research to help make tough choices, bureaucratic inertia, and the temptation to flit from reform to reform as a result of the latest academic fads or political expediency. Yet this is what excites so many people about the potential of new technologies. Given the massive price tags associated with large-scale educational technology initiatives, it is hard to believe we aren't being ambitious enough. But given the checkered history of so many investments of this sort around the world, if we aren't being truly bold, it might be worth asking: Should we be doing this sort of thing at all?
Note: The image used at the top of this blog post of students in Ferreñafe, Peru ("learning learning") comes courtesy of One Laptop per Child via Flickr and is used according to the terms of its Creative Commons Attribution 2.0 Generic (CC BY 2.0) license. (The OLPC Flickr pages are a great source of quality photos of children using technology around the world.)