
What have we learned from OLPC pilots to date?


It's been four years since the One Laptop Per Child (OLPC) project (known then as the '$100 laptop') was announced.  According to recent unconfirmed news reports from India, a quarter million of the little green and white OLPC XO laptops are now on order for use in 1,500 schools on the subcontinent.  Four years on, what have we learned about the impact of various OLPC pilots that might be of relevance to a deployment in India?  Thankfully, preliminary results are starting to circulate among researchers.  While nothing yet has approached what many consider to be the gold standard of evaluation work in this area, some of this research is beginning to see the light of day (or at least the Internet) -- and more is planned.

The Australian Council for Educational Research has produced perhaps the most useful literature review to date of evaluations of OLPC programs globally.

Most of the evaluations to date have been of very small pilots, and given the short duration of these projects, it is difficult -- if not dangerous -- to try to extrapolate too much from the findings of such reports.  This is especially true given the 'hothouse flower' nature of most high-profile ICT in education pilots in their initial stages, where enthusiasm and statements about expected future changes in behavior and perceptions substitute for rigorously gathered, useful hard data.

In Ethiopia, GTZ sponsored an evaluation of the OLPC pilot project, Low-cost devices in educational systems: The use of the "XO-Laptop" in the Ethiopian Educational System [pdf], and Eduvision has done similar work, OLPC Ethiopia Implementation Report, September - December 2007 [pdf].

The OLPC program is being closely evaluated in Nepal by Open Learning Exchange (OLE-Nepal), which posts preliminary findings on its blog from time to time.

A small pilot OLPC program in Russia has also been evaluated (see Evaluation report : Introduction of XO laptops for (visually impaired) school students in Pskov and Nizhny Novgorod, Russia).

Evaluation work has begun for the OLPC program in Oceania. The OLPC wiki is the best source of information about this, including two documents from the Solomon Islands (Terms of Reference - Evaluation of One Laptop Per Child (OLPC) Pilot Project and Measurable Objectives and evaluation framework - Solomon Islands example).

It is encouraging to see that serious attention is starting to be paid to some of the larger OLPC implementations. In Mongolia, the World Bank has proposed an evaluation framework for the Mongolia READ program, which includes a component that utilizes the OLPC.  It is from the large OLPC implementation in Uruguay that we can perhaps expect the best set of first results from a large-scale project.  Ceibal has partnered with IDRC to evaluate the OLPC initiative there and compare results with similar (non-OLPC) programs in three other countries. Uruguay's Universidad de la Republica produced a report on the first stages of the implementation of the Ceibal project (see Proyecto Flor De Ceibo: Informe de lo Actuado (agosto - diciembre 2008)).  Most encouraging of all is that the Inter-American Development Bank (IDB) is proposing a rigorous randomized evaluation of the OLPC project in Peru, the world's largest OLPC implementation to date.  Indeed, it is from the IDB that we can probably expect to learn the most about the demonstrated impact of the OLPC initiative, given the seriousness and analytical rigor that it is bringing to its work in this area.

(Inspiration for this posting came from a thread on the OLPCnews.com web site started by GeSCI's Roxana Bassi.) 

(photo at the top of this blog post used according to its Creative Commons license; photo courtesy of Daniel Drake via Flickr)

Comments

Also of potential interest: I have just been sent this additional document from GeSCI: OLPC Regional Case Studies: Asia, Africa, Europe and Latin America. http://www.gesci.org/files/docman/OLPC_Case-Studies.pdf

Mike, You and I both share a great desire for the OLPC program (or any big ICT implementation) to have a good evaluation, but I am sad to hear that the IDB is just now proposing an OLPC evaluation process. I would have hoped a "rigorous randomized evaluation of the OLPC project" would start before implementation, not 2 years in. I guess better late than never, eh? I just hope that the IDB looks beyond official channels. At OLPC News, we've found that Peruvian volunteers have a much different opinion on implementation success than Peruvian officials: http://www.olpcnews.com/countries/peru/olpc_peru_far_from_goals.html

Hi Wayan, Thanks for your comment. I certainly don't want to speak for the IDB here, and so my formulations are deliberately circumspect, based on publicly available information on the Internet. That said, and as you know, the process of evaluation is an iterative one, with learnings from 'pilot' evaluations hopefully informing revised and improved methodologies for evaluating and assessing impact. You are certainly right to highlight that different groups will have different opinions on 'implementation success'. A recent workshop organized by the European Commission and the OECD explored this issue in a useful way. If you haven't seen it yet, you might want to have a look -- they have just posted the papers and presentations on-line. --> http://crell.jrc.ec.europa.eu/workshopictimpact.htm -Mike

Submitted by Ed Gaible on
Mike, hi, You're pointing to a very large number of evaluations of a single, dedicated ed-tech solution. While these evaluations might be a bit late to the party--especially given the investments that are now being planned--there's a terrific opportunity to start to quantify what works (essential factors for success), what doesn't work, and what might not be so essential in terms of ed-tech projects. Results would be germane to the OLPC, of course, but could also establish baseline inputs and expectations for other large-scale PC-based projects. What would it take for the evaluators contracted by the larger agencies (WB, IDRC, et al.) to compare approaches and develop a set of common indicators or a minimal shared framework? Such a framework might enable comparison of data on, say, teacher-development inputs (on a per-teacher basis), maintenance/repair inputs, content inputs, and a few other factors; the impact of these inputs might be assessed in relation to simple outcomes over a given period, such as number of messages/emails/blogposts per kid, time spent using the XO per kid, that sort of thing. Now's the time.

Hi Ed, Thanks for your note. I do agree: now is indeed the time. There is actually movement afoot to have wide buy-in from a set of key organizations on a common set of indicators in this area. This would be a good start. This will build on a process begun by the 'Partnership on Measuring ICT for Development', an international, multi-stakeholder initiative to improve the availability and quality of ICT data and indicators, particularly in developing countries, across sectors, including education. More information about this partnership is available on the ITU web site at http://www.itu.int/ITU-D/ict/partnership/. The UNESCO Institute for Statistics (UIS) has been spearheading this work in the education area. UIS actually convened a meeting in Morocco earlier this week of the 'Working Group on ICT Statistics in Education (WISE)'. If you read French, they have posted a set of documents from the workshop on the UNESCO-Rabat web site. http://rabat.unesco.org/article.php3?id_article=1683 I can't seem to find a link to the latest set of indicators; here is the original paper (http://ow.ly/62iZ, pdf) and an update (http://ow.ly/62iW, pdf). This work builds on work started a half-decade ago out of the UNESCO-Bangkok office on 'Developing and Using Indicators of ICT Use in Education': http://www.unescobkk.org/index.php?id=1803 -Mike

Submitted by Eugenio Severin... on
Thanks Mike for the article. Really interesting. About the Peruvian evaluation: we are working on a large-scale experimental evaluation in agreement with the Peruvian authorities, applying qualitative and quantitative instruments to measure the impact of this project. We are working with independent consultants and firms on these activities. We are also participating in smaller-scale pilot programs in Paraguay, Haiti, Colombia and Brasil, and proposing a similar evaluation for Uruguay. We think that the expertise and experience from these countries will be very useful to other countries in the future, and will help improve the initiatives in Perú and elsewhere. Regards Eugenio

Mike, Thanks for a much-needed unifying view of the M&E efforts for the various OLPC activities. The comment you make about the "hothouse flower" effect is an essential one - this is often neglected in technology assessments, especially where that technology is novel. What will happen two or three years down the line in Uruguay or Ethiopia? In the short term, there is a need for more qualitative work to better understand likely trajectories of behavior and perceptions. In the long term, longitudinal studies and formative assessment - seeking to adapt existing programs to changing environments - will be needed. On another note, do you happen to have any more information that you can share about the evaluation framework proposed for Mongolia? Jaspal

Hi Jaspal, Many thanks for your comment; we appear to be in general agreement here. Introducing an 'innovation' is often much easier than sustaining it. When I read of the excitement and optimism that are *very* apparent (undeniable) in many schools when the OLPC XO laptops first arrive, and in the months afterward, I am often reminded of the early days of the World Links program (http://www.world-links.org), with which I was closely involved for many years. We found that it was relatively easy to get communities excited about the prospect of using computers to aid in the education process, but that building long-term sustainability for such a program was a much tougher challenge. Rigorous *independent* evaluations (even when we didn't like what they said -- actually, *especially* when we didn't like what they said) were key tools that we used at World Links to help guide our work. (By the way, many of the evaluations of World Links programs in various countries are available on-line, http://ow.ly/62jR). Once I have more information about the Mongolia project that can be shared, I will do so here (and/or invite some of the project principals to post to the blog as well). -Mike

Jaspal, One thing I forgot to mention: The International Children's Digital Library is involved in the project in Mongolia. The ICDL people have a strong research orientation, and publish frequently on a variety of topics. You may want to monitor the research section of their site as well: http://en.childrenslibrary.org/about/research/paperpresent.shtml -Mike

Submitted by Raul Roman on
Hello Mike, Thanks for this valuable post. I wanted to add a comment to this discussion.

1. I personally think there is too much emphasis on impact evaluation in the field of ICT and Education, including OLPC projects. The reasons for such emphasis are perfectly understandable (and common across the whole range of international development practice areas). Impact evaluation of OLPC initiatives using quasi-experimental designs is certainly possible -- although it is also important to clarify the technical, logistical and resource-related challenges and limitations of such research endeavors, particularly in the field of education (something that I would love to discuss with you at some point, as clarifying this issue would be of value in itself for the ICT4E community at large).

2. The point I want to make here is that "process evaluation" and "implementation evaluation" are equally or even more important than "impact evaluation" in this case. There should be more emphasis on process/implementation evaluation because this kind of evaluation has the potential to provide the most important practical lessons for policy and program planning. Impact evaluation and implementation/process evaluation complement each other. We certainly cannot move forward with research designs for impact evaluation until we understand if and how the immensely complex ICT4E projects we talk about are carried out as planned. Besides "impact", let's also turn our attention to systematic assessments of process and implementation to generate a holistic view of what is really happening in the field -- and to get us ready to better measure impact. Thanks and best regards, Raul

Hi Raul, Thanks for your comments. One of the great difficulties in impact evaluations of ICT use in education is that there are just so many variables at hand. How do you disentangle the "ICT" input from all of the rest? Randomization is one way to attempt to do this -- but it can be quite difficult (and costly) to do well. Given this complexity and expense, is this worth doing? I do agree that some of us -- perhaps -- put disproportionate rhetorical stress on impact evaluation. For me at least, this is because I see so little impetus to critically examine what we are doing in this area. Randomization may indeed be the 'gold standard' (although some people dispute this), but even adopting a 'bronze standard' would help us move forward quite a bit. Some groups see a commitment to funding rigorous impact evaluations as a useful tool to compel critical examinations of process/implementation issues. (Please note that I am using the word 'critical' here not in any pejorative sense -- I am talking about looking at things with a dispassionate, scientific eye, not a negative one.) Whatever the case, more transparency about intentions, process *and* impact -- and less marketing -- would certainly help all of us as we help make investment decisions related to ICT use in education. The 'Lessons Learned' series that UNESCO-Bangkok put out a few years ago was a useful attempt to synthesize findings from some of the on-going formative evaluation work that went on in Southeast Asia. http://www.unescobkk.org/education/ict/online-resources/e-library/ Let's hope we see more of this type of work as well! -Mike
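(For readers curious what "randomization" means concretely here: the core idea is simply to assign schools to treatment and control arms by chance, then compare an outcome across the two groups. The sketch below is purely illustrative -- the school IDs, scores, and function names are hypothetical, and a real evaluation would involve far more careful design, stratification, and statistical inference.)

```python
import random
import statistics

def randomize_schools(school_ids, seed=0):
    """Randomly split a list of school IDs into treatment and control arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = school_ids[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def estimated_impact(treatment_scores, control_scores):
    """Naive difference-in-means estimate of the program's effect on an outcome."""
    return statistics.mean(treatment_scores) - statistics.mean(control_scores)

# Hypothetical example: randomly assign 10 schools, half to each arm.
treatment, control = randomize_schools(list(range(10)))
```

Because assignment is random, any systematic difference in outcomes between the two arms can (in expectation, and with enough schools) be attributed to the program itself rather than to the many other variables at hand.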

Thanks for the article. It is indeed quite difficult to get a global overview of the effect of the OLPC project as it involves many different cultures. This is probably not representative of the whole Uruguayan Ceibal project, but here is a small video shot by a pair of teachers in a small provincial town over there. It's in Spanish, and it basically speaks about the OLPC project *in combination* with the Dokeos LMS, but it shows how good the whole thing can be. http://www.youtube.com/watch?v=OxBLzzPt-iM

Hi Yannick, Many thanks for the link to this informative video! It is encouraging to see the information and documentation that is starting to emerge from Uruguay from the OLPC initiative. -Mike ps For others: Some additional YouTube videos on the OLPC experience: http://www.youtube.com/results?search_query=olpc+uruguay&search_type=&aq=f

Submitted by Ram Sharma on
OLPC is a great project and is supposed to do wonders. What I think it is going to do is encourage more genuine learning rather than rote memorization ('mugging') in children, especially in India. I am keeping my fingers crossed for the day every child will get one laptop. Yannick, thanks for the amazing video. Ram
