Learning from a randomized evaluation of OLPC in Peru

Michael Trucano

sometimes the goals are clear to see -- it's just challenging to get there | image credit: Martin St-Amant - Wikipedia - CC-BY-SA-3.0

The Inter-American Development Bank (IDB) recently released the first set of results from its ongoing, multi-year randomized evaluation of the impact of the OLPC project in Peru.
Experimental Assessment of the Program "One Laptop Per Child" in Peru (Spanish version here) is the first rigorous attempt to examine the impact of the largest '1-to-1 computing' initiative in a developing country.  This evaluation, conducted in concert with the Ministry of Education, looks at the ambitious program to provide computing resources to multi-grade rural elementary schools in some of the poorer communities of Peru.

Evaluating the evaluating of the Millennium Villages Project

not all millennium projects are this neatly contained within clearly defined borders

When is the rigorous impact evaluation of development projects a luxury, and when is it a necessity?

This is a question asked in a new paper examining the Millennium Villages Project (MVP), a high-profile initiative that, according to its web site, offers a "bold, innovative model for helping rural African communities lift themselves out of extreme poverty".

In the words of one of the authors of When Does Rigorous Impact Evaluation Make a Difference? The Case of the Millennium Villages, "We show how easy it can be to get the wrong idea about the project’s impacts when careful, scientific impact evaluation methods are not used. And we detail how the impact evaluation could be done better, at low cost."  The paper underscores the importance of comparing trends identified within a project activity with those in comparator sites if one is to determine the actual impact of a specific project.  This sentiment should come as no surprise to those familiar with an area of exploding interest in the international donor and development community -- that of the usefulness of randomized evaluations.
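The point about comparator sites can be made concrete with a toy difference-in-differences calculation (all numbers below are invented for illustration and are not drawn from the MVP data): a simple before/after comparison inside project villages conflates the project's effect with the region-wide background trend, while subtracting the change observed in comparable non-project villages nets that trend out.

```python
# Toy difference-in-differences illustration (all numbers invented).
# Outcome: share of households above some welfare threshold, before/after.

project_before, project_after = 0.40, 0.55        # project villages
comparison_before, comparison_after = 0.40, 0.50  # similar non-project villages

# The naive before/after change within the project overstates impact,
# because it also includes whatever trend the whole region experienced.
naive_change = project_after - project_before            # 0.55 - 0.40 = 0.15

# Difference-in-differences subtracts the trend seen in comparator sites.
background_trend = comparison_after - comparison_before  # 0.50 - 0.40 = 0.10
did_estimate = naive_change - background_trend           # 0.15 - 0.10 = 0.05

print(f"naive estimate: {naive_change:.2f}")
print(f"DiD estimate:   {did_estimate:.2f}")
```

In this invented example, two-thirds of the apparent "impact" is just the background trend -- exactly the kind of wrong idea the paper's authors warn about.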

How would you design an ICT/education program for impact?

where is this road leading us? the path ahead is murky | image attribution at bottom

Imagine, if you will, that you are an official at an international development organization who has been working with country x for a number of years, helping it think through options and issues related to the use of ICTs in its education sector.  As part of this dialogue, you have regularly preached the virtues of a commitment to rigorous monitoring and impact evaluation.

Country x has, in various ways, been host to numerous initiatives to introduce computers into its schools and, to a lesser extent, to train teachers and students in their use, and schools have piloted a variety of digital learning materials and education software applications.  It is now ready, country leaders say, to invest in a rigorous, randomized trial of an educational technology initiative as a prelude to a very ambitious, large-scale roll-out of educational technologies nationwide. It asks:

What programs or specific interventions should we consider?
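Whatever intervention is chosen, the credibility of such a trial rests on one mechanical step: randomly assigning schools to treatment and control arms before the program starts. A minimal sketch of that step (the school IDs are invented; a real design would typically also stratify, e.g., by region or baseline test scores):

```python
import random

# Toy random assignment of schools to treatment and control arms.
# School IDs are invented placeholders, not real schools.
random.seed(42)  # fixed seed so the assignment is reproducible

schools = [f"school_{i:03d}" for i in range(20)]
random.shuffle(schools)

treatment = sorted(schools[:10])  # receive the ICT intervention
control = sorted(schools[10:])    # continue as usual, for comparison

print(f"{len(treatment)} treatment schools, {len(control)} control schools")
```

Because assignment is random, any later difference in outcomes between the two arms can be attributed to the intervention rather than to pre-existing differences between schools.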

Worst practice in ICT use in education

doing these things will not make you happy

In business and in international development circles, much is made of the potential for 'learning from best practice'.  Considerations of the use of educational technologies are no exception to this impulse.  That said, 'best practice' in the education sector is often a rather elusive concept (at best! some informed observers would say it is actually dangerous).  The term 'good practice' may be more useful, for in many (if not most) cases and places, learning from and adapting 'good' practices is often much more practical -- and more likely to lead to success.  Given that many initiatives seem immune to learning from either 'best' or even 'good' practice in other places or contexts, it may be most practical to recommend 'lots of practice', as there appears to be a natural learning curve that accompanies large-scale adoption of ICTs in the education sector in many countries -- even if this means 'repeating the mistakes' of others.

But do we really need to repeat the mistakes of others? If adopting 'best practice' is fraught with difficulties, and 'good practice' often noted but ignored, perhaps it is useful instead to look at 'worst practice'.  The good news is that, in the area of ICT use in education, there appears to be a good deal of agreement about what this is!

Evaluating the One Laptop Per Child Initiative in Sri Lanka

how do we know she's learning? | image attribution at bottom

The Sri Lanka Ministry of Education (MOE) recently decided to pilot the One Laptop Per Child (OLPC) program by purchasing laptops from the OLPC Foundation, with funding from the World Bank, and distributing them to 1,300 students in selected primary schools throughout the country. The scheme may eventually be scaled up, depending upon the educational benefits of the pilot stage.

How do you evaluate a plan like Ceibal?

I'd like to teach the world to code ... (used according to terms of CC license courtesy LIRNET.NET & AK Mahan)

If you have had your fill of theories and promises about what the widespread diffusion of information and communication technologies (ICTs) might mean for teaching and learning practices across an entire education system and want to see what actual practice looks like, a trip to Montevideo (or better yet, one of the regions outside the Uruguayan capital) should be high on your list.

Under Plan Ceibal (earlier blog post here), Uruguay is the first country in the world to ensure that all primary school students (or at least those in public schools) have their own personal laptop.  For free.  (The program is being extended to high schools, and, under a different financial scheme, to private schools as well).  Ceibal is about more than just 'free laptops for kids', however.  There is a complementary educational television channel. Schools serve as centers for free community wi-fi, and free connectivity has been introduced in hundreds of municipal centers around the country as well.  There are free local training programs for parents and community members on how to use the equipment.  Visiting Uruguay last week, I was struck by how many references there were to 'one laptop per teacher' (and not just 'one laptop per child', which has been the rallying cry for a larger international initiative and movement). Much digital content has been created, and digital learning content is something that is expected to have a much greater prominence within Ceibal now that the technology infrastructure is largely in place.

How to measure technology use in education

one way to measure ... | courtesy of the Tango Desktop Project via the Wikimedia Commons

ICTs are increasingly being used in education systems around the world. How do we know what the impact of such use is? How should we monitor and assess the use of ICTs in education? How can, should and might answers to these questions impact the policy planning process?

What have we learned from OLPC pilots to date?

CC licensed photo courtesy of Daniel Drake via Flickr

It's been four years since the One Laptop Per Child (OLPC) project (known then as the '$100 laptop') was announced.  According to recent unconfirmed news reports from India, a quarter million of the little green and white OLPC XO laptops are now on order for use in 1,500 schools on the subcontinent.  Four years on, what have we learned about the impact of various OLPC pilots that might be of relevance to a deployment in India?  Thankfully, preliminary results are starting to circulate among researchers.  While nothing yet has approached what many consider to be the gold standard of evaluation work in this area, some of this research is beginning to see the light of day (or at least the Internet) -- and more is planned.

Why are there so many poor evaluations of ICT use in education?

Olbers' paradox is sometimes easier to wrap your head around than the question of why there are so many poor evaluations of ICT use in education | image attribution at bottom

Despite increasing attention to the impact of ICT on teaching and learning in various ways, the ICT/education field continues to be littered with examples of poor evaluation work.  A few of them arrive in my in-box every week.

There are many potential reasons advanced for the generally poor quality of much of this work.  One is simple bias -- many evaluations are done and/or financed by groups greatly invested in the success of a particular initiative, and in such cases findings of positive impact are almost foregone conclusions.  Many (too many, some will argue) evaluations are restricted to gauging perceptions of impact, as opposed to actual impact. Some studies are dogged by sloppy science (poor methodologies, questionable data collection techniques); others attempt to extrapolate findings from carefully nurtured, hothouse-flower pilot projects in ways that are rather dubious. (The list of potential explanations is long; we'll stop here for now.)

The Use and Misuse of Computers in Education: Evidence from a Randomized Experiment in Colombia

super random sampling or random supersampling? you be the judge

World Bank economist Felipe Barrera-Osorio, working with Leigh Linden of Columbia University, has just published a very useful and rigorous study on the impact of ICT use in Colombia.

The Use and Misuse of Computers in Education: Evidence from a Randomized Experiment in Colombia (PDF) looked at 97 schools and 5,201 children over two years of participation in the Computers for Schools program.

While some readers may immediately latch onto the finding that the program "had little effect on students’ test scores", I found the potential explanation for this lack of positive impact to be even more valuable:

"The main reason for these results seems to be the failure to incorporate the computers into the educational process. Although the program increased the number of computers in the treatment schools and provided training to the teachers on how to use the computers in their classrooms, surveys of both teachers and students suggest that teachers did not incorporate the computers into their curriculum."
