One Laptop Per Child is not improving reading or math. But are we learning enough from these evaluations?
A few months ago, the first randomized evaluation of One Laptop Per Child (OLPC) came out as a working paper (you can find a brief summary by the authors here), after circulating on the seminar/conference circuit for a while. Many articles and blog posts followed (see a good one here by Michael Trucano, and find the short piece in the Economist and the responses it generated from OLPC in the comments section), because the study found no effects of OLPC in Peru on test scores in reading and math, no improvements in enrollment or attendance, and no change in time spent on homework or in motivation, but some improvements in cognitive ability as measured by Raven’s Colored Progressive Matrices.
At the Australasian Development Economics Conference (ADEW) I attended last week at Monash University in Melbourne, another paper, on a smaller pilot of OLPC in Nepal, presented similar findings: no effects on English or math test scores for primary school children who were given XO laptops along with their teachers. (This study has some problems: the schools in the control group are demonstrably different from the treated schools, so the author uses a difference-in-differences analysis to get impact estimates. There are worries about mean reversion [Abhijit Banerjee pointed this out during the Q&A], and some strange things are happening with untreated grades in treatment schools seeing improvements in test scores, so the findings should be treated with caution.) What I want to talk about is not so much the evidence as the fact that the whole thing looks a mess – both from the viewpoint of the implementers (the countries that paid for these laptops) and from that of OLPC.
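For readers unfamiliar with the difference-in-differences approach the Nepal paper relies on, here is a minimal sketch (the numbers are hypothetical, not the study’s data). The estimator nets out fixed differences between treated and control schools, but not mean reversion: if low-scoring treated schools would have rebounded anyway, that rebound gets counted as a treatment effect.

```python
# Difference-in-differences from four group-mean test scores.
# The estimate is the change in treated schools minus the change in
# control schools, which removes time-invariant differences between
# the groups (but not mean reversion).

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD impact estimate from group-mean outcomes."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean scores on a 0-100 scale:
effect = diff_in_diff(treat_pre=42.0, treat_post=47.0,
                      control_pre=50.0, control_post=53.0)
print(effect)  # 2.0: treated schools gained 2 points more than controls
```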
First, though, let’s go back and think for another second about whether it would be reasonable to expect improvements in mastery of curricular material if we just give each student in a developing country a laptop. Another study (gated, WP version available here), published in the Quarterly Journal of Economics last year, found that children who won a voucher to purchase a computer had lower school grades but higher computer skills and improved cognitive ability. Interestingly, parental supervision that protected time spent doing homework preserved test scores without reducing the improvements in computer literacy and cognitive ability. So, if you just give kids a computer, we find out that they’ll use it. The use is likely heterogeneous in the way described by Banerjee et al. in “The Miracle of Microfinance?”: just as loans can be used for consumption or investment, computers can be used either way, depending on the child’s type and circumstances. But, without substantial additional effort, it seems unlikely that the children will read books on these computers (the OLPCs were loaded with a large number of e-books in the programs mentioned above) or do their homework using them. If parents pay attention, the time spent on the computer may come out of other leisure activities; otherwise, it will likely come out of time spent on learning how to read and do math, leading to the sorts of effects described above. (There is more on the use of technology in education, with mixed results – I will not review the literature here, but Michael Trucano keeps an active and informative blog on this issue.)
The reason I call this a mess is that I am not sure (a) how the governments (and the organizations that help them) came to purchase a whole lot of these laptops to begin with, and (b) why their evaluations were not designed differently – to learn as much as we can from them about the potential of particular technologies for building human capital. Let’s discuss these in order:
My understanding is that each laptop costs approximately $200. That’s a lot of money, and it ignores the other costs of distribution, software development, training, etc. The Peru study suggests that the Peruvian government bought 900,000 of these laptops. Couldn’t spending US$180 million on them have waited until some careful evaluation was conducted? In my last blog post I talked about moving from efficacy to effectiveness in social science field trials. This is the opposite: there are now a couple of studies that did the best they could given that the governments were already implementing programs built around OLPC (measuring effectiveness, kind of), but how were the governments convinced of the efficacy of OLPC to start implementing these programs in the first place?
Bruce Wydick, in a guest post he did for us a few months back, suggests one explanation: some interventions are hyped without proper evidence. Under that state of the world, the XO laptop becomes the next shiny solution to our problems in one area – a panacea. When I searched for evidence that OLPC may significantly improve learning, I got this sentence on their website, with no links to any studies or corroboration: “Extensively field-tested and validated among some of the poorest and most remote populations on earth, constructionism emphasizes what Papert calls ‘learning learning’ as the fundamental educational experience.” Based on what evidence did the UNDP, as far back as 2006, sign a memorandum of understanding with OLPC to support national governments in deploying these laptops in schools?
If I were running OLPC, I would hire a credible third-party evaluator to run an efficacy trial. Whatever aspects of human capital I am proposing my laptops improve (reading, cognitive, or non-cognitive skills), I would measure all of those things carefully under ideal circumstances. I would vary the intervention: trained teachers or not, specially designed learning software or not, internet access or not, allowing children to take the laptops home or not, etc. I’d also conduct a thorough review of the literature on what kinds of long-term improvements in welfare, poverty reduction, growth, etc. such gains might cause. If the trial showed no effects, or effects too small to be meaningful or cost-effective, I’d go back to the drawing board. If it showed larger effects, then I could start working with governments to evaluate pilot versions of what scaled-up programs would look like, with their attendant problems: internet access, stolen laptops, teacher capacity, etc. These steps would help me deploy many more laptops, which furthers my goal as a non-profit organization.
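Varying four components of the intervention independently, as described above, amounts to a 2×2×2×2 factorial design. A minimal randomization sketch (the factor names, school IDs, and arm count below are illustrative assumptions, not part of any actual OLPC trial):

```python
# Sketch of a factorial efficacy trial: each school is randomly assigned
# one of the 2^4 combinations of the four design choices named in the post.
import itertools
import random

FACTORS = ["teacher_training", "learning_software", "internet", "take_home"]
ARMS = list(itertools.product([False, True], repeat=len(FACTORS)))  # 16 arms

def assign(schools, seed=0):
    """Randomly assign each school to one factorial arm (seeded for replicability)."""
    rng = random.Random(seed)
    return {s: dict(zip(FACTORS, rng.choice(ARMS))) for s in schools}

assignment = assign([f"school_{i}" for i in range(32)])
print(len(ARMS))  # 16 distinct treatment combinations
```

A real trial would stratify and balance arm sizes rather than draw arms independently, but the factorial structure is the point: it lets one evaluation separate the contribution of each component.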
But at least we can understand why OLPC did not take these steps: they already believe that these laptops are good for children (apparently even at the current price tag), and there are already governments buying large quantities with the help of international development organizations. But why didn’t the governments in Latin America, where apparently most OLPC deployments have happened so far, insist on better evidence before embarking on this path? In Peru, they may now reconsider the program, but more than $180 million has already been spent; in Nepal, the Department of Education was wise enough to run a small pilot first and hence spent only a small amount on laptops, but it did not give enough thought to designing the evaluation properly. Many of the authors of the Peru study are from the Inter-American Development Bank (IDB), which seems to have collaborated with the Peruvian government in evaluating OLPC there – perhaps they can comment on the process.
One important role that larger development organizations like the World Bank or the IDB can play is in testing big ideas like these across multiple countries and settings. No one with a pulse in 2012 thinks that cheap laptops are a bad thing: we’re just trying to decide whether we should be spending precious funds on subsidizing them for families with young children. The same goes for the Millennium Villages: perhaps the ‘big bang’ approach has merit. But every such idea needs to be assessed properly, allowing us to learn as much as possible from each study. The bigger the idea and the hype, the more important the evidence becomes.
We have come some distance from the days when we implemented projects and programs on the belief that they would work, without much in the way of thorough evaluation. These days, an array of tools is available to examine program impacts, and policymakers are much more willing to tweak program implementation to facilitate credible evaluations. But donors and governments are still vulnerable to spending large sums on the latest fads, the magic bullets – only to have the evaluations follow, rather than precede, the spending…
We are also still mainly opportunistic about what gets evaluated: we get a call from someone saying they are about to start implementing project X or program Y, and we jump in if it sounds interesting. That’s too late, and quite haphazard, when it comes to learning the answers to important questions. As researchers and as policymakers, we all have to be more proactive in producing evidence before decisions are made. Until then, studies like the ones covered here will be second-best solutions, putting out fires instead of preventing them.
Excellent post, very interesting. As one of the authors of the IDB study, I can say that this has been exactly our interest in developing the study. It is true that the program might have been assessed first at a smaller scale but, seeing the glass half full, there are many countries in Latin America, Africa, and Asia interested in developing one-to-one initiatives. We believe that this study should be a warning light for all who are considering or preparing such projects, so that they can learn from the Peruvian experience.
For example, the little preparation given to teachers, the absence of connectivity, the lack of educational resources and, above all, the lack of clear educational objectives directly undermined the learning outcomes of the project.
I also agree that the challenge is to know in what context, and with what components and strategy, this type of million-dollar investment can indeed produce positive impacts on student learning.
There are also standing questions about what is meant by impact for this type of project (what results are reasonable to expect and which are overly optimistic), and therefore about which measuring instruments are proper.
Like many others, I sincerely believe that the use of technology to support teaching and learning is not only necessary but inevitable. For this reason, it is essential to follow closely the implementation of this kind of project, so that each new initiative is built on the basis of acquired knowledge. If technologies do not contribute to substantially changing the practices of schools, teachers, and students, it is impossible to imagine that they will obtain different results.
Some interesting questions, I suppose, though not much that is very new or unique. I don’t think you’ve answered the question posed in your title – and what exactly would constitute “learning enough” from these evaluations?
What constitutes “good design” for an evaluation of a program whose objectives were unclear? And why suggest pegging evaluation design only to human capital outputs in such a case? What of social capital and other considerations?
1. Did the governments of countries such as Peru and Nepal make any statements about the objectives they hoped to meet through this program? Because clearly those should be in the evaluation.
2. Ideally, yes, OLPC would have been evaluated more thoroughly before being rolled out and perhaps countries will put a hold on committing resources to this program.
But now, it would seem that for the countries that have already made the investment, the most important questions are going to be (a) why aren't the laptops having the desired effects (and what can we do about it?) and (b) what additional measures can be taken to maximize the return on this investment now that it has been made.
I suspect some qualitative research would come in handy...
As a side note, did you see on the OLPC website where they pointed out that the laptop often represents the brightest light source in these children's households? Laptops could be being put toward other uses at home...
Thanks for this coherent essay. The last paragraphs in particular are well put.
I believe Peru also ran early pilots; anecdotally valuable but without comprehensive or randomized assessment.
Among other aspects of the mess, there seems to be general disagreement about how long, based on the test in question, one should wait before the impact of various learning interventions is visible; and about what sorts of statistics to use. Epidemiology (health) offers parallels here for those in a position to do longitudinal studies across time, contexts, countries, &c.
Thanks for these comments. On (1), that’s a good question. I was hoping that either the authors of the Peru paper from the IDB or someone from the government involved in this would comment in this space and shed some light on it. Going forward, a point of my post was that, had the evaluations been designed differently, we’d have more answers now on how to proceed. The Peru study is careful and well-written, and the authors have some good, sensible suggestions, but a study design geared to answer some specific questions could have been more helpful.
Re: your last point, I often use my phone as a flashlight to put my key in the front door as we don't have a working light bulb there ;-)
Thanks for these comments. Yes, the duration within which we’d see some important effects is always an important and tough question. However, to even get to that point, we have to have some pretty good reasons (mainly conceptual, but also some empirical – even if minimal) to test things and follow up for a long time. This was exactly the subject of the exchange I had with Abhijit Banerjee during the Q&A in Melbourne after the presentation of the Nepal paper. We could wait and see if this type of intervention has an effect on an important aspect of human capital, but we have to have good reasons to expect it, and that aspect has to be important enough to justify the faith. Definitely not a point to be taken lightly...
Thanks for these comments. You have a study with rich data and careful analysis and interpretation. It will no doubt affect decisions made by other governments (or, at least, let's hope so...)
You refer to a lack of clear educational objectives by the government: do you or any of your colleagues have any insights into how the government decided to buy so many laptops in the first place? I am really interested in this question: as I mentioned in my post, the UNDP committed to helping fund these things through governments as far back as 2006, more than five years before your study. What can help governments avoid these types of costly decisions?
This kind of evaluation “mess” isn’t limited to the OLPC or developing countries. The whole of public education in the United States is currently trying desperately to come to grips with the issue of standardized testing. Many have argued (notably Diane Ravitch) that the value of such testing is unproven. Yet the US has invested more than a decade of time and millions of education dollars in expanding the testing process.
Maybe a partial key is the issue of measuring educational progress. Using a narrow focus on improved reading scores to assess OLPC's success seems odd in the first place. Further, was the technology itself used during the testing, or was it perhaps yet another bubble test evaluation? Naturally, experimental design would demand that the same evaluation tool be used for both the OLPC and the control groups. However, how well does such a bubble test evaluate how engaged children might be with reading books they choose themselves, and were those kinds of books pre-loaded for them to choose? Perhaps it is like asking every student to read teacher-chosen, district-approved "classics" and then analyzing them to death.
Experiments asking the wrong questions with the wrong assessment tools may prove nothing, no matter how much they cost.
Thanks -- these are good comments. In my short post, I was trying to emphasize the importance of making clear outright what the expected gains or goals are. You either have theory that suggests some links or perhaps you have some evidence, but you have to have something that is explicit and corroborated.
You're also absolutely right about the 200 books loaded on the machines. A better approach could have been to give incentives to children to load what they want on to the laptops. Furthermore, I am still not sure about reading on laptops. I am different than my age cohort in preferring electronic to paper, but I still only read on my iPad or Kindle, and not on my laptop, even though it is slick and light...Perhaps the newer tablets will be more successful in getting children to read on them...
The OLPC program has been a curious movement for several years now. Many of the questions you raise, we raised back in 2007, 2008 and since on OLPC News. Most still have no answer.
Yet here is the answer to your question as to why governments went ahead and bought XOs without efficacy trials. Actually, they did run efficacy trials – of a sort. Politicians were seeking an efficient way to show that they are cutting-edge, that they care about the children, and that they are worthy of parental support and admiration. In that trial, promising a chicken in every pot, or a laptop for every child, is very effective. And so Vázquez is a national hero and Kagame is a futurist.
Check out the evaluation of the WorldReader pilot in Ghana that uses Kindles to increase reading. They got results all right. Some that they wanted and others that they did not:
I feel a certain vindication reading this report, though it brings me no pleasure. As the first voice to speak out warning of the lack of an empirical foundation for the OLPC claims, I was surprised that conclusions seemingly obvious to a product development engineer in the private sector were less than obvious to so many in the ID community.
My opponents at that stage were willing to argue from an unabashedly uninformed standpoint, while those with appropriate academic credentials remained observers. I cannot be too critical of them, though, as my argument ran counter to the brash assertions emanating from a lofty citadel built upon the academic/industrial/governmental nexus. I had nothing to risk in raising a contrarian voice, and there was potentially much to lose if those assertions went uncountered; others had other parameters in their calculations.
Schadenfreude is a cold comfort. I take more comfort in the fact that properly qualified voices were soon heard repeating the questions that seemed obvious at the outset, and that wiser heads in Nepal and elsewhere declined to join the intended mad rush and made an empirical choice. Perhaps some wise heads will turn to the task of questioning the genesis of the project, and how such a flimsy foundation was allowed to support such a sweeping set of assertions.
Of course, any such effort should be secondary to approaching the questions that remain unanswered as to the appropriate and sustainable forms for using computer technology to allow the bottom billions to improve their lives.
Thanks. I wrote my piece from an impact evaluation perspective, but when I afterwards approached it more like a journalist might have, I found articles as far back as 2007 (mostly by technology writers) declaring the XO a failure – mainly because it was not really selling many units, and because the critics thought this was a top-down rather than bottom-up design that was bound to fail despite the many talented designers working on the laptop. I thought those declarations may have been a tad early (their evidence was also scant), but the articles also gave an indication of the approach OLPC took to deploying these things in as many countries as possible: lots of handshakes with government officials, many of which seem to have gone nowhere. Peru and its LAC neighbors may have been the exceptions rather than the norm.
Your answer to why the governments bought it makes eminent sense, although it is still an opinion. I am waiting for someone involved in the decision making process to shed more light on it, but not holding my breath...
Dear Berk, we think your post is very interesting and raises important questions. We posted a response to some of them at IDB's Development Effectiveness blog:
Julian Cristia, Pablo Ibarraran, Ana Santiago and Eugenio Severin
Julian et al.,
Thanks for your thoughtful post. It’s hard to find much in it with which I disagree (other than a misspelling of my name). It still remains the case, however, that it would have been better had Peru spent just a few million dollars on a pilot project, rather than diving in first and evaluating later. As you said, that’s the next step in the progress we’re witnessing...
I was at the big IDB meeting where Negroponte made his pitch to the Ministers of Education and Technology of many LAC countries. Before he spoke, another speaker produced the best slide I've ever seen to express the OLPC implementation plan:
Then Negroponte gave an impassioned speech on how LAC governments should not listen to those who would impede their growth (like the previous speaker), but should forge ahead and take leadership in ICT.
He was politely rebuffed by one Minister, who said that he had seen a child play with a musical toy – the child could make noise, but without a teacher, he would never play music. I used a variant of that phrase in my 60 Minutes interview.
Yet in the end, the Presidential love of iconic initiatives succeeded. Laptops were bought (OLPC and others) and distributed – sometimes with analysis, sometimes not. A great overview of this effect is here:
Thanks very much for the ongoing commentary, which has been very educational for me. It's good to see that many government officials were indeed skeptical and did not jump on the bandwagon. It's also informative (and kind of shocking) to see that Negroponte called the idea of doing pilot projects "ridiculous."
The IADB study, which I personally supported and helped to deploy, did not include some important background information regarding the Peruvian basic-education environment:
• A January 2007 census evaluation of 180,000 Peruvian basic-education teachers showed that 92% of them lacked basic math reasoning skills and 62% did not read at a 6th-grade level; 27% were at level zero or below.
• After 200 hours of remedial education run by local universities’ faculties of education, 13% of the teachers were still at the same zero-or-below level.
• The program designed by the government to improve the average quality of teachers at public schools requires at least 10 years to complete.
The situation described above left us with the difficult choice of waiting 10 years to do something or beginning in parallel. A study by McKinsey in 2007 had found that the world’s most improved school systems had in common their concern for teacher quality and getting the best people into teaching – so this really reduced our options. We also knew that ICT skills had been identified as key ingredients for success by many organizations (see, for example, http://www.p21.org/storage/documents/P21_Report.pdf ). Putting 21st-century tools into the hands of children seemed a good way to begin working while a better teacher workforce was being developed.
In a recent seminar to present the results, IADB specialists described the study in detail and clarified some of the variables measured. For example, they measured motivation toward doing homework (which did not improve – and that should surprise no one, since bad teachers tend to be boring), but it has been reported as motivation toward learning. Also, attendance did not improve because it is already very high (the same goes for coverage, which is almost 100% in grades 1–6).
It was never the program’s primary goal to improve math and language test scores. What we expected was that the children’s lives would be improved by giving them more options for their future – something the study has shown, because the children are more proficient in the use of 21st-century tools (Table 5 of the study shows a dramatic increase in both access to and use of computers by the treatment group) and have better cognitive skills (Table 7). It remains to be seen whether those improved cognitive skills translate into better test scores with appropriate adjustments in strategy. I don’t agree with the Economist article’s biased reading of the study.
Despite the fact that the program did not aim directly to improve test scores, we agreed to the IADB’s proposal to study the impact on test scores, because if a positive impact had been found we would have gained 10 years. The fact that there was no impact on test scores should be neither a surprise nor a reason to dismiss the program, but a reason to adjust the strategy, reinforce teacher training, and try to capitalize on the children’s improved cognitive skills in order to pursue more ambitious objectives.
The IADB Study
I would like to comment on specific areas of the study and present alternative interpretations to much of what has been circulating recently. I don’t think any of the material circulating is unbiased, and what follows is likewise my personal opinion:
“The program did not affect the quality of instruction in class” (p.3)
I cannot agree with the statement that the activities done “might have little effect on educational outcomes (word processing, calculator, games, music...)”. Being able to work with a word processor and a calculator should be seen as an educational outcome in itself. The paragraph implies that math and language scores are the only educational outcomes to be expected.
The study goes on to mention that “on the positive side, the results indicate some benefits on cognitive skills”. What needs to be done – and we tried to prepare the project to be able to do so – is to build on those improved cognitive skills by integrating additional material as teacher quality improves. For example, we were able to bring external stakeholders into our effort: the National University of Engineering has an application development laboratory working on the design of educational games specifically aimed at improving math scores, and a local firm (http://www.soft-one.org/) developed a reading-comprehension application with proven effects that runs on the XOs, offering a possibility to improve the reading-comprehension scores of sixth-grade children.
“limited information about how to integrate the computers provided into regular pedagogical practices” (p.6)
This is true, but the problem arises when we identify “regular practices” with correct or desirable practices. In a school system with a teacher profile like the one described, it is not reasonable to expect that regular practices are something we want to reinforce. Choosing to work only with teachers with good practices was not an option, as it would have left many teachers out of the picture – it was not a decision to ignore them. Many good teachers have made the extra effort, based on the 40 hours of training, and achieved very good results, but those good teachers are a small minority. The main issue here is the lengthy and difficult process of teacher assimilation of technology. I remember a meeting of Peruvian education officers with Clotilde Fonseca in 1998, when she was president of the Omar Dengo Foundation in Costa Rica. Fonseca said that after 10 years of the Programa de Informática Educativa, Costa Rican teachers were able to talk about Piaget correctly – not that they were able to apply his ideas properly to their classroom practice, but they were moving in the right direction.
“lack of Internet access and the fact that the laptops did not run Windows…” (p.7)
We knew from the beginning that universal Internet access would be lacking. This led us to design what we called “the portable Internet”: a 2GB USB drive with pre-selected educational pages, allowing children and teachers to experience the feeling of navigation, and including a reduced, 30,000-entry version of Wikipedia in Spanish. This is what we called “asynchronous connectivity”, meaning that the pages would be periodically updated based on teachers’ and students’ requests. Regarding Windows: with support from Microsoft, we ran a test project with Windows-based XOs and had to deal with high memory and storage requirements and with virus infections. The project was generally successful; however, Microsoft decided to abandon it because they were not developing their platform for the XO. We were convinced that the ICT skills would be easily transferable to other platforms in the future, since upgrading to a new version of Windows can be as complex as switching from Sugar to Windows. One unintended advantage of asynchronous connectivity was protection against access to inappropriate content – something we could not expect from teachers who knew less about ICT than their students.
“The central outcomes of the study are achievement and cognitive tests” (p.10)
The authors could not be more specific about what they found: the document makes clear both that there was no measurable impact on achievement, as measured by math and language test scores, AND that there were positive effects on cognitive skills, as measured by Raven’s Progressive Matrices test. Why so-called independent analysts decided to write about the one and ignore the other may be a matter for study in itself.
“Motivation toward attendance and homework” (p.11)
The study mentions that its findings contradict what has been suggested regarding motivation (p.2). Since I was the one who suggested this, I must say that in a study of mine (still unpublished), what was measured was intrinsic motivation toward school work, not toward attendance or homework. The variables measured that showed improvement were: interest and pleasure in school work, the relation between effort and results obtained, perceived competence for school work, creative stress, perceived choice of what to do, and improved personal relations. The only variable that did not improve was the perceived importance of school work, which was very high from the beginning. Therefore, the study does not prove the suggestion wrong.
“Computer Access, Use and Skills” (pp. 13-15)
The study describes very promising results in this section. For example, only 13% of laptops malfunctioned at some point, and half of those were successfully repaired. This is a demonstration of the XO’s ruggedness and of the success of our training program, which devoted 8 hours to technical maintenance by teachers. Also promising was that the theft ratio was only 0.3%. It’s worth noting that we decided to give the children the computers but not ownership of them, because in those places where parents were mistakenly told the computers were theirs to own, many of them immediately tried to sell them for cash.
One negative finding of the study is that the students used the computers mostly during school time. Given that there were no specific math or language activities, we must assume that they were taking time away from regular school work and using it for OLPC activities. The fact that the test results show no impact may then be interpreted as “the teachers’ work in class has no impact on test scores” – something I am afraid is not far from the truth.
“self-perceived school competence … evidence of small negative effects” (p.16)
I don’t agree with interpreting these negative effects as a decrease in self-esteem. What may be inferred is that the children realize they are not well prepared for school – a finding that is good if it motivates them to work harder to achieve what they want. What I have found is that children in non-OLPC schools have extremely high perceived competence for school work, in spite of their dismal results, something the presence of the XO seemed to be correcting for the better.
“Academic Achievement and Cognitive Skills” (pp. 16-17)
I find this section very instructive. The study states that there is no pedagogical model linking software with particular curriculum objectives. That is true, because we were convinced that, for curriculum-related software to work, we needed well-educated, competent, and well-trained teachers. Since we could only provide the training component, there was no way to ensure this approach would work. The fact that most teachers did not find the training sufficient is proof that training alone was not enough, and re-educating them was well beyond our reach.
It is rewarding to learn that the approach chosen did not replicate the reported negative effects of home computer use on grades -- another positive impact neglected by most readers.
The main positive effect described here is that access to and use of computers translated into improvements in general cognitive skills. In my opinion, this means that the foundation on which to build is in place and that the task is to continue working to achieve the desired results. It remains to be seen whether the new government administration will be willing to keep building or will decide to forget about it and begin again. It is promising that the person in charge (Sandro Marcone) was the first director of Proyecto Huascarán, a predecessor of the General Directorate of Educational Technologies.
One surprising positive result is the 4.6- to 6-month advantage in cognitive skills progression for the treatment group, which amounts to a 30 to 40% improvement in just 15 months. Why the analysts chose to ignore this remains to be understood.
“Discussion” (pp. 19-20)
I don't share the conclusion that "governments should consider alternative uses of public funds". It is well known that improving teacher quality is a long-term effort (Korea took 40 years to improve its education system). Something needs to be done besides waiting. Whether building on top of improved cognitive skills will result in improved test scores is something that might be worth pursuing. It seems that the current administration has a strong focus on teacher training, which will surely have a positive effect.
The study's mention of improved IQ, as emphasized by some researchers, is in line with a study by Nina Hansen that suggests improved IQ test results among Ethiopian children who participated in an OLPC program.
In conclusion, the IADB study is valuable and will hopefully serve as a guide to strategists in getting the most out of the investment made. In no way does it support the Economist's "Error message" article or the many interested parties who are trying to use the study to push their own agendas. I will finish as I did at the beginning of the project, by quoting Miguel de Cervantes' "Don Quijote": "Let the dogs bark, Sancho; it's a sign that we keep advancing."
Thanks for this thorough description of the thought process that went into the decision to invest in OLPCs in Peru -- it's informative for me and many of our readers (although it does seem like the arguments of different parties have moved from other venues to here).
One small issue, however: none of this answers my main question of why the government chose not to pilot the introduction of this at a much smaller scale rather than buying close to one million laptops at the outset. Whatever the goal was, and one can agree that meaningful improvement of cognitive skills may well be a worthy one, the effectiveness of OLPCs could still have been tested at a smaller scale first.
And while I agree that the improvements in cognitive skills are important (I even argued this with Abhijit Banerjee at the ADEW workshop where the evaluation of the Nepal OLPC pilot was presented), the gains, at 0.1 standard deviations, don't look that large. We have no idea whether the gains are linear in exposure or, in fact, what shape they have, meaning that longer exposure may or may not keep improving children's cognitive skills relative to the control group. So, while the critics of OLPC are wrong to discount the gains in cognitive skills altogether, the supporters should not exaggerate them either.
Thanks again for commenting in this space, which is very valuable. Sincerely,
Great post - and a great discussion it has kicked off here as well! (If you ever want to catalyze lots of discussion, a post on the OLPC project is one sure way to make that happen!)
For what it's worth, my comment was too long, so I just did a related post over on the World Bank's EduTech blog (Let them eat laptops? - http://blogs.worldbank.org/edutech/why).
National math test scores continue to be disappointing. This poor trend persists in spite of new texts, standardized tests with implied threats attached, and laptops in the classroom. At some point, maybe we should admit that math, as it has been taught currently and in the recent past, seems irrelevant to a large percentage of grade school kids.
Why blame a sixth grade student or teacher trapped by meaningless lessons? Teachers are frustrated. Students check out.
The missing element is reality. Instead of insisting that students learn another sixteen formulae, we need to involve them in tangible life projects. And the task must be interesting.
Project-oriented math engages kids. It is fun. They have a reason to learn the math they may have ignored in the standard lecture format of a classroom.
From the Undeveloped World
I come from a Latin American country where education is needed just like food is needed. I recently travelled to another Latin American country, and I was not able to explain to my own children why some countries are so poor while one, the one and only Almighty One, is so rich and prosperous.
Where have all these posts and deep thoughts been produced? Only on an electronic device. Whether my thoughts or yours are deeper in quantity and quality depends on our capacities. But in the end, all were written and shared because we had the opportunity to access one electronic device or another.
It's not only about education, tests, and standards; it's about taking the risk, taking a HUGE step, and giving people the opportunity to hold a device and have access.
The poor need an opportunity, and while we talk and argue, they are still out there in the same situation they have been in for decades.
Don't they deserve the chance that all of us, and the reports, might be mistaken...
"When I'm asked how we are measuring the benefits for children of the OLPC Educational Project, and what kind of parameters and evaluation methods we are using, it is hard for me to figure out what people really expect to hear: that the moment children got their laptops they became smarter? That at the end of the period between moment x and moment y, the children in the laptop group performed better than those in the placebo and no-treatment groups? Well, in Arahuay School the Peru Ministry of Education is running a short-term pre-test and post-test pilot study, with an OLPC group only, to see what light can be shed on what actually happens, and how it happens, in terms of knowledge, performance, and skill development variables. We will have the fully analyzed data and results by the end of November.
Meanwhile, many events, which may or may not have made children smarter, are worth highlighting. The OLPC project gave rise to an environment, created by and for the children, which fostered learning in a more motivating and meaningful way than had previously been the case. I present a few observations regarding some of the people, events, and experiences of the month we spent in Arahuay implementing the project. They have to do with personal motivation, interpersonal relationships, and community impact, with a positive influence on the children, teachers, school, and the whole community, as well as on us, the implementing team.
Right after the project started, we noticed that most of the children began coming to school tidier and better dressed, without anyone having said anything to them about it.
In the three different classrooms, several children had shown aggressive behavior toward their classmates, absenteeism, and had even dropped out. The teachers made these observations and pointed out specific children whose behavior patterns had changed tremendously. I have to make a note regarding some of these children. It is obvious that each child has a world of his or her own, but at Arahuay School most of the children lack good nutrition, clothing, and housing, and some don't even have a family to give them care, affection, and love. Some children who lacked family support nonetheless came back to school on their own.
Diego is in second grade. He had stopped coming to school, but he did come to pick up his XO laptop the day we handed them out. His teacher explained to me that he has no father and that his mother went to Lima, the capital, to work. Diego and his two brothers are living by themselves, though a kind neighbor gives them their meals. Diego's teacher went to his home to encourage him to come back to school. Four days after we had started working with the children at school, he came back, and all his classmates welcomed him warmly. From then on, he came to school every day. Diego also caught on quickly to the XO, and was very keen on helping his first-grade classmates do their work on their XOs."
I participated in the IADB impact study representing the Peruvian Ministry of Education. The goal of the project was not grade improvement, and no part of the study was devoted to the impact on motivation. We conducted a separate study to explore this area and found improvements of up to 400% in intrinsic motivation toward school work. I have the data from this study and would be willing to share it. The impact on cognitive ability that was found is remarkable and deserves more research; however, there was a huge and well-funded backlash effort by major ICT companies whose bottom line was jeopardized by OLPC.