A few months ago, the first randomized evaluation of the One Laptop Per Child (OLPC) program came out as a working paper (you can find a brief summary by the authors here), after circulating on the seminar/conference circuit for a while. Many articles and blog posts followed (see a good one here by Michael Trucano, and find the short piece in the Economist and the responses it generated from OLPC in the comments section), because the study found no effects of OLPC in Peru on test scores in reading and math, no improvements in enrollment or attendance, no change in time spent on homework or motivation, but some improvements in cognitive ability as measured by Raven's Coloured Progressive Matrices.
At the Australasian Development Economics Workshop (ADEW), which I attended last week at Monash University in Melbourne, another paper on a smaller pilot of OLPC in Nepal presented similar findings: no effects on English or math test scores for primary school children who were given XO laptops along with their teachers. (This study has some problems: the schools in the control group are demonstrably different from the treated schools, so the author uses a difference-in-differences analysis to get impact estimates. There are worries about mean reversion [Abhijit Banerjee pointed this out during the Q&A], and some strange things are happening, with untreated grades in treatment schools also seeing improvements in test scores, so the findings should be treated with caution.) What I want to talk about is not so much the evidence, but the fact that the whole thing looks like a mess – both from the viewpoint of the implementers (the countries that paid for these laptops) and from that of OLPC.
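For readers less familiar with the method, here is a minimal sketch in Python of the kind of two-period difference-in-differences comparison the Nepal paper relies on. The group means below are invented purely for illustration; they are not the study's data.

```python
# Illustrative two-period difference-in-differences with made-up numbers;
# these are NOT the Nepal study's data, just a sketch of the logic.

# Hypothetical mean math scores (0-100) before and after laptop distribution
scores = {
    ("treated", "pre"): 41.0, ("treated", "post"): 46.0,
    ("control", "pre"): 48.0, ("control", "post"): 51.0,
}

treated_change = scores[("treated", "post")] - scores[("treated", "pre")]  # +5.0
control_change = scores[("control", "post")] - scores[("control", "pre")]  # +3.0

# The DiD estimate nets out the common trend; it is only credible if treated
# and control schools would have trended alike without the laptops, which is
# exactly the assumption that mean reversion threatens in the Nepal setting.
did_estimate = treated_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:+.1f} points")  # +2.0
```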
First, though, let's go back and think for another second about whether it would be reasonable to expect improvements in mastery of curricular material if we just give each student in a developing country a laptop. Another study (gated; WP version available here), published in the Quarterly Journal of Economics last year, found that children who won a voucher to purchase a computer had lower school grades but higher computer skills and improved cognitive ability. Interestingly, parental supervision that protected time spent on homework preserved test scores without reducing the gains in computer literacy and cognitive ability. So, if you just give kids a computer, we find out that they'll use it. The use is likely heterogeneous in the way described by Banerjee et al. in "The Miracle of Microfinance?": just as loans can be used for consumption or investment, computers can be used either way depending on the child's type and circumstances. But, without substantial additional effort, it seems unlikely that the children will read books on these computers (the OLPCs were loaded with a large number of e-books in the programs mentioned above) or do their homework using them. If parents pay attention, the time spent on the computer may come out of other leisure activities; otherwise, it will likely come out of time spent on learning how to read and do math, leading to the sorts of effects described above. (There is more work on the use of technology in education, with mixed results – I will not review the literature here, but Michael Trucano keeps an active and informative blog on this issue.)
The reason I call this a mess is that I am not sure (a) how the governments (and the organizations that help them) came to purchase so many of these laptops to begin with, and (b) why their evaluations were not designed differently – to learn as much as we can from them about the potential of particular technologies for building human capital. Let's discuss these in order:
My understanding is that each laptop costs approximately $200. That's a lot of money, ignoring any other costs of distribution, software development, training, etc. The Peru study suggests that the Peruvian government bought 900,000 of these laptops. Couldn't spending US$180 million on laptops wait until some careful evaluation had been conducted? In my last blog post I talked about moving from efficacy to effectiveness in social science field trials. This is the opposite: there are now a couple of studies that did the best they could given that the governments were already implementing programs built around OLPC (measuring effectiveness, kind of), but how were those governments convinced of the efficacy of OLPC to start implementing these programs in the first place?
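The back-of-the-envelope arithmetic behind that figure uses only the per-laptop price and quantity cited above; everything else is excluded:

```python
# Back-of-the-envelope hardware cost of the Peruvian purchase, using only the
# per-laptop price and quantity cited above; distribution, software
# development, training, and maintenance are excluded.
unit_cost_usd = 200        # approximate price per XO laptop
laptops_bought = 900_000   # laptops purchased by the Peruvian government
total_usd = unit_cost_usd * laptops_bought
print(f"Hardware outlay: US${total_usd:,}")  # US$180,000,000
```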
Bruce Wydick, in a guest post he wrote for us a few months back, suggests one explanation: some interventions are hyped without proper evidence. Under that state of the world, the XO laptop becomes the next shiny solution to our problems in one area – a panacea. When I searched for evidence that OLPC may significantly improve learning, I got this sentence on their website, with no links to any studies or corroboration: "Extensively field-tested and validated among some of the poorest and most remote populations on earth, constructionism emphasizes what Papert calls 'learning learning' as the fundamental educational experience." On the basis of what evidence did the UNDP, as far back as 2006, sign a memorandum of understanding with OLPC to support national governments in deploying these laptops in schools?
If I were running OLPC, I would hire a credible third-party evaluator to run an efficacy trial. Whatever aspects of human capital I am proposing my laptops improve (reading skills, cognitive ability, or non-cognitive skills), I would measure all of them carefully under ideal circumstances: I would vary the intervention by having trained teachers or not, specially designed learning software or not, internet access or not, permission for children to take the laptops home or not, etc. I'd also conduct a thorough review of the literature on what kinds of long-term improvements in welfare, poverty reduction, growth, etc. such gains in human capital might produce. If the trial showed no effects, or effects below the threshold needed to be meaningful or cost-effective, I'd go back to the drawing board. If it showed larger effects, I could then start working with governments to evaluate pilots that look like scaled-up versions of these programs, confronting problems with internet access, stolen laptops, teacher capacity, etc. These steps would help me deploy many more laptops, which furthers my goal as a non-profit organization.
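To make that design concrete, here is a minimal sketch, in Python, of how one might cross-randomize those four components across schools in a factorial efficacy trial. The number of schools, the school IDs, and the random seed are hypothetical, not a description of any actual OLPC evaluation.

```python
# Illustrative 2x2x2x2 factorial assignment for a hypothetical efficacy trial:
# each school is randomized across four components of the laptop package.
# School IDs, the number of schools, and the seed are made up for illustration.
import itertools
import random

random.seed(42)
schools = [f"school_{i:03d}" for i in range(1, 161)]  # 160 hypothetical schools

components = {
    "teacher_training":  (True, False),
    "learning_software": (True, False),
    "internet_access":   (True, False),
    "take_laptop_home":  (True, False),
}

# All 16 combinations of the four components, assigned in equal-sized arms
arms = list(itertools.product(*components.values()))
random.shuffle(schools)
assignment = {
    school: dict(zip(components, arms[i % len(arms)]))
    for i, school in enumerate(schools)
}

# With equal-sized arms, each component's effect (e.g. learning software) can
# be estimated by comparing schools across that margin while averaging over
# the other components.
print(assignment["school_001"])
```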
But at least we can understand why OLPC did not undertake these steps: they already believe that these laptops are good for children (apparently even at the current price tag), and governments are already buying large quantities with the help of international development organizations. But why didn't the governments in Latin America, where apparently most OLPC deployments have happened so far, insist on better evidence before embarking on this path? In Peru, they may now reconsider the program, but more than $180 million has already been spent; in Nepal, the Department of Education was wise enough to do a small pilot first and hence spent a small amount on the laptops, but it did not give enough thought to designing the evaluation properly. Many of the authors of the Peru study are at the Inter-American Development Bank (IDB), which seems to have collaborated with the Peruvian government in evaluating OLPC there – perhaps they can comment on the process.
One important role larger development organizations like the World Bank or the IDB can play is testing big ideas like these across multiple countries and settings. No one with a pulse in 2012 thinks that cheap laptops are not a good thing: we're just trying to decide whether we should be spending precious funds on subsidizing them for families with young children. The same goes for the Millennium Villages: perhaps the 'big bang' approach has merit. But every such idea needs to be assessed properly, allowing us to learn as much as possible from each study. The bigger the idea and the hype, the more important the evidence becomes.
We have come some distance from the days when we implemented projects and programs on the belief that they would work, without much in the way of thorough evaluation. These days, an array of tools is available to examine program impacts, and policymakers are much more open to tweaking program implementation to facilitate credible evaluations. But donors and governments are still vulnerable to spending large sums on the latest fads, the magic bullets – only to have the evaluations follow, rather than precede, the spending…
We are also still mainly opportunistic about what gets evaluated: we get a call from someone saying they are about to start implementing project X or program Y, and we jump in if it sounds interesting. That is too late and quite haphazard when it comes to answering important questions. As researchers and as policymakers, we all have to be more proactive in producing evidence before decisions are made. Until then, studies like the ones covered here will remain second-best solutions, putting out fires instead of preventing them.