One of the biggest economic benefits of schooling is increased labor market earnings. For many people, education and experience are their only assets, which is why I believe it is so important to understand the economic returns to investments in schooling.
Is payment by results just the most recent over-hyped solution for development, or is it an effective incentive for accelerating change?
When reading up on payment by results (PbR) recently I was struck by the contrast between how quickly it has spread through the aid world and how little evidence there is that it actually works.
In a way, this is unavoidable with a new idea – you make the case for it based on theory, then you implement, then you test and improve or abandon. In this case the theory, ably argued by the Center for Global Development (CGD) and others, was that PbR aligns incentives in developing country governments with development outcomes, and encourages innovation, since it does not specify how to, for example, reduce maternal mortality, but merely rewards governments when they achieve it.
Those arguments have certainly persuaded a bunch of donors. The UK government (pdf) says that this “new form of financing that makes payments contingent on the independent verification of results ... is a cross government reform priority”. The UK’s Department for International Development (DfID) called its 2014 PbR strategy Sharpening Incentives to Perform (pdf) and promised to make it “a major part of the way DfID works in future”. David Cameron, the British prime minister, waxes lyrical on the topic.
But I seem to be coming up against a long list of potential problems with PbR. Let’s start with Paul Clist and Stefan Dercon: 12 Principles for PbR in International Development (pdf), who set out a series of situations in which PbR is either unsuitable or likely to backfire. For example, if results cannot be unambiguously measured, lawyers are going to have a field day when a donor tries to refuse payment by arguing the results haven’t been achieved. They also make the point that PbR makes no sense if the recipient government already wants to achieve a certain goal – in that case you should just give them the money up front and let them get on with it.
We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?
Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:
One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.
Back in 2005, I helped put together a 'quick guide to ICT and education challenges and research questions' in developing countries. This list was meant to inform a research program at the time sponsored by the World Bank's infoDev program, but I figured I'd make it public, because the barriers to publishing were so low (copy -> paste -> save -> upload) and in case doing so might be useful to anyone else.
While I don't know to what extent others may have actually found this list helpful, I have seen this document referenced over the years in various funding proposals, and by other funding agencies. Over the past week I've (rather surprisingly) heard two separate organizations reference this rather old document while considering their research priorities going forward related to investigating possible uses of information and communication technologies (ICTs) to help meet educational goals in low and middle income countries around the world. This made me wonder how these 50 research questions have held up over the years.
Are they still relevant?
What did we miss, ignore or not understand?
The list of research questions to be investigated going forward was a sort of companion document to Knowledge maps: What we know (and what we don't) about ICT use in education in developing countries. It was in many ways a creature of its time and context. The formulation of the research questions identified was in part influenced by some stated interests of the European Commission (which was co-funding some of the work), and I knew that some research questions would resonate with other potential funders at the time (including the World Bank itself) who were interested in related areas (see, for example, the first and last research questions). The list of research questions was thus somewhat idiosyncratic, did not presume to be comprehensive in its treatment of the topic, and was not meant to imply that certain areas of research interest were 'more important' than others not included on the list.
That said, in general the list seems to have held up quite well, and many of the research questions from 2005 continue to resonate in 2015. In some ways, this resonance is unfortunate, as it suggests that we still don't know answers to a lot of very basic questions. Indeed, in some cases we may know as little in 2015 as we knew in 2005, despite the explosion of activity and investment (and rhetoric) in exploring the relevance of technology use in education to help meet a wide variety of challenges faced by education systems, communities, teachers and learners around the world. This is not to imply that we haven't learned anything, of course (an upcoming EduTech blog post will look at two very useful surveys of research findings that have been published in the past year), but that we still have a long way to go.
Some comments and observations, with the benefit of hindsight and looking forward
The full list of research questions from 2005 is copied at the bottom of this blog post (here's the original list as published, with explanation and commentary on individual items).
Reviewing this list, a few things jump out at me:
As efforts to distribute low cost computing devices and connectivity to schools pick up steam in developing countries around the world, many ministries of education are systematically thinking about the large scale use of digital educational content for the first time. Given that many countries have already spent, are spending, or soon plan to spend large amounts of money on computer hardware, they are often less willing or able to consider large scale purchases of digital learning materials -- at least until they get a better handle on what works, what doesn't and what they really need. In some cases this phenomenon is consistent with one of the ten 'worst practices' in ICT use in education which have been previously discussed on the EduTech blog: "Think about educational content only after you have rolled out your hardware". Whether or not considerations of digital learning materials are happening 'too early' or 'too late', it is of course encouraging that they are now happening within many ministries of education.
As arguably the world's highest profile digital educational content offering -- and free at that! -- with materials in scores of languages, it is perhaps not surprising that many ministries of education are proposing to use Khan Academy content in their schools.
The promise and potential for using materials from Khan Academy (and other groups as well) is often pretty clear. Less is known about the actual practice of using digital educational content in schools in middle and low income countries in systematic ways.
Last week saw a flurry of news reports in response to a single blog post about the well known One Laptop Per Child project. It's dead, proclaimed one news report as a result; it's not dead yet, countered another. Recalling Mark Twain's famous quotation, Wired chimed in to announce that Reports of One Laptop Per Child's death have been greatly exaggerated.
Whatever the status and future of the iconic initiative that has helped bring a few million green and white laptops to students in places like Uruguay, Peru and Rwanda, it is hard to deny how far the conversation has shifted. Ten years ago, when the idea was first floated, you heard a lot of people asking, ‘Why would you do such a thing?’ Ten years on, however, the idea of providing low cost computing devices like laptops and tablets to students is (for better and/or for worse, depending on your perspective) part of the mainstream conversation in countries all around the world.
What do we know about the impact and results of initiatives
to provide computing devices to students
in middle and low income countries around the world?
Last year I spent some time in Papua New Guinea (or PNG, as it is often called), where the World Bank is supporting a number of development projects, and has activities in both the ICT and education sectors. For reasons historical (PNG became an independent nation only in 1975, breaking off from Australia), economic (Australia is by far PNG's largest export market) and geographical (the PNG capital, Port Moresby, lies about 500 miles from Cairns, across the Coral Sea), Australia provides a large amount of support to the education sector in Papua New Guinea, and I was particularly interested in learning lessons from the experiences of AusAid, the (now former) Australian donor agency.
For those who haven't been there: PNG is a truly fascinating place. It is technically a middle income country because of its great mineral wealth but, according to the Australian government, "Despite positive economic growth rates in recent years, PNG’s social indicators are among the worst in the Asia Pacific. Approximately 85 per cent of PNG’s mainly rural population is poor and an estimated 18 per cent of people are extremely poor. Many lack access to basic services or transport. Poverty, unemployment and poor governance contribute to serious law and order problems."
Among other things, PNG faces vexing (and in some instances, nearly unique) circumstances related to remoteness (overland travel is often difficult and communities can be very isolated from each other as a result; air travel is often the only way to get from one place to another: with a landmass approximately the size of California, PNG has 562 airports -- more, for example, than China, India or the Philippines!) and language (PNG is considered the most linguistically diverse country in the world, with over 800 (!) languages spoken). The PNG education system faces a wide range of challenges as a result. PNG ranks only 156th on the Human Development Index and has a literacy rate of less than 60%. As an overview from the Australian government notes,
"These include poor access to schools, low student retention rates and issues in the quality of education. It is often hard for children to go to school, particularly in the rural areas, because of distance from villages to schools, lack of transport, and cost of school fees. There are not enough schools or classrooms to take in all school-aged children, and often the standard of school buildings is very poor. For those children who do go to school, retention rates are low. Teacher quality and lack of required teaching and educational materials are ongoing issues."
[For those who are interested, here is some general background on PNG from the World Bank, and from the part of the Australian Department of Foreign Affairs and Trade that used to be known as AusAid, a short report about World Bank activities to support education in PNG from last year and an overview of the World Bank education project called READ PNG.]
If you believe that innovation often comes about in response to tackling great challenges, sometimes in response to scarcities of various sorts, Papua New Guinea is perhaps one place to put that belief to the test.
Given the many great challenges facing PNG's education sector,
its low current capacity to meet these challenges,
and the fact that 'business as usual' is not working,
while at the same time mobile phone use has been growing rapidly across society,
might ICTs, and specifically mobile phones,
offer new opportunities to help meet many long-standing, 'conventional' needs
in perhaps 'unconventional' ways?
A small research project called SMS Story has been exploring answers to this question.
Not a week goes by where I don't receive an unsolicited email from a company touting the benefits of its new 'educational videogame'. Indeed, just last week I opened my inbox to find two separate emails proclaiming how two different mobile gaming apps were destined to "transform learning!!!" Now, in a lot of cases, I must confess that I am not always sure why something is an 'educational game', and not just a 'game' (although if I am in a difficult mood, I might offer that in too many instances an 'educational game' is 'a game that really isn't much fun'). That said, there is no denying that videogames are big business around the world. So -- increasingly -- is education. Even most people who fear the potential negative effects of some (or even most) videogames on young people would, at the same time, acknowledge the promise and potential for videogames to offer enriching learning experiences. The history of the introduction of educational technologies, however, is in many ways long on promise and potential, and short on actual evidence of how they impact learning in tangible and fundamental ways.
Much is made of the potential for ICTs to be used to promote more personalized learning experiences through the introduction of various types of ICT-enabled assessment systems. For me, it has long seemed like the most powerful real-time learning assessment engines have been found in videogames, where actions (or inactions) are often met with near instantaneous responses, to which the player is then challenged to respond in turn. This feedback loop -- taking an action, being presented with information as a result, having to synthesize and analyze this information and doing something as a result -- might meet some people's definition of 'learning'. A good videogame engages its users so strongly that they are willing to fail, and fail, and fail again, until they learn enough from this failure that they can proceed with the game. Even where educational software is not explicitly labeled as a 'game', designers are increasingly introducing game-like elements (badges, achievement bonuses, scoring systems) as a way to promote user (learner? player?) engagement as part of a process known as 'gamification'.
The use of videogames for educational purposes, or at least in educational contexts, is far from an OECD or U.S. phenomenon. Whether I am visiting a school computer lab after hours in central Russia, an Internet cafe filled with students in Indonesia or standing behind some schoolgirls carrying phones between classes in Tanzania, 'educational' videogames seem to be nearly everywhere. Past posts on the EduTech blog have profiled things like the use of video games on mobile phones to promote literacy in rural India and EVOKE, an online game for students across Africa which the World Bank helped sponsor a few years ago. When I speak with young software entrepreneurs in Nairobi or Accra or Manila, they often talk excitedly about the latest educational game they are developing (for markets local and distant).
Do educational games 'work'?
Are they 'effective'?
And if so: How can they be used in schools?
Questions such as these are of increasing interest to scholars. Given both the potential of videogames for learning, and how aggressively they are being marketed to many education systems, such questions should be of increasing interest to educational policymakers as well. Some recent research brings us a little closer to a time when we can answer some of them.
Final (for now) evaluationtastic installment on Oxfam’s attempts to do public warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie)
Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for evidence and rigor in their evidence collection and, worse, they hardly ever publish negative evaluations. The internal wrangling and pressure not to publish these must have been intense:
- ‘What will our donors say? How will we justify poor results to our funders and contributors?’
- ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
- ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
- ‘Our field managers will get upset, angry and discouraged when they read these.’
- ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
- ‘There are so many nuances on the ground. Detractors will mis-use these scores and ignore these ground realities.’
The zeitgeist may indeed be transparency, but few organizations actually practise it.
Last year on this blog, I asked a few questions (eLearning, Africa and ... China?) as a result of my participation in a related event in Dar es Salaam where lots of my African colleagues were ‘talking about China’, but where few Chinese (researchers, practitioners, firms, officials) were present. This year's eLearning Africa event in Benin, in contrast, featured for the first time a delegation of researchers from China, a visit organized by the International Research and Training Centre for Rural Education (INRULED), a UNESCO research center headquartered at Beijing Normal University (with additional outposts at Baoding, Nanjing and Gansu). Hopefully this is just the beginning of a positive trend toward opening up access to knowledge about what is working (and what isn’t) related to ICT use in education in places in rural China that may more closely resemble the situations and contexts of many developing countries than do those drawn from experiences in, for example, Boston or Singapore (or from Shanghai and Beijing, for that matter). Establishing working level linkages between researchers and practitioners (and affiliated institutions) in China and Africa can be vital to helping encourage such knowledge exchanges.