We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?
Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:
One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
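Drawing such a backdated sample reproducibly is straightforward. The sketch below is purely illustrative (the project identifiers and portfolio size are hypothetical); fixing a random seed means the selection itself can be audited and independently re-drawn:

```python
import random

# Hypothetical project identifiers standing in for the 2005 portfolio;
# in practice these would come from the organization's project database.
portfolio_2005 = [f"PRJ-2005-{i:03d}" for i in range(1, 201)]

# A seeded generator makes the draw reproducible, so the sampling
# step of the research design is transparent and auditable.
rng = random.Random(2005)
sample = rng.sample(portfolio_2005, k=20)

print(len(sample))  # 20 distinct projects, each then subject to a retrospective review
```

The methodological challenges discussed below all come after this step, of course; the point of seeding the draw is simply that no one can accuse the researchers of cherry-picking which 20 projects to revisit.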
There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or to changes in attitudes and beliefs.
While I don't know to what extent others have actually found this list helpful, I have seen the document referenced over the years in various funding proposals, and by other funding agencies. Over the past week I've (rather surprisingly) heard two separate organizations cite this rather old document while considering their research priorities going forward related to possible uses of information and communication technologies (ICTs) to help meet educational goals in low and middle income countries around the world, and so I wondered how these 50 research questions had held up over the years.
Are they still relevant?
What did we miss, ignore or not understand?
The list of research questions to be investigated going forward was a sort of companion document to Knowledge maps: What we know (and what we don't) about ICT use in education in developing countries. It was in many ways a creature of its time and context. The formulation of the research questions was influenced in part by the stated interests of the European Commission (which was co-funding some of the work), and I knew that some research questions would resonate with other potential funders at the time (including the World Bank itself) who were interested in related areas (see, for example, the first and last research questions). The list of research questions was thus somewhat idiosyncratic, did not presume to be comprehensive in its treatment of the topic, and was not meant to imply that certain areas of research interest were 'more important' than others not included on the list.
That said, in general the list seems to have held up quite well, and many of the research questions from 2005 continue to resonate in 2015. In some ways, this resonance is unfortunate, as it suggests that we still don't know answers to a lot of very basic questions. Indeed, in some cases we may know as little in 2015 as we knew in 2005, despite the explosion of activity and investment (and rhetoric) in exploring the relevance of technology use in education to help meet a wide variety of challenges faced by education systems, communities, teachers and learners around the world. This is not to imply that we haven't learned anything, of course (an upcoming EduTech blog post will look at two very useful surveys of research findings that have been published in the past year), but that we still have a long way to go.
Some comments and observations, with the benefit of hindsight and when looking forward
The full list of research questions from 2005 is copied at the bottom of this blog post (here's the original list as published, with explanation and commentary on individual items).
Over the past five years, there has perhaps been no educational technology initiative that has been more celebrated around the world than the Khan Academy. Born of efforts by one man to provide tutoring help for his niece at a distance, in 2006 the Khan Academy became an NGO providing short video tutorials on YouTube for students. It is now a multi-million dollar non-profit enterprise, reaching over ten million students a month in both after-school and in-school settings around the world with a combination of offerings, including over 100,000 exercise problems, over 5,000 short videos on YouTube, and an online 'personalized learning dashboard'. Large scale efforts to translate Khan Academy into scores of languages are underway, with over 1000 learning items currently available in eleven languages (including French, Xhosa, Bangla, Turkish, Urdu, Portuguese, Arabic and Spanish). Founder Sal Khan's related TED video ("Let's use video to reinvent education") has been viewed over three million times, and the Khan Academy has been the leading example cited in support of a movement to 'flip the classroom', with video lectures viewed at home while teachers assist students doing their 'homework' in class.
As efforts to distribute low cost computing devices and connectivity to schools pick up steam in developing countries around the world, many ministries of education are systematically thinking about the large scale use of digital educational content for the first time. Given that many countries have already spent, are spending, or soon plan to spend large amounts of money on computer hardware, they are often less willing or able to consider large scale purchases of digital learning materials -- at least until they get a better handle on what works, what doesn't and what they really need. In some cases this phenomenon is consistent with one of the ten 'worst practices' in ICT use in education which have been previously discussed on the EduTech blog: "Think about educational content only after you have rolled out your hardware". Whether or not considerations of digital learning materials are happening 'too early' or 'too late', it is of course encouraging that they are now happening within many ministries of education.
As arguably the world's highest profile digital educational content offering -- and free at that! -- with materials in scores of languages, it is perhaps not surprising that many ministries of education are proposing to use Khan Academy content in their schools.
The promise and potential for using materials from Khan Academy (and other groups as well) is often pretty clear. Less is known about the actual practice of using digital educational content in schools in middle and low income countries in systematic ways.
What do we know about how Khan Academy is actually being used in practice, and how might this knowledge be useful or relevant to educational policymakers in developing countries?
Whatever the status and future of the iconic initiative that has helped bring a few million green and white laptops to students in places like Uruguay, Peru and Rwanda, it is worth remembering that, ten years ago, when the idea was first thrown out there, you heard a lot of people asking, ‘Why would you do such a thing?’ Ten years on, however, the idea of providing low cost computing devices like laptops and tablets to students is now (for better and/or for worse, depending on your perspective) part of the mainstream conversation in countries all around the world.
What do we know about the impact and results of initiatives
to provide computing devices to students
in middle and low income countries around the world?
Last year I spent some time in Papua New Guinea (or PNG, as it is often called), where the World Bank is supporting a number of development projects, and has activities in both the ICT and education sectors. For reasons historical (PNG became an independent nation only in 1975, breaking off from Australia), economic (Australia is by far PNG's largest export market) and geographical (the PNG capital, Port Moresby, lies about 500 miles from Cairns, across the Coral Sea), Australia provides a large amount of support to the education sector in Papua New Guinea, and I was particularly interested in learning lessons from the experiences of AusAid, the (now former) Australian donor agency.
For those who haven't been there: PNG is a truly fascinating place. It is technically a middle income country because of its great mineral wealth but, according to the Australian government, "Despite positive economic growth rates in recent years, PNG’s social indicators are among the worst in the Asia Pacific. Approximately 85 per cent of PNG’s mainly rural population is poor and an estimated 18 per cent of people are extremely poor. Many lack access to basic services or transport. Poverty, unemployment and poor governance contribute to serious law and order problems."
Among other things, PNG faces vexing (and in some instances, rather unique) circumstances related to remoteness (overland travel is often difficult and communities can be very isolated from each other as a result; air travel is often the only way to get from one place to another: with a landmass approximately the size of California, PNG has 562 airports -- more, for example, than China, India or the Philippines!) and language (PNG is considered the most linguistically diverse country in the world, with over 800 (!) languages spoken). The PNG education system faces a wide range of challenges as a result. PNG ranks only 156th on the Human Development Index and has a literacy rate of less than 60%. As an overview from the Australian government notes,
"These include poor access to schools, low student retention rates and issues in the quality of education. It is often hard for children to go to school, particularly in the rural areas, because of distance from villages to schools, lack of transport, and cost of school fees. There are not enough schools or classrooms to take in all school-aged children, and often the standard of school buildings is very poor. For those children who do go to school, retention rates are low. Teacher quality and lack of required teaching and educational materials are ongoing issues."
If you believe that innovation often comes about in response to tackling great challenges, sometimes in response to scarcities of various sorts, Papua New Guinea is perhaps one place to put that belief to the test.
Given the many great challenges facing PNG's education sector,
its low current capacity to meet these challenges,
and the fact that 'business as usual' is not working,
while at the same time mobile phone use has been growing rapidly across society,
might ICTs, and specifically mobile phones,
offer new opportunities to help meet many long-standing, 'conventional' needs
in perhaps 'unconventional' ways?
A small research project called SMS Story has been exploring answers to this question.
Not a week goes by where I don't receive an unsolicited email from a company touting the benefits of its new 'educational videogame'. Indeed, just last week I opened my inbox to find two separate emails proclaiming how two different mobile gaming apps were destined to "transform learning!!!" Now, in a lot of cases, I must confess that I am not always sure why something is an 'educational game', and not just a 'game' (although if I am in a difficult mood, I might offer that in too many instances an 'educational game' is 'a game that really isn't much fun'). That said, there is no denying that videogames are big business around the world. So -- increasingly -- is education. Even most people who fear the potential negative effects of some (or even most) videogames on young people would, at the same time, acknowledge the promise and potential for videogames to offer enriching learning experiences. The history of the introduction of educational technologies is in many ways long on promise and potential, however, and short on actual evidence of how they impact learning in tangible and fundamental ways.
Much is made of the potential for ICTs to be used to promote more personalized learning experiences through the introduction of various types of ICT-enabled assessment systems. For me, it has long seemed like the most powerful real-time learning assessment engines have been found in videogames, where actions (or inactions) are often met with near instantaneous responses, to which the player is then challenged to respond in turn. This feedback loop -- taking an action, being presented with information as a result, having to synthesize and analyze this information and doing something as a result -- might meet some people's definition of 'learning'. A good videogame engages its users so strongly that they are willing to fail, and fail, and fail again, until they learn enough from this failure that they can proceed with the game. Even where educational software is not explicitly labeled as a 'game', designers are increasingly introducing game-like elements (badges, achievement bonuses, scoring systems) as a way to promote user (learner? player?) engagement as part of a process known as 'gamification'.
The use of videogames for educational purposes, or at least in educational contexts, is far from an OECD or U.S. phenomenon. Whether I am visiting a school computer lab after hours in central Russia, an Internet cafe filled with students in Indonesia or standing behind some schoolgirls carrying phones between classes in Tanzania, 'educational' videogames seem to be nearly everywhere. Past posts on the EduTech blog have profiled things like the use of video games on mobile phones to promote literacy in rural India and EVOKE, an online game for students across Africa which the World Bank helped sponsor a few years ago. When I speak with young software entrepreneurs in Nairobi or Accra or Manila, they often talk excitedly about the latest educational game they are developing (for markets local and distant).
Do educational games 'work'? Are they 'effective'?
And if so: How can they be used in schools?
Questions such as these are of increasing interest to scholars. Given both their potential for learning, and how aggressively videogames are being marketed to many education systems, they should be of increasing interest to educational policymakers as well. Some recent research brings us a little closer to a time when we can answer some of them.
Final (for now) evaluationtastic installment on Oxfam’s attempts to do public warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie).
Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for evidence and rigor in their evidence collection, and worse, they hardly ever publish negative evaluations. The internal wrangling, and the pressure not to publish these, must have been considerable:
‘What will our donors say? How will we justify poor results to our funders and contributors?’
‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
‘Our field managers will get upset, angry and discouraged when they read these.’
‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
‘There are so many nuances on the ground. Detractors will mis-use these scores and ignore these ground realities.’
The zeitgeist may indeed be transparency, but few organizations are actually doing it.
Last year on this blog, I asked a few questions (eLearning, Africa and ... China?) as a result of my participation in a related event in Dar es Salaam where lots of my African colleagues were ‘talking about China’, but where few Chinese (researchers, practitioners, firms, officials) were present. This year's eLearning Africa event in Benin, in contrast, featured for the first time a delegation of researchers from China, a visit organized by the International Research and Training Centre for Rural Education (INRULED), a UNESCO research center headquartered at Beijing Normal University (with additional outposts at Baoding, Nanjing and Gansu). Hopefully this is just the beginning of a positive trend to open up access to knowledge about what is working (and isn’t working) related to ICT use in education in places in rural China that might more closely resemble certain situations and contexts in many developing countries than those drawn from experiences in, for example, Boston or Singapore (or from Shanghai and Beijing, for that matter). Establishing working level linkages between researchers and practitioners (and affiliated institutions) in China and Africa can be vital to helping encourage such knowledge exchanges.
Drawing insights from his readings of a few evaluations of technology use (one in Nepal [PDF] and one in Romania), he notes that, at first glance, some large scale implementations of educational technologies are, for lack of a more technical term, rather a 'mess':
"The reason I call this a mess is because I am not sure (a) how the governments (and the organizations that help them) purchased a whole lot of these laptops to begin with and (b) why their evaluations have not been designed differently – to learn as much as we can from them on the potential of particular technologies in building human capital."
Three members of the team at IDB that led the OLPC Peru evaluation have responded ("One Laptop per Child revisited") in part to question (b) in the portion of Berk's informative and engaging post excerpted above. I thought I'd try to help address question (a).
First let me say: I have no firsthand knowledge of the background to the OLPC Peru project specifically, nor of the motivations of various key actors instrumental in helping to decide to implement the program there as it was implemented, beyond what I have read about it online. (There is quite a lot written about this on the web; I won't attempt to summarize the many vibrant commentaries on this subject, but, for those who speak Spanish or who are handy with online translation tools, some time with your favorite search engine should unearth some related facts and a lot of opinions -- which I don't feel well-placed to evaluate in their specifics.) I have never worked in Peru, and have had only informal contact with some of the key people working on the project there. The World Bank, while maintaining a regular dialogue with the Ministry of Education in Peru, was not to my knowledge involved in the OLPC project there in any substantive way. The World Bank itself is helping to evaluate a small OLPC pilot in Sri Lanka; a draft set of findings from that research is currently circulating and hopefully it will be released in the not too distant future.
That said, I *have* been involved in various capacities with *lots* of other large scale initiatives in other countries where lots of computers were purchased for use in schools and/or by students and/or teachers, and so I do feel I can offer some general comments based on this experience, in case it might be of interest to anyone.
At an event last year in Uruguay for policymakers from around the world, a few experts who have worked in the field of technology use in education for a long time commented that there was, in their opinion and in contrast to their experiences even a few years ago, a surprising amount of consensus among the people gathered together on what was really important, what wasn't, and on ways to proceed (and not to proceed). Over the past two years, I have increasingly made the same comment to myself when involved in similar discussions in other parts of the world. At one level, this has been a welcome development. People who work with the use of ICTs in education tend to be a highly connected bunch, and the diffusion of better (cheaper, faster) connectivity has helped to ensure that 'good practices and ideas' are shared with greater velocity than perhaps ever before. Even some groups and people associated with the 'give kids computers, expect magic to happen' philosophy appear to have had some of their more extreme views tempered in recent years by the reality of actually trying to put this philosophy into practice.
That said, the fact that "everyone agrees about most everything" isn't always such a good thing. Divergent opinions and voices are important, if only to help us reconsider why we believe what we believe. (They are also important because they might actually be right, of course, and all of the rest of us wrong, but that's another matter!) Even where there is an emerging consensus among leading thinkers and practitioners about what is critically important, this doesn't mean that what is actually being done reflects this consensus -- or indeed, that this consensus 'expert' opinion is relevant in all contexts.