We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?
Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:
One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who's to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.
While I don't know to what extent others may have actually found this list helpful, I have seen this document referenced over the years in various funding proposals, and by other funding agencies. Over the past week I've (rather surprisingly) heard two separate organizations reference this rather old document while considering their forward research priorities related to possible uses of information and communication technologies (ICTs) to help meet educational goals in low and middle income countries around the world. This got me wondering how these 50 research questions had held up over the years.
Are they still relevant?
What did we miss, ignore or not understand?
The list of research questions to be investigated going forward was a sort of companion document to Knowledge maps: What we know (and what we don't) about ICT use in education in developing countries. It was in many ways a creature of its time and context. The formulation of the research questions identified was in part influenced by some stated interests of the European Commission (which was co-funding some of the work) and I knew that some research questions would resonate with other potential funders at the time (including the World Bank itself) who were interested in related areas (see, for example, the first and last research questions). The list of research questions was thus somewhat idiosyncratic, did not presume to be comprehensive in its treatment of the topic, and was not meant to imply that certain areas of research interest were 'more important' than others not included on the list.
That said, in general the list seems to have held up quite well, and many of the research questions from 2005 continue to resonate in 2015. In some ways, this resonance is unfortunate, as it suggests that we still don't know answers to a lot of very basic questions. Indeed, in some cases we may know as little in 2015 as we knew in 2005, despite the explosion of activity and investment (and rhetoric) in exploring the relevance of technology use in education to help meet a wide variety of challenges faced by education systems, communities, teachers and learners around the world. This is not to imply that we haven't learned anything, of course (an upcoming EduTech blog post will look at two very useful surveys of research findings that have been published in the past year), but that we still have a long way to go.
Some comments and observations,
with the benefit of hindsight and when looking forward
The full list of research questions from 2005 is copied at the bottom of this blog post (here's the original list as published, with explanation and commentary on individual items).
Theories of change (ToCs) – will the idea stick around and shape future thinking on development, or slide back into the bubbling morass of aid jargon, forgotten and unlamented? Last week some leading ToC-istas at ODI, LSE and The Asia Foundation and a bunch of other organisations spent a whole day taking stock, and the discussion highlighted strengths, weaknesses and some looming decisions.
According to an excellent 2011 overview by Comic Relief, ToCs are an "on-going process of reflection to explore change and how it happens – and what that means for the part we play". They locate a programme or project within a wider analysis of how change comes about, draw on external learning about development, articulate our understanding of change and acknowledge wider systems and actors that influence change.
But the concept remains very fuzzy, partly because (according to a useful survey by Isobel Vogel) ToCs originated from two very different kinds of thought: evaluation (trying to clarify the links between inputs and outcomes) and social action, especially participatory and consciously reflexive approaches.
At the risk of gross generalization, the first group tends to treat ToCs as ‘logframes on steroids’, a useful tool to develop more complete and accurate chains of cause and effect. The second group tends to see the world in terms of complex adaptive systems, and believes the more linear approaches (if we do X then we will achieve Y) are a wild goose chase. These groups (well, actually they’re more of a spectrum) co-exist within organisations, and even among different individuals in country offices.
Social marketing emerged from the realization that marketing principles can be used not just to sell products but also to "sell" ideas, attitudes and behaviors. The purpose of any social marketing program, therefore, is to change the attitudes or behaviors of a target population -- for the greater social good.
In evaluating social marketing programs, the true test of effectiveness is not the number of flyers distributed or public service announcements aired but how the program impacted the lives of people.
Rebecca Firestone, a social epidemiologist at PSI with area specialties in sexual and reproductive health and non-communicable diseases, walks us through some best practices of social marketing and offers suggestions for improvement in the future. Chief among her suggestions is the need for more and better evaluation of social marketing programs.
Social Marketing Master Class: The Importance of Evaluation
Over the past five years, there has perhaps been no educational technology initiative that has been more celebrated around the world than the Khan Academy. Born of efforts by one man to provide tutoring help for his niece at a distance, in 2006 the Khan Academy became an NGO providing short video tutorials on YouTube for students. It is now a multi-million dollar non-profit enterprise, reaching over ten million students a month in both after-school and in-school settings around the world with a combination of offerings, including over 100,000 exercise problems, over 5,000 short videos on YouTube, and an online 'personalized learning dashboard'. Large scale efforts to translate Khan Academy into scores of languages are underway, with over 1000 learning items currently available in eleven languages (including French, Xhosa, Bangla, Turkish, Urdu, Portuguese, Arabic and Spanish). Founder Sal Khan's related TED video ("Let's use video to reinvent education") has been viewed over three million times, and the Khan Academy has been the leading example cited in support of a movement to 'flip the classroom', with video lectures viewed at home while teachers assist students doing their 'homework' in class.
As efforts to distribute low cost computing devices and connectivity to schools pick up steam in developing countries around the world, many ministries of education are systematically thinking about the large scale use of digital educational content for the first time. Given that many countries have already spent, are spending, or soon plan to spend large amounts of money on computer hardware, they are often less willing or able to consider large scale purchases of digital learning materials -- at least until they get a better handle on what works, what doesn't and what they really need. In some cases this phenomenon is consistent with one of the ten 'worst practices' in ICT use in education which have been previously discussed on the EduTech blog: "Think about educational content only after you have rolled out your hardware". Whether or not considerations of digital learning materials are happening 'too early' or 'too late', it is of course encouraging that they are now happening within many ministries of education.
As arguably the highest profile digital educational content offering in the world -- and free at that! -- with materials in scores of languages, it is perhaps not surprising that many ministries of education are proposing to use Khan Academy content in their schools.
The promise and potential for using materials from Khan Academy (and other groups as well) is often pretty clear. Less is known about the actual practice of using digital educational content in schools in middle and low income countries in systematic ways.
What do we know about how Khan Academy is actually being used in practice, and how might this knowledge be useful or relevant to educational policymakers in developing countries?
This is the third in our series of posts on evidence and decision-making; also posted on Heather’s blog. Here are Part 1 and Part 2.
In our last post, we wrote about factors – evidence and otherwise – influencing decision-making about development programmes. To do so, we have considered the premise of an agency deciding whether to continue or scale a given programme after piloting it and including an accompanying evaluation commissioned explicitly to inform that decision. This is a potential ‘ideal case’ of evidence-informed decision-making. Yet, the role of evidence in informing decisions is often unclear in practice.
What is clear is that transparent parameters for making decisions about how to allocate resources following a pilot may improve the legitimacy of those decisions. We have started, and continue in this post, to explore whether decision-making deliberations can be shaped ex ante so that, regardless of the outcome, stakeholders feel it was arrived at fairly. Such pre-commitment to the process of deliberation could carve out a specific role for evidence in decision-making. Clarifying the role of evidence would inform what types of questions decision-makers need answered and with what kinds of data, as we discussed here.
Whatever the status and future of the iconic initiative that has helped bring a few million green and white laptops to students in places like Uruguay, Peru and Rwanda, it is hard to deny that, ten years ago, when the idea was first thrown out there, you heard a lot of people asking, ‘Why would you do such a thing?’ Ten years on, however, the idea of providing low cost computing devices like laptops and tablets to students is now (for better and/or for worse, depending on your perspective) part of the mainstream conversation in countries all around the world.
What do we know about the impact and results of initiatives
to provide computing devices to students
in middle and low income countries around the world?
This is the second in a series of posts with Suvojit, initially planned as a series of two but growing to six…
Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.
And yet, the role of evidence in decision-making of this kind is unclear.
In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the “quality” or “rigor” of evidence by considering how well it addresses the questions and purposes needed for decision-making, with the most appropriate tools and timings to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on evidence.
Among those who talk about development & welfare policy/programs/projects, it is très chic to talk about evidence-informed decision-making (including the evidence on evidence-informed decision-making and the evidence on the evidence on…[insert infinite recursion]).
This concept — formerly best-known as evidence-based policy-making — is contrasted with faith-based or we-thought-really-really-hard-about-this-and-mean-well-based decision-making. It is also contrasted with the (sneaky) strategy of policy-based evidence-making. Using these approaches may lead to not-optimal decision-making, adoption of not-optimal policies and subsequent not-optimal outcomes.
In contrast, proponents of the evidence-informed decision-making approach believe that, through their approach, decision-makers are able to make sounder judgments about which policies offer the best way forward, which may not, and which should perhaps be repealed or revised. This may lead them to make decisions on policies according to these judgments, which, if properly implemented or rolled back, may in turn improve development and welfare outcomes. It is also important to bear in mind, however, that it is not evidence alone that drives policymaking. We discuss this idea in more detail in our next post.
Last year I spent some time in Papua New Guinea (or PNG, as it is often called), where the World Bank is supporting a number of development projects, and has activities in both the ICT and education sectors. For reasons historical (PNG became an independent nation only in 1975, breaking off from Australia), economic (Australia is by far PNG's largest export market) and geographical (the PNG capital, Port Moresby, lies about 500 miles from Cairns, across the Coral Sea), Australia provides a large amount of support to the education sector in Papua New Guinea, and I was particularly interested in learning lessons from the experiences of AusAid, the (now former) Australian donor agency.
For those who haven't been there: PNG is a truly fascinating place. It is technically a middle income country because of its great mineral wealth but, according to the Australian government, "Despite positive economic growth rates in recent years, PNG’s social indicators are among the worst in the Asia Pacific. Approximately 85 per cent of PNG’s mainly rural population is poor and an estimated 18 per cent of people are extremely poor. Many lack access to basic services or transport. Poverty, unemployment and poor governance contribute to serious law and order problems."
Among other things, PNG faces vexing (and in some instances, rather unique) circumstances related to remoteness (overland travel is often difficult and communities can be very isolated from each other as a result; air travel is often the only way to get from one place to another: with a landmass approximately that of California, PNG has 562 airports -- more, for example, than China, India or the Philippines!) and language (PNG is considered the most linguistically diverse country in the world, with over 800 (!) languages spoken). The PNG education system faces a wide range of challenges as a result. PNG ranks only 156th on the Human Development Index and has a literacy rate of less than 60%. As an overview from the Australian government notes,
"These include poor access to schools, low student retention rates and issues in the quality of education. It is often hard for children to go to school, particularly in the rural areas, because of distance from villages to schools, lack of transport, and cost of school fees. There are not enough schools or classrooms to take in all school-aged children, and often the standard of school buildings is very poor. For those children who do go to school, retention rates are low. Teacher quality and lack of required teaching and educational materials are ongoing issues."
If you believe that innovation often comes about in response to tackling great challenges, sometimes in response to scarcities of various sorts, Papua New Guinea is perhaps one place to put that belief to the test.
Given the many great challenges facing PNG's education sector,
its low current capacity to meet these challenges,
and the fact that 'business as usual' is not working,
while at the same time mobile phone use has been growing rapidly across society,
might ICTs, and specifically mobile phones,
offer new opportunities to help meet many long-standing, 'conventional' needs
in perhaps 'unconventional' ways?
A small research project called SMS Story has been exploring answers to this question.