

Hawthorne effects: Past and future


I have two main points in this blog. The first is a public service announcement in the guise of history: not long ago, I heard someone credit the Hawthorne effect to an elusive, eponymous Dr. Hawthorne, when in fact no such person is directly tied to these studies. The second is a call to expand our conception of Hawthorne effects (or really, observer or evaluator effects) in the practice of social science monitoring and evaluation.
Hawthorne history

The Hawthorne effect earned its name from the factory in which the study was sited: the Western Electric Company's Hawthorne plant, near Chicago. These mid-1920s studies, carried out by researchers from MIT, Harvard, and the US National Research Council, were predicated on then in-vogue ideas of scientific management. Specifically, the researchers examined the effect of artificial illumination on worker productivity, raising and lowering the artificial light available to the women assembling electric relays (winding coils of wire) in the factory until it was equivalent to moonlight.
The finding that made social science history (first in the nascent fields of industrial and organizational psychology, then slowly trickling out from there) was that worker productivity increased when the amount of light was changed, whether it was raised or lowered, and decreased when the study ended. It was then suggested that the workers' productivity increased because of the attention paid to them via the study, not because the light was altered.

Thus, the “Hawthorne effect” was named and acknowledged: the change in an outcome that can be attributed to behavioral responses among subjects/respondents/beneficiaries simply by virtue of being observed as part of an experiment or evaluation.
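As a stylized way to see why this matters for evaluators (my own shorthand, not anything from the original studies), the measured change in an outcome can be decomposed as

\Delta Y_{\text{observed}} = \beta_{\text{intervention}} + \beta_{\text{observation}}

so a naive before-and-after comparison attributes all of the observed change to the intervention, and overstates its effect whenever the observation term is positive.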

What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?


We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA), or produced a technological innovation (M-PESA), or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews,  but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
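As a minimal sketch of what that sampling step might look like in Python (the portfolio size, project ID format and seed here are illustrative assumptions, not Oxfam's actual data):

import random

# Hypothetical list of project IDs from the 2005 portfolio; in practice
# this would be pulled from the organisation's project database.
portfolio_2005 = ["PRJ-2005-%03d" % i for i in range(1, 251)]

# Fix the seed so the draw is reproducible and auditable.
random.seed(2005)

# Draw 20 projects without replacement.
sample = random.sample(portfolio_2005, k=20)

for project_id in sorted(sample):
    print(project_id)

Publishing the seed and the sampling frame would let others verify that the 20 projects were not cherry-picked, which is exactly the concern the exercise is meant to address.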

There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who's to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed: how do you allow for positive spillovers and ripple effects? Those ripple effects could have spread much wider, to government policy, or to changes in attitudes and beliefs.

Research questions about technology use in education in developing countries


Back in 2005, I helped put together a 'quick guide to ICT and education challenges and research questions' in developing countries. This list was meant to inform a research program sponsored at the time by the World Bank's infoDev program, but I figured I'd make it public, because the barriers to publishing were so low (copy -> paste -> save -> upload) and because it might be useful to others.

While I don't know to what extent others may have actually found this list helpful, I have seen the document referenced over the years in various funding proposals and by other funding agencies. Over the past week, I have (rather surprisingly) heard two separate organizations cite this rather old document while considering their research priorities going forward, related to investigating possible uses of information and communication technologies (ICTs) to help meet educational goals in low- and middle-income countries around the world. This made me wonder how these 50 research questions had held up over the years.

Are they still relevant?


What did we miss, ignore or not understand?

The list of research questions to be investigated going forward was a sort of companion document to Knowledge maps: What we know (and what we don't) about ICT use in education in developing countries. It was in many ways a creature of its time and context. The formulation of the research questions was in part influenced by some stated interests of the European Commission (which was co-funding some of the work), and I knew that some questions would resonate with other potential funders at the time (including the World Bank itself) who were interested in related areas (see, for example, the first and last research questions). The list was thus somewhat idiosyncratic, did not presume to be comprehensive in its treatment of the topic, and was not meant to imply that certain areas of research interest were 'more important' than others not included on the list.

That said, in general the list seems to have held up quite well, and many of the research questions from 2005 continue to resonate in 2015. In some ways this resonance is unfortunate, as it suggests that we still don't know the answers to a lot of very basic questions. Indeed, in some cases we may know little more in 2015 than we knew in 2005, despite the explosion of activity and investment (and rhetoric) in exploring how technology use in education might help meet the wide variety of challenges faced by education systems, communities, teachers and learners around the world. This is not to imply that we haven't learned anything, of course (an upcoming EduTech blog post will look at two very useful surveys of research findings published in the past year), but that we still have a long way to go.

Some comments and observations, with the benefit of hindsight and when looking forward

The full list of research questions from 2005 is copied at the bottom of this blog post (here's the original list as published, with explanation and commentary on individual items).

Reviewing this list, a few things jump out at me:

Where have we got to on Theories of Change? Passing fad or paradigm shift?


Theories of change (ToCs): will the idea stick around and shape future thinking on development, or slide back into the bubbling morass of aid jargon, forgotten and unlamented? Last week some leading ToC-istas at ODI, LSE, The Asia Foundation and a bunch of other organisations spent a whole day taking stock, and the discussion highlighted strengths, weaknesses and some looming decisions.

(Summary, agenda + presentations here)

According to an excellent 2011 overview by Comic Relief, ToCs are an "on-going process of reflection to explore change and how it happens – and what that means for the part we play". They locate a programme or project within a wider analysis of how change comes about, draw on external learning about development, articulate our understanding of change and acknowledge wider systems and actors that influence change.

But the concept remains very fuzzy, partly because (according to a useful survey by Isobel Vogel) ToCs originated from two very different kinds of thought: evaluation (trying to clarify the links between inputs and outcomes) and social action, especially participatory and consciously reflexive approaches.

At the risk of gross generalization, the first group tends to treat ToCs as 'logframes on steroids', a useful tool for developing more complete and accurate chains of cause and effect. The second group tends to see the world in terms of complex adaptive systems and believes that more linear approaches (if we do X, then we will achieve Y) are a wild goose chase. These groups (well, actually they form more of a spectrum) co-exist within organisations, and even among individuals within the same country office.

Social Marketing Master Class: The Importance of Evaluation


Social marketing emerged from the realization that marketing principles can be used not just to sell products but also to "sell" ideas, attitudes and behaviors.  The purpose of any social marketing program, therefore, is to change the attitudes or behaviors of a target population-- for the greater social good.

In evaluating social marketing programs, the true test of effectiveness is not the number of flyers distributed or public service announcements aired but how the program impacted the lives of people.

Rebecca Firestone, a social epidemiologist at PSI with area specialties in sexual and reproductive health and non-communicable diseases, walks us through some best practices of social marketing and offers suggestions for improvement in the future.  Chief among her suggestions is the need for more and better evaluation of social marketing programs.


Evaluating the Khan Academy

Over the past five years, there has perhaps been no educational technology initiative that has been more celebrated around the world than the Khan Academy. Born of efforts by one man to provide tutoring help for his niece at a distance, in 2006 the Khan Academy became an NGO providing short video tutorials on YouTube for students. It is now a multi-million dollar non-profit enterprise, reaching over ten million students a month in both after-school and in-school settings around the world with a combination of offerings, including over 100,000 exercise problems, over 5,000 short videos on YouTube, and an online 'personalized learning dashboard'. Large scale efforts to translate Khan Academy into scores of languages are underway, with over 1000 learning items currently available in eleven languages (including French, Xhosa, Bangla, Turkish, Urdu, Portuguese, Arabic and Spanish). Founder Sal Khan's related TED video ("Let's use video to reinvent education") has been viewed over three million times, and the Khan Academy has been the leading example cited in support of a movement to 'flip the classroom', with video lectures viewed at home while teachers assist students doing their 'homework' in class.

As efforts to distribute low cost computing devices and connectivity to schools pick up steam in developing countries around the world, many ministries of education are systematically thinking about the large scale use of digital educational content for the first time. Given that many countries have already spent, are spending, or soon plan to spend large amounts of money on computer hardware, they are often less willing or able to consider large scale purchases of digital learning materials -- at least until they get a better handle on what works, what doesn't and what they really need. In some cases this phenomenon is consistent with one of the ten 'worst practices' in ICT use in education which have been previously discussed on the EduTech blog: "Think about educational content only after you have rolled out your hardware". Whether or not considerations of digital learning materials are happening 'too early' or 'too late', it is of course encouraging that they are now happening within many ministries of education.

As arguably the highest profile digital educational content offering in the world -- and free at that! -- with materials in scores of languages, it is perhaps not surprising that many ministries of education are proposing to use Khan Academy content in their schools.

The promise and potential for using materials from Khan Academy (and other groups as well) is often pretty clear. Less is known about the actual practice of using digital educational content in schools in middle and low income countries in systematic ways.
What do we know about how Khan Academy is actually being used in practice, and how might this knowledge be useful or relevant to educational policymakers in developing countries?

‘Relevant Reasons’ in Decision-Making (3 of 3)


This is the third in our series of posts on evidence and decision-making, also posted on Heather's blog. Here are Part 1 and Part 2.
In our last post, we wrote about the factors (evidence and otherwise) that influence decision-making about development programmes. To do so, we considered the premise of an agency deciding whether to continue or scale a given programme after piloting it, with an accompanying evaluation commissioned explicitly to inform that decision. This is a potential 'ideal case' of evidence-informed decision-making. Yet the role of evidence in informing decisions is often unclear in practice.

What is clear is that transparent parameters for making decisions about how to allocate resources following a pilot may improve the legitimacy of those decisions. We have started, and continue in this post, to explore whether decision-making deliberations can be shaped ex ante so that, regardless of the outcome, stakeholders feel the decision was arrived at fairly. Such pre-commitment to the process of deliberation could carve out a specific role for evidence in decision-making. Clarifying that role would, in turn, inform what types of questions decision-makers need answered, and with what kinds of data, as we discussed here.

Two new rigorous evaluations of technology use in education


Last week saw a flurry of news reports in response to a single blog post about the well-known One Laptop Per Child project. It's dead, proclaimed one news report in response; it's not dead yet, countered another. Recalling Mark Twain's famous quotation, Wired chimed in to announce that Reports of One Laptop Per Child's death have been greatly exaggerated.

Whatever the status and future of the iconic initiative that has helped bring a few million green and white laptops to students in places like Uruguay, Peru and Rwanda, it is hard to deny that, ten years ago, when the idea was first thrown out there, you heard a lot of people asking, 'Why would you do such a thing?' Ten years on, however, the idea of providing low cost computing devices like laptops and tablets to students is now (for better and/or for worse, depending on your perspective) part of the mainstream conversation in countries all around the world.

What do we know about the impact and results of initiatives
to provide computing devices to students
in middle and low income countries around the world?

Have Evidence, Will… Um, Erm (2 of 2)


This is the second in a series of posts with suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind is unclear.

In response, we argued for something akin to Patton's utilisation-focused evaluation. Such an approach assesses the "quality" or "rigor" of evidence by how well it addresses the questions and purposes that matter for decision-making, with the most appropriate tools and timing for a particular political-economic moment, and with attention to the capacity of decision-makers to act on the evidence.

Have Evidence, Will… Um, Erm?


Commissioning Evidence

Among those who talk about development & welfare policy/programs/projects, it is très chic to talk about evidence-informed decision-making (including the evidence on evidence-informed decision-making and the evidence on the evidence on... [insert infinite recursion]).

This concept, formerly best known as evidence-based policy-making, is contrasted with faith-based or we-thought-really-really-hard-about-this-and-mean-well-based decision-making. It is also contrasted with the (sneaky) strategy of policy-based evidence-making. Relying on these latter approaches may lead to not-optimal decision-making, the adoption of not-optimal policies and, subsequently, not-optimal outcomes.

In contrast, proponents of evidence-informed decision-making believe that their approach allows decision-makers to make sounder judgments about which policies offer the best way forward, which may not, and which should perhaps be revised or repealed. Decisions made according to these judgments, if properly implemented or rolled back, may in turn improve development and welfare outcomes. It is also important to bear in mind, however, that evidence alone does not drive policymaking. We discuss this idea in more detail in our next post.