
Evaluation

Social Marketing Master Class: The Importance of Evaluation

By Roxanne Bauer

Social marketing emerged from the realization that marketing principles can be used not just to sell products but also to "sell" ideas, attitudes and behaviors. The purpose of any social marketing program, therefore, is to change the attitudes or behaviors of a target population for the greater social good.

In evaluating social marketing programs, the true test of effectiveness is not the number of flyers distributed or public service announcements aired but how the program impacted the lives of people.

Rebecca Firestone, a social epidemiologist at PSI with area specialties in sexual and reproductive health and non-communicable diseases, walks us through some best practices of social marketing and offers suggestions for improvement in the future.  Chief among her suggestions is the need for more and better evaluation of social marketing programs.



‘Relevant Reasons’ in Decision-Making (3 of 3)

By Heather Lanthorn

This is the third in our series of posts on evidence and decision-making; also posted on Heather’s blog. Here are Part 1 and Part 2.
***
In our last post, we wrote about factors – evidence and otherwise – influencing decision-making about development programmes. To do so, we considered the scenario of an agency deciding whether to continue or scale a given programme after piloting it alongside an evaluation commissioned explicitly to inform that decision. This is a potential ‘ideal case’ of evidence-informed decision-making. Yet, in practice, the role of evidence in informing decisions is often unclear.

What is clear is that transparent parameters for deciding how to allocate resources after a pilot may improve the legitimacy of those decisions. We have started, and continue in this post, to explore whether decision-making deliberations can be shaped ex ante so that, whatever the outcome, stakeholders feel the decision was arrived at fairly. Such pre-commitment to the process of deliberation could carve out a specific role for evidence in decision-making. Clarifying that role would, in turn, inform what types of questions decision-makers need answered and with what kinds of data, as we discussed here.

Have Evidence, Will… Um, Erm (2 of 2)

By Heather Lanthorn

This is the second in a series of posts with suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind is unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach judges the “quality” or “rigor” of evidence by how well it answers the questions that matter for the decision at hand, using the most appropriate tools and timing for the particular political-economic moment, including the capacity of decision-makers to act on the evidence.

Have Evidence, Will… Um, Erm?

By Heather Lanthorn

Commissioning Evidence

Among those who talk about development & welfare policy/programs/projects, it is très chic to talk about evidence-informed decision-making (including the evidence on evidence-informed decision-making and the evidence on the evidence on…[insert infinite recursion]).

This concept — formerly best-known as evidence-based policy-making — is contrasted with faith-based or we-thought-really-really-hard-about-this-and-mean-well-based decision-making. It is also contrasted with the (sneaky) strategy of policy-based evidence-making. Using these approaches may lead to not-optimal decision-making, adoption of not-optimal policies and subsequent not-optimal outcomes.

In contrast, proponents of evidence-informed decision-making believe that their approach allows decision-makers to make sounder judgments about which policies offer the best way forward, which may not, and which should perhaps be repealed or revised. This may lead them to make decisions according to these judgments, which, if properly implemented or rolled back, may in turn improve development and welfare outcomes. It is also important to bear in mind, however, that evidence alone does not drive policymaking. We discuss this idea in more detail in our next post.

#4 from 2013: Numbers Are Never Enough (especially when dealing with Big Data)

By Susan Moeller

Our Top Ten Blog Posts by readership in 2013
This post was originally published on January 8, 2013


The newest trend in Big Data is the personal touch. When both the New York Times and Fast Company have headlines that trumpet “Sure, Big Data Is Great. But So Is Intuition.” (the Times) and “Without Human Insight, Big Data Is Just A Bunch Of Numbers.” (Fast Company), you know that a major trend is afoot.

So what’s up?

The claims for what Big Data can do have been extraordinary; witness Andrew McAfee and Erik Brynjolfsson’s seminal October article in the Harvard Business Review, “Big Data: The Management Revolution,” which began with the showstopper: “‘You can’t manage what you don’t measure.’” After that statement, it’s hard not to feel that Big Data will provide the solution to everything. As the HBR article noted: “…the recent explosion of digital data is so important. Simply put, because of big data, managers can measure, and hence know, radically more about their businesses, and directly translate that knowledge into improved decision making and performance.”

Ups and Downs in the Struggle for Accountability – Four New Real Time Studies

By Duncan Green

OK, we’ve had real time evaluations, we’ve done transparency and accountability initiatives, so why not combine the two? The thoroughly brilliant International Budget Partnership is doing just that, teaming up with academic researchers to follow in real time the ups and downs of four TAIs in Mexico, Brazil, South Africa and Tanzania. Read the case study summaries (only four pages each, with full versions if you want to go deeper), if you can, but below I’ll copy most of the overview blog by IBP research manager Albert van Zyl.

By following the work rather than tidying it all up with a neat but deceitful retrospective evaluation, they record the true messiness of building social contracts between citizens and states: the ups and downs, the almost-giving-up-and-then-winning, the crucial roles of individuals, the importance of scandals and serendipity.

What is a Theory of Change and How Do We Use It?

By Duncan Green

I’m planning to write a paper on this, but thought I’d kick off with a blog and pick your brains for references, suggestions etc. Everyone these days (funders, bosses etc) seems to be demanding a Theory of Change (ToC), although when challenged, many have only the haziest notion of what they mean by it. It’s a great opportunity, but also a risk, if ToCs become so debased that they are no more than logframes on steroids. So in internal conversations, blogs etc I’m gradually fleshing out a description of a ToC. When I ran this past some practical evaluation Oxfamers, they helpfully added a reality check – how to have a ToC conversation with an already existing programme, rather than a blank sheet of paper?

But first the blank sheet of paper. If you’re a regular visitor to this blog, you’ll probably recognize some of this, because it builds on the kinds of questions I ask when trying to understand past change episodes, but throws them forward. Once you’ve decided roughly what you want to work on (and that involves a whole separate piece of analysis), I reckon it’s handy to break down a ToC into four phases, captured in the diagram.

Will Midterm Evaluations Become the Dinosaurs of Development?

By Milica Begovic

I argued a few months back that information we get from story-telling is fundamentally different to what we get from polls and surveys. If we can’t predict what’s coming next, then we have to continuously work to understand what has and is happening today. (See: Patterns of voices from the Balkans – working with UNDP)

Methods we’re all used to using (surveys, mid-term evaluations) are ill-prepared to do that for us and increasingly act as our blindfolds.

Why stories?

As I started working through the stories we collected, this question became even stronger.

To give you some background, we started testing whether stories could help us:

So What Do I Take Away from the Great Evidence Debate? Final Thoughts (for now)

By Duncan Green

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog), is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and 20 pages of top-quality comments (I couldn’t face adding the Twitter traffic).

(For those of you that missed the wonk-war, we had an initial critique of the results agenda from Chris Roche and Rosalind Eyben, a take-no-prisoners response from Chris Whitty and Stefan Dercon, then a final salvo from Roche and Eyben + lots of comments and an online poll. Epic.)

On the debate itself, I had a strong sense that it was unhelpfully entrenched throughout – the two sides were largely talking past each other, accusing each other of ‘straw manism’ (with some justification) and lobbing in the odd cheap shot (my favourite, from Chris and Stefan: ‘Please complete the sentence “More biased research is better because…”’ – debaters take note). Commenter Marcus Jenal summed it up perfectly:

Lant Pritchett v the Randomistas on the Nature of Evidence - Is a Wonkwar Brewing?

By Duncan Green

Recently I had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined some quantitative work (for example the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But the tacit hierarchy of these different kinds of knowledge worried me – anything with a number attached had a privileged position, however partial the number or questionable the process for arriving at it. In contrast, decades of experience were not even credited as ‘evidence’, but often written off as ‘opinion’. It felt like we were in danger of discounting our richest source of insight – gut feeling.

In this state of discomfort, I went off for lunch with Lant Pritchett (he seems to have forgiven me for my screw-up of a couple of years ago). He’s a brilliant and original thinker and speaker on any number of development issues, but I was most struck by the vehemence of his critique of the RCT randomistas and the quest for experimental certainty. Don’t get me (or him) wrong, he thinks the results agenda is crucial in ‘moving from an input orientation to a performance orientation’ and set out his views as long ago as 2002 in a paper called ‘It pays to be ignorant’, but he sees the current emphasis on RCTs as an example of the failings of ‘thin accountability’ compared to the thick version.
