Impact as Narrative: Guest post by Bruce Wydick
There is arguably little that makes development economists sharpen their fangs as much as the use of tear-jerking, heart-warming, credit-card-mobilizing anecdotes by development NGOs to support impact claims. In an informal conversation about NGO websites at a recent conference, Paul Niehaus half-jokingly suggested that a good use of graduate research assistant time might be a compilation of the top 25 most egregious “impact” webpages, based purely on narratives of outliers by (well-meaning) non-profits. If nothing else, it would serve as an excellent tool for teaching undergraduates about causal effects while massaging the egos of development economists by reminding us that there is at least ONE reason we have value to the rest of society.
But why so often is narrative used in place of good data in impact claims? There are at least two reasons. The first is well-known: a lack of understanding of causal effects, or in some cases, an unwillingness to submit a program to rigorous evaluation. The second is more interesting: a good narrative soundly beats even the best data. Economists and scientists of all ilks need to digest what for many is an unpleasant fact: In the battle for hearts and minds of human beings, narrative will consistently outperform data in its ability to influence human thinking and motivate human action. And if we fail to grasp this fact, even the best impact evaluations that generate the best counterfactuals, with the most statistically efficient estimations, and the most thoughtfully crafted standard errors are likely to inspire less real change in policy and behavior…than someone else’s really good story.
The reason is that the human brain has difficulty connecting emotionally with data, even an expert analysis of data. And it is emotion that typically produces the motivation necessary to elicit an active response. This is unfortunate news for development economists, but it has been demonstrated by psychologist Paul Slovic in a series of papers (2007, 2008, 2010), in which he coined the term “psychic numbing” for the way people behaviorally ignore data overload. In these studies he and his co-authors demonstrate that people generate sympathy toward an identifiable victim of poverty or war with whom they are able to identify, but fail to generate sympathy toward statistical victims. As a result, even the most convincing analysis of data often fails to create change.
They discovered this in an experiment in which they offered subjects the opportunity to contribute $5 to Save the Children. Half the subjects received a message with factual information taken from the Save the Children website describing poverty conditions for millions of affected individuals in Sub-Saharan Africa (the statistical victim). The other half received the story of one impoverished girl in Mali along with her picture (an identifiable victim). On average, subjects given the identifiable victim gave $2.83, while subjects given the statistical victim gave $1.17. The researchers also included a crossed treatment, in which they presented half the subjects in each of the previous two treatments with the following text:
“We’d like to tell you about some research conducted by social scientists. This research shows that people typically react more strongly to specific people who have problems than to statistics about people with problems. For example, when ‘‘Baby Jessica’’ fell into a well in Texas in 1989, people sent over $700,000 for her rescue effort. Statistics—e.g., the thousands of children who will almost surely die in automobile accidents this coming year—seldom evoke such strong reactions.”
The result, as seen in the figure, was that subjects dramatically decreased giving to the identifiable victim, but unfortunately gave little more to the statistical victim.
Narrative, and the personalization of truth in the broader sense, appears to influence behavior more strongly than even very convincing data. Consider the foot-dragging by many in instituting change in the face of mountains of data supporting human impact on global climate change. In contrast, the “Crying Indian” public service announcements radically changed American behavior regarding public litter and pollution. Frankly, climate change needs a Crying Indian, because narrative represents a tremendously powerful force for collective action, for good or for ill.
Narrative has displayed considerable power to create vigorous movements in microeconomic development. The number of clients served by microfinance grew from 13 million to over 200 million between 1997 and 2012--not because researchers had carefully demonstrated positive impacts--but largely due to the wide appeal of a compelling narrative of entrepreneurialism among the poor, buoyed by thousands of inspiring anecdotes. Everyone embraced this narrative: the left, the right, and the center. Recently, RCT impact data have contributed to a waning enthusiasm for microfinance, but arguably no more so than narratives of over-indebtedness and abusive threats by microfinance debt collectors.
Narrative is also a powerful vehicle for communicating esoteric concepts. Some of the most influential economic models--Nobel Prize-winning models--such as Akerlof’s use of the “market for lemons” to explain the consequences of information asymmetries, and Diamond’s “coconut model” explaining multiple equilibria in the unemployment rate, have been presented in the context of narrative or parable. Indeed the tremendous impact of these models on the way we think about economic life may stem from their ability to harness story to convey abstract truth. Both theory and empirics benefit from narrative.
Development economists need to become more skillful at weaving narrative, story, and parable in and around their empirical work. But how can we incorporate the power of narrative into our impact research papers? Our impact studies will themselves have a bigger impact if we learn to incorporate narrative into the presentation of our research, so that the issue is not narrative vs. data, but the distinction between “biased narratives,” which promote a misleading view of an average treatment effect on the treated (ATT), and “unbiased narratives,” which help a consumer of our research better grasp what we present as an unbiased estimate of the ATT.
I want to suggest one particular tool that I will call the “median impact narrative,” which (though not precisely the average--because the average subject typically does not factually exist) recounts the narrative of one or a few of the middle-impact subjects in a study. So instead of highlighting the outlier, Juana, who has built a small textile empire from a few microloans, we conclude with a paragraph describing Eduardo, who after two years of microfinance borrowing has dedicated more hours to growing his carpentry business and used microloans to weather two modest-size economic shocks to his household: an illness of his wife and the theft of some tools. If one were to choose the subject for the median impact narrative rigorously, one could select the treated subject whose realized impacts lie at the smallest Euclidean distance (weighting impact variables by the inverse of the variance-covariance matrix) from the estimated ATTs.
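For concreteness, the selection rule just described--choosing the treated subject whose impact vector is closest to the estimated ATTs, with distance weighted by the inverse variance-covariance matrix (i.e., Mahalanobis distance)--could be sketched as follows. This is only an illustrative sketch with made-up numbers; the function name and the toy data are mine, not from any actual study.

```python
import numpy as np

def median_impact_subject(treated_outcomes, att_estimates):
    """Return the index of the treated subject whose realized impact
    vector lies closest to the estimated ATTs, using squared distance
    weighted by the inverse of the variance-covariance matrix
    (Mahalanobis distance)."""
    X = np.asarray(treated_outcomes, dtype=float)   # n_subjects x n_impact_vars
    att = np.asarray(att_estimates, dtype=float)
    # pseudo-inverse guards against a singular covariance matrix
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diffs = X - att
    # squared Mahalanobis distance of each subject's impacts from the ATT vector
    dists = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)
    return int(np.argmin(dists))

# Toy example: two impact variables for five treated subjects.
# Subject 2's realized impacts equal the ATT vector exactly.
outcomes = np.array([[0.0, 0.0],
                     [10.0, 8.0],
                     [1.0, 2.0],
                     [0.5, 3.0],
                     [2.0, 1.0]])
att = np.array([1.0, 2.0])
idx = median_impact_subject(outcomes, att)  # selects subject 2
```

In practice the ATT vector would come from the study's estimation rather than a raw mean, and one might restrict the candidate pool to subjects with complete outcome data.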
Consider, for example, the “median impact narrative” of the outstanding 2013 Haushofer and Shapiro study of GiveDirectly, a study finding an array of substantial impacts from unconditional cash transfers in Kenya. The median impact narrative might recount the experience of Joseph, a goat herder with a family of six who received $1100 in five electronic cash transfers. Joseph and his wife both have only two years of formal schooling and have always struggled to make ends meet with their four children. At baseline, Joseph’s children went to bed hungry an average of three days a week. Eighteen months after receiving the transfers, his goat herd had increased by 51%, bringing added economic stability to his household. He also reported a 30% reduction in his children going to bed hungry in the period before the follow-up survey, and a 42% reduction in the number of days his children went completely without food. Cortisol tests indicated that Joseph experienced a reduction in stress of about 0.14 standard deviations relative to the same difference in the control group. This kind of narrative about the median subject from this particular study cements a truthful image of impact into the mind of a reader.
A false dichotomy has emerged between the use of narrative and data analysis; either can be equally misleading or helpful in conveying truth about causal effects. As researchers begin to incorporate narrative into their scientific work, it will begin to create a standard for the appropriate use of narrative by non-profits, making it easier to insist that narratives present an unbiased picture that represents a truthful image of average impacts.
Bruce Wydick is Professor of Economics at the University of San Francisco and author of the new development economics novel, The Taste of Many Mountains (Thomas Nelson/HarperCollins, 2014), a fiction narrative based on fieldwork related to the recent study of de Janvry, McIntosh, and Sadoulet on the impact of fair trade coffee.

Well done, good points!
I wrote dozens of gooey NGO tearjerkers in the early years of my development career, and the sole purpose of all this pap is to get the maximum amount of cash into the till. This can on occasion mean days in the field until you find either (1) the most positive outlier in the whole valley, or (2) the only person in the valley who can credibly claim to have benefited from a project.
I never made stuff up - nor was I encouraged to - but I did see it as my job to go a thousand miles until I found the perfect beneficiary.
Sometimes it's really funny, like the well project in Afghanistan (a really good project, in fact) where a villager told me that "since we have the well, we have less malaria [sic], but we now have the cholera!". We drove on to the next village...
Also, Western private donors seem to like energetic, ambitious, hard-working people living in Rousseau-ian utopian communities where everyone works together in pursuit of the common good. It's strange how many folk look down on poor people living next door as lazy anti-social drunkards, but swallow the NGO line that poor people living in poor countries are unvaryingly the "deserving poor".
The best thing is to keep the PR and evaluation functions completely separate, with rigid firewalls in between. Organizational incentives mean that if you blur the line between the two, you won't get more honest PR, you'll just get even more dishonest evaluations than we already have now.
Thanks for sharing.
Great post. I like the point that telling stories doesn't have to be antithetical to good evaluation - you just have to pick the stories that are more representative, like the median. Good qualitative researchers have been doing this for years, but they have a harder time convincing readers that they've picked the median beneficiary.
Keeping PR and evaluations separate isn't always the best thing. It made MCC look a little silly when their PR published an outlier story. MCC prides itself on rigorous evaluation. I blogged it as "MCC success story is a failure" at http://bit.ly/1x896bI
Apologies, I am coming a little late to this conversation! Nevertheless, a few things. First, great that both Bruce and Bill have pointed out (again) that narrative has a useful value in (impact) evaluation. This is true not just for a sales hook but because it is critical to getting beyond 'did it work?' to 'why/not?'
I feel Bill's point should be sharper -- it's not just that narrative is not antithetical to good evaluation but, rather, it is constitutive of good evaluation and any learning and evidence-informed decision-making agenda.
I'd also like to push the idea of a median impact narrative a bit further. The basic point is a solid and important one: sampling strategy matters to qualitative work and for understanding what really happened for a range of people.
One consideration for sampling is that the same observables (independent vars) that drive sub-group analyses can also be used to help determine a qualitative sub-sample (capturing medians, outliers in both directions, etc).
A second consideration: in the spirit of Lieberman's call for nested analyses (or other forms of linked and sequential qual-quant work), the results of quantitative work can be used to inform the sampling of later qualitative work, targeting subjects who represent the range of outcome values.
Both these considerations should be fit into a framework that recognizes that qualitative work has its own versions of representativeness (credibility) as well as power (saturation) (which I ramble about here: http://blogs.worldbank.org/publicsphere/1-2014).
Finally, in all of this talk about appropriate sampling for credible qualitative work, we need to also be talking about credible analysis and *definitely* moving beyond cherry-picked quotes as the grand offering from qualitative work. Qualitative researchers in many fields have done a lot of good work on synthesizing across stories. This needs to be reflected in 'rigorous' evaluation practice. Qualitative work is not just for pop-out boxes (I pitch the idea of a qualitative pre-analysis plan, here: http://hlanthorn.com/2014/10/26/planning-for-qualitative-data-collectio…).
Thanks to both Bruce and Bill for bringing attention to an important topic in improving evaluation -- both for programmatic learning and for understanding theoretical mechanisms (as Levy Paluck points out in her paper on combining qualitative work with field experiments). I hope this is a discussion that keeps getting better.
Bill Savedoff: thank you very much for sharing the link to your "MCC success story is a failure" blog. It's a great piece.
It may be helpful in this context to distinguish between two audiences for story-telling: small private donors (who typically "think fast") and big institutional backers such as foundations or even Congress (who "think slow", hopefully). When it comes to the latter, median story telling can indeed make a lot of sense.
Fascinating blog thanks.
This is really good. It is also important to learn the personalities of the target audience - whether they are statistics-driven or stories-driven. A similar approach applies to sustainable energy investments: some investors are driven by the business case (profits, ROI, etc.), while some are driven by the impacts on climate, environment, and social responsibility.
This is a funny post, because I was chatting with a colleague at the Fund this morning on the way to work, who said to me that it's so much easier to tell stories about what you at the Bank do than what we do here. So at the Bank, you get to say: here's a road we built, and now a person living here doesn't have to walk tomatoes to the market, he now bikes, and his life is better. People understand it. But in my job, how do we share stories about reducing inflation and get people to care? He went on to say, and what about our gravestones? What will they say about me there? I don't want them to say "He was a good macroeconomist."
The same applies to water and sanitation projects. "Here's Ahmed who didn't get ill" is hard to communicate to small private donors - you can't attach an individual face and story to the very real benefits. DRR projects are probably even more difficult.
Hence the popularity of child sponsorship schemes with some NGOs. It's bad programming but great fundraising, so you raise funds for "sponsorship" but really do something quite different with the money. Kiva does the same thing with microcredit:
http://www.cgdev.org/blog/kiva-not-quite-what-it-seems
The greatest danger is perhaps that those very high up in the organization can start believing what their PR department produces for fundraising purposes and use it to inform policy decisions. Then you have a really serious problem.
This is useful. Paper abstracts often already contain similar information - the change to narrative would be easy but powerful.
I just wanted to point out that there are different types of audience of IEs. Sure, potential individual donors, but also policy makers, who are important and have a higher demand and (growing) understanding of unadorned numbers.
The power of communicators making the link between real results (and the failures that lead to them) is that it can inspire another kind of confidence in an organization. The staff and leadership of an organization may not have all the answers, but they can build their reputation by asking the right questions, being transparent, and continuing to learn and adapt.
With my class at Georgetown last semester, we developed new guidelines that explore how our sector can get beyond polarizing portrayals of global development and aid as all virtuous, or all wasted. See: http://issuu.com/howmatters/docs/the_development_element
And more here on why I left M&E for communications: http://www.how-matters.org/2014/09/10/power-of-the-pen-why-i-left-m-and…