Decision Making

The Things We Do: Regret can trip us up before we’ve even begun

Roxanne Bauer

Few people doubt the merits of pausing to "think things through" before making a decision. Without doing so, we fear we may end up making a decision that leads to harm and misfortune.  However, this process is itself a double-edged sword that can lead us astray.

We've all been forced to make tough decisions in life. From career progression and where to live to which route to take on a trip, we navigate life's choices by considering our options and weighing them against each other. In the context of these decisions, we attempt to predict the negative consequences of an action or decision and the likelihood that those consequences will actually occur.

Regret: we seek to avoid it when we can

In a famous study on regret theory, Loomes and Sugden present the idea that, in making decisions, individuals consider not only the knowledge they have and the resources at their disposal but also the likely scenarios that will result from their choices. They further suggest that the pleasure associated with the results of a choice depends not only on the nature of those results but also on the nature of the alternative results forgone. Individuals consider the regret their future selves may feel if it turns out they would have been better off had they chosen differently. Likewise, they consider the joy their future selves may feel if the consequences of their decisions turn out to be optimal. Thus, our desire to avoid the pain of regret is both a cause and a consequence of our desire to avoid losses (loss aversion).
According to researchers, individuals exhibit “regret aversion” when they fear their decision will turn out to be wrong in hindsight. Sometimes, we engage in regret aversion before making a decision, leading us to hem and haw and lose out on opportunities. Other times, we engage in regret aversion after a decision is already made, leading us to hold on to losing assets or undesirable positions because we don’t want to admit our choice was not the best one. Many of the interventions that behavioral economists suggest, such as automatic enrollment, default options, and providing information to consumers, are set up to reduce the ex post regret individuals will face for not doing something that’s in their interest.
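The intuition behind Loomes and Sugden's model can be made concrete with a small numerical sketch. Everything below is invented for illustration: the option names, payoffs, probabilities, and the simple quadratic regret penalty are all assumptions, not the authors' actual specification (their theory uses a general nonlinear regret-rejoice function; a convex penalty is used here so that large regrets loom disproportionately large).

```python
# Illustrative sketch of regret-theory-style valuation (all numbers invented).
# An action's value in each state is its payoff minus a convex penalty on
# the regret: how much better the best alternative would have done there.

def regret_adjusted_value(action, alternatives, probs, payoffs, c=0.001):
    """Expected value of `action`, discounted by anticipated regret.

    payoffs[a][s]: payoff of action a in state s
    probs[s]:     probability of state s
    c:            strength of regret aversion (c=0 gives plain expected value)
    """
    total = 0.0
    for s, p in enumerate(probs):
        best_alternative = max(payoffs[alt][s] for alt in alternatives)
        regret = max(0.0, best_alternative - payoffs[action][s])
        total += p * (payoffs[action][s] - c * regret ** 2)
    return total

# A safe option versus a risky one with a rare but painful downside.
probs = [0.9, 0.1]
payoffs = {"safe": [50, 50], "risky": [80, -200]}

for action in payoffs:
    others = [a for a in payoffs if a != action]
    print(action, regret_adjusted_value(action, others, probs, payoffs))
```

With c=0 the risky option has the higher expected value (52 versus 50), but once anticipated regret is priced in, the safe option wins: a toy version of regret aversion steering a choice away from the outcome that could hurt most in hindsight.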

What does it mean to do policy-relevant research and evaluation?

Heather Lanthorn

What does it mean to do policy-relevant research and evaluation? How does it differ from policy-adjacent research and evaluation? Heather Lanthorn explores these questions and offers some food for thought on intention and decision making.

This post is really a conversation with myself, which I started here, but I would be happy if everyone was conversing on it a bit more: what does it mean to do research that is ‘policy relevant’? From my vantage point in impact evaluation and applied political-economy and stakeholder analyses, ‘policy relevant’ is a glossy label that a researcher or organization can apply to his/her own work at his/her own discretion. This is confusing, slightly unsettling, and probably takes some of the gloss off the label.

The main thrust of the discussion is this: we (researchers, donors, folks who have generally bought-into the goal of evidence- and evaluation-informed decision-making) should be clear (and more humble) about what is meant by ‘policy relevant’ research and evaluation. I don’t have an answer to this, but I try to lay out some of the key facets, below.
Overall, we need more thought and clarity – as well as humility – around what it means to be doing policy-relevant work. As a start, we may try to distinguish work that is ‘policy adjacent’ (done on a policy) from work that is ‘decision-relevant’ or ‘policymaker-relevant’ (done with the explicit, ex ante purpose of informing a policy or practice decision, and therefore with an intent to be actionable).
I believe the distinction I am trying to draw echoes what Tom Pepinsky wrestled with when he blogged that it was the “murky and quirky” questions and research (a delightful turn of phrase that Tom borrowed from Don Emmerson) “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own idiosyncratic settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change) when compared to a grander, paradigmatic question.
Throughout, my claim is not that one type of work is more important or that one type will always inform better decision-making. I am, however, asking that, as “policy-relevant” becomes an increasingly popular buzzword, we pause and think about what it means.

Quote of the Week: Arnab Goswami

Sina Odugbemi

"I don't believe in creating an artificial consensus, which some people are comfortable with. So if there is something wrong, you can ask yourself two questions: 'Why did it happen? Will the people who did it go unpunished?' "

Arnab Goswami, an Indian journalist and the editor-in-chief of Indian news channel Times Now. He anchors The Newshour, a live debate show that airs weekdays on Times Now and hosts the television programme Frankly Speaking with Arnab.

Buffet of Champions: What Kind Do We Need for Impact Evaluations and Policy?

Heather Lanthorn
I realize that the thesis of “we may need a new kind of champion” sounds like a rather anemic pitch for Guardians of the Galaxy. Moreover, it may raise inflated hopes that I am going to propose that dance-offs be used more often to decide policy questions. While I don’t necessarily deny that this is a fantastic idea (and one that would certainly boost C-SPAN viewership), I want to quickly dash hopes that it is the main premise of this post. Rather, I am curious why “we” believe that policy champions will be keen on promoting and using impact evaluation (and subsequent evidence syntheses), and I want to suggest that another range of actors, which I call “evidence” and “issue” champions, may be more natural allies.

There has been a recurring storyline in recent literature and musings on (impact) evaluation and policy- or decision-making:
  • First, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
  • Second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report, “What is the evidence on evidence-informed policy-making”, as well as here).
  • Third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
  • Fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into the debates and decisions. This even includes bringing policy champions (say, bureaucrats) on as research PIs.

There seems to be a sleight of hand at work in the above formulation, and it is somewhat worrying in terms of equipoise and the possible use of the range of results that can emerge from an impact evaluation study. Said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at the start of the evaluation.

‘Relevant Reasons’ in Decision-Making (3 of 3)

Heather Lanthorn

This is the third in our series of posts on evidence and decision-making, also posted on Heather’s blog. Here are Part 1 and Part 2.
In our last post, we wrote about the factors – evidence and otherwise – influencing decision-making about development programmes. To do so, we considered the case of an agency deciding whether to continue or scale up a given programme after piloting it, with an accompanying evaluation commissioned explicitly to inform that decision. This is a potential ‘ideal case’ of evidence-informed decision-making. Yet, in practice, the role of evidence in informing decisions is often unclear.

What is clear is that transparent parameters for making decisions about how to allocate resources following a pilot may improve the legitimacy of those decisions. We have started, and continue in this post, to explore whether decision-making deliberations can be shaped ex ante so that, regardless of the outcome, stakeholders feel it was arrived at fairly. Such pre-commitment to the process of deliberation could carve out a specific role for evidence in decision-making. Clarifying the role of evidence would inform what types of questions decision-makers need answered and with what kinds of data, as we discussed here.

How Can a Post-2015 Agreement Drive Real Change? Please Read and Comment on this Draft Paper

Duncan Green

The post-2015 discussion on what should succeed the Millennium Development Goals (MDGs) is picking up steam, with barely a day going by without some new paper, consultation or high-level meeting. So I, along with Stephen Hale and Matthew Lockwood, have decided to add to the growing slush-pile with a new discussion paper. We want you to read the draft and help us improve it. Contributions by 5 November please, either as comments on the blog, or emailed to research[at]

The paper argues that there’s an urgent need to bring power and politics into the centre of the post-2015 discussion. To have impact, any post-2015 arrangement has to take into account the lessons of over a decade of implementing the existing MDGs, and be shaped by the profound global change since the MDGs were debated over the course of the 1990s and early noughties.  We’re hoping that this will be at the centre of this week’s discussions in London linked to the High Level Panel and in Berlin at the Berlin Civil Society Center on Development post 2015.

I Only Hear What I Want to Hear – And So Do You

Anne-Katrin Arnold

Do you know those people who only hear what they want to hear? Who interpret everything in a way that fits their own views? Yes? Deal with them every day? Well, chances are, you’re one of them. Biased information processing is a common phenomenon. It happens when the information we receive is out of sync with what we believe to be true, or want to be true, or when the information is inconvenient for us. This obviously has huge implications for communication campaigns in development.

Will Suna get a dam despite the change in rainfall?

Philip Angell

Earlier this year, we were in a country called Suna. If the name sounds unfamiliar, that is because it is an imaginary developing country in West Africa. For one day, two dozen senior Ghanaian officials and business leaders in Accra participated in a simulation exercise, grappling with whether to build a new hydroelectric dam against a backdrop of uncertain data on water availability for the next 50 years. Although the situation was fictionalized, the problem is quite real for decision makers in many parts of the world.

The broader question was: How do you prepare for the tough, contentious, complex decisions required to deal with impacts of climate change that now seem inevitable? 

That question, posed in the simulation exercise, was central to the 13th edition of the World Resources Report: Decision Making in a Changing Climate (jointly published by the World Bank, UNDP, UNEP, and the World Resources Institute). We took a distinctly new approach to researching and writing this report, one that engaged a wide range of experts and practitioners from the very beginning and that tried new techniques.

One important part of that new approach was to engage government officials, members of civil society and the private sector in two developing countries, Ghana and Vietnam, to participate in scenario exercises involving climate adaptation decisions. The goal was to learn how officials approached such decisions, how they would go about making them…and why.

The core question is complex because of the vast sea of uncertainty about the extent of future climate impacts. Predictions in a 2010 World Bank report on the Economics of Adaptation to Climate Change suggest that, between now and 2050, annual rainfall in the country could fall by as much as 60% or increase by as much as 49%.
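A rainfall range that wide is exactly the kind of deep uncertainty that robust decision methods are built for. Below is a toy sketch of one such method, minimax regret: pick the option whose worst-case shortfall against hindsight is smallest. The option names, scenarios, and net-benefit figures are all invented for illustration; only the rainfall bounds echo the report.

```python
# Hypothetical minimax-regret look at a Suna-style dam choice.
# Scenarios span the report's rainfall range; all payoffs are invented.

scenarios = ["rain -60%", "rain unchanged", "rain +49%"]

# Net benefit (illustrative units) of each option under each scenario.
payoffs = {
    "large dam": [-40, 60, 120],
    "small dam": [10, 40, 50],
    "no dam":    [0, 0, 0],
}

def max_regret(option):
    """Worst-case regret: shortfall vs. the best option in each scenario."""
    return max(
        max(p[s] for p in payoffs.values()) - payoffs[option][s]
        for s in range(len(scenarios))
    )

for option in payoffs:
    print(option, max_regret(option))

# The minimax-regret choice minimizes the worst-case shortfall,
# without requiring probabilities for the rainfall scenarios.
best = min(payoffs, key=max_regret)
```

Note what the method does not require: a probability for each rainfall future. That is precisely its appeal when, as in Suna, nobody can credibly assign those probabilities.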