
Decision-Making

Enforcing Accountability in Decision-Making

By Heather Lanthorn

A recent episode reminded us why we began this series of posts, of which this is the last. We recently saw our guiding scenario for the series play out: a donor was funding a pilot project accompanied by a rigorous evaluation, which was intended to inform further funding decisions.

In this specific episode, a group of donors discussed an ongoing pilot programme in Country X, part of which was evaluated using a randomized controlled trial. The full results and analyses were not yet in; the preliminary results, marginally significant, suggested that there ought to be a larger pilot taking into account lessons learnt.

Along with X’s government, the donors decided to scale up. The donors secured a significant funding contribution from the Government of X before the evaluation yielded results. Indeed, securing government funding for the scale-up, along with a few innovations in the operational model, had already given this project a sort of superstar status in the eyes of both the donors and the government. It appeared the donors in question had committed to the government that the pilot would be scaled up before the results were in. Moreover, a little inquiry revealed that the donors did not go into the pilot with clear benchmarks or decision criteria about the key impacts and magnitudes (that is, the types of evidence and results) that would inform whether to take the project forward.

There was evidence (or at least it was on the way) and there was a decision, but it was not clear how the two were linked or how one informed the other.

Allowing ‘Revisability’ in Decision-Making

By Heather Lanthorn

Throughout this series of posts (1, 2, 3, 4), we have considered two main issues. First, how can evidence and evaluation be shaped to be more useful - that is, directly usable - in guiding decision-makers to initiate, modify, scale up or drop a program? Or, as recently pointed out by Jeff Hammer, how can we better evaluate opportunity costs between programs to aid decision-making? Second, given that evidence will only ever be part of a policy or programmatic decision, how can we ensure that decisions are made (and are perceived to be made) fairly?

For such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. If the four included criteria are met, Daniels argues, it brings legitimacy to deliberative processes and, he further argues, consequent fairness to the decision and coherence to decisions over time.

The first two criteria set up the third: first, decision-makers agree ex ante to constrain themselves in deliberation to relevant reasons (determined by stakeholders) and, second, they make public the grounds for a decision after the deliberation. These first two, we argue, can aid organizational learning and coherence in decision-making over time by setting and using precedent - an issue that has been bopping around the blogosphere this week.

'Going Public' with Decision-Making

By Heather Lanthorn

In our last post, we discussed how establishing “relevant reasons” for decision-making ex ante may enhance the legitimacy and fairness of deliberations on resource allocation. We also highlighted that setting relevant decision-making criteria can inform evaluation design by clarifying what evidence needs to be collected.

We specifically focus on the scenario of an agency deciding whether to sustain, scale or shut down a given programme after piloting it with an accompanying evaluation — commissioned explicitly to inform that decision. Our key foci are how to make evidence useful in informing decisions and how, recognizing that evidence plays only a minor role in decision-making, to ensure that decision-making is done fairly.

For such assurance, we primarily rely on Daniels’ framework for promoting “accountability for reasonableness” (A4R) among decision-makers. If the four included criteria are met, Daniels argues, it will bring legitimacy to deliberations and, he further argues, consequent fairness to the decision.

In this post, we continue with the second criterion to ensure A4R: the publicity of decisions taken drawing on the first criterion, relevant reasons. We consider why transparency – that is, making decision criteria public – enhances the fairness and coherence of those decisions. We also consider what ‘going public’ means for learning.

Have Evidence, Will… Um, Erm (2 of 2)

By Heather Lanthorn

This is the second in a series of posts with Suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind is unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the “quality” or “rigor” of evidence by how well it addresses the questions and purposes relevant to the decision, using the most appropriate tools and timing to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on the evidence.

Quote of the Week: Barack Obama

By Sina Odugbemi

“Nothing comes to my desk that is perfectly solvable. Otherwise someone else would have solved it. So you wind up dealing with probabilities. Any given decision you make you’ll wind up with a 30 to 40 percent chance that it isn’t going to work. You have to own that and feel comfortable with the way you made that decision. You can’t be paralyzed by the fact that it might not work out.”

- Barack Obama, President of the United States of America

As quoted in Vanity Fair, October 2012, “Obama’s Way,” by Michael Lewis

Should CSOs Have a Seat at the Table?

By John Garrison

The World Bank has experimented with different approaches to including civil society organizations (CSOs) in its decision-making processes over the years. These have varied from regular policy dialogue with CSOs through the Bank–NGO Committee in the 1980s and 1990s, to establishing CSO advisory committees in several Bank units during the 2000s. Currently, two of these initiatives stand out: the Bank’s Climate Investment Funds have invited 19 CSO representatives (chosen competitively through online voting) to serve as ‘active observers’ on their five Committees and Sub-Committees; and the Bank’s Health Unit has established a CSO 'consultative group' of 18 invited CSO leaders to advise the Bank on its health, nutrition, and population agenda.

Why Sound Technical Solutions Are Not Enough: Part I

Recently I was invited to deliver the XI Raushni Deshpande Oration at the Lady Irwin College in New Delhi, India. This blog is a summary of, and a reflection on, that presentation. As can be inferred from the title, the focus is on why so many development initiatives have failed in the past and why many are still failing in the present. Why, after all these years, after all the money poured in, all the construction undertaken and all the resources dedicated to addressing this issue, are latrines still not being used in many places? Or why are they used, but not for their intended purpose? And why are bed nets aimed at preventing malaria not adopted even when they are easily available? Many more such ‘whys’ could be added to the list.

Participatory Video: A Tool for Good Governance?

By Johanna Martinsson


The use of relevant and credible evidence from the ground is crucial in strengthening arguments and incentives for reform.  The International Campaign to Ban Landmines, for example, was successful in part because of the evidence gathered and presented by experts with practical experience from conflict-torn societies.  Forging strong ties with local actors and ensuring inclusive representation in coalitions are crucial factors for successful campaigns.

To this point, Transparency International (TI), a global coalition to fight corruption, recently introduced Participatory Video (PV) as part of its program on Poverty and Corruption in Africa. The introduction of PV is a first for TI, and it is used as a tool to engage and partner with the poor in fighting corruption. In collaboration with InsightShare, a leading PV company, TI’s African National Chapters have started training local communities to create their own films, capturing authentic stories about corruption and how it affects their daily lives. Alfred Bridi discusses his experience of the training process in Uganda and has made a short film (see above) to illustrate the process and the enthusiasm among the participants.

The “State-Sponsored” Public Sphere

By Darshana Patel

In India’s 2 million villages, public meetings at the village level called Gram Sabhas (GSs) have provided a structured, institutionalized space for dialogue between the local government and its citizens.  In a recently released paper on the topic, Vijayendra Rao and Paromita Sanyal have coined these GSs as “state-sponsored” public sphere.  In fact, these meetings are mandated by national legislation. 

In India, these public meetings not only offer a space for dialogue and feedback between citizens and local power holders; they also pair it with real decision-making on how to allocate local resources to beneficiaries of public programs. This is the most striking feature of the GSs. While the government provides data on families living below the poverty line who could be eligible for local resources, the GSs are required to have these lists ratified by those attending the meeting. Citizens can directly challenge the data in “a forum where public discourse shapes the meaning of poverty, discrimination and affirmative action.”