
Enforcing Accountability in Decision-Making


A recent episode reminded us of why we began this series of posts, of which this is the last. We recently saw our guiding scenario for this series play out: a donor was funding a pilot project accompanied by a rigorous evaluation, which was intended to inform further funding decisions.

In this specific episode, a group of donors discussed an on-going pilot programme in Country X, part of which was evaluated using a randomized controlled trial. The full results and analyses were not yet in; the preliminary results, marginally significant, suggested that there ought to be a larger pilot taking into account lessons learnt.

Along with X’s government, the donors decided to scale up. The donors secured a significant funding contribution from the Government of X — before the evaluation yielded results. Indeed, securing government funding for the scale-up and a few innovations in the operational model had already given this project a sort of superstar status, in the eyes of both the donors and the government. It appeared the donors in question had committed to the government that the pilot would be scaled up before the results were in. Moreover, a little inquiry revealed that the donors did not have clear benchmarks or decision criteria going into the pilot about key impacts and magnitudes — that is, the types of evidence and results — that would inform whether to take the project forward.

There was evidence (or at least it was on the way) and there was a decision, but it is not clear how the two were linked or how one informed the other.

Reminder: scenario

We started this series of posts by admitting the limited role evidence plays in decision-making — even when an agency commissions evidence specifically to inform a decision. The above episode illustrates this, as well as the complex and, sometimes, messy way that (some) agencies, like (some) donors, approach decision-making. We have suggested that, given that resources to improve welfare are scarcer than needs, this approach to decision-making is troubling at best and irresponsible at worst. Note that the lack of expectations and of a plan for decision-making is as troublesome as the limited use of outcome and impact evidence.

In response to this type of decision-making, we have had two guiding goals in this series of posts. First, are there ways to design evaluations that will make the resultant evidence more usable and useful (read posts one & two)? Second, given all the factors that influence decisions, including evidence, can the decision-making process be made more fair and consistent across time and space?

To address the second question, we have drawn primarily on the work of Norm Daniels, to consider whether and how decisions can be made through a fair, deliberative process that, under certain conditions, can generate outcomes that a wide range of stakeholders can accept as ‘fair’.

Daniels suggests that by meeting four key criteria, these “certain conditions” for fair deliberation can be satisfied — including deliberation about which programs to scale after receiving rigorous evidence and other forms of politically relevant feedback.

Closing the loop: enforceability

So far, we have reviewed three of these conditions: relevant reasons, publicity, and revisibility. In this post, we examine the final condition, enforceability (or regulation).

Meeting the enforceability criterion means providing mechanisms to ensure that the processes set by the other criteria are adhered to. This is, of course, easier said than done. In particular, it is unclear who should do the enforcing.*

We identify two key questions about enforcement:

First, should enforcement be external to or strictly internal to the funding and decision-making agency?

Second, should enforcement rely on top-down or bottom-up mechanisms?

Underlying these questions is a more basic, normative question: in which country should these mechanisms reside — the donor’s or the recipient’s? The difficulty of answering this question is compounded by the fact that many donors are not nation-states.

We don’t have clear answers to these questions, which themselves likely need to be subjected to a fair, deliberative process. Here, we lay out some of our own internal debates on them, in hopes that they point to topics for productive conversation.

Should enforcement of agency decision making be internal or external to the agency?

This is a normative question, but it links with a positive one: can we rely on donors to self-regulate when it comes to adopted decision-making criteria and transparency commitments?

Internal self-regulation is the most common model we see around us, in the form of internal commitments such as multi-year strategies, requests for funds made to the treasury, etc. In addition, most agencies have an internal but independent ‘results’ or ‘evaluation’ cell, intended to make sure that M&E is carried out. In the case of DFID, for instance, the Independent Commission for Aid Impact (ICAI) seems to have a significant impact on DFID’s policies and programming. It also empowers the British parliament to hold DFID to account over a variety of funding decisions, as well as future strategy.

Outside the agency, oversight and enforcement of achieving relevancy, transparency, and revisibility could come from multiple sources. From above, it could be a multi-lateral agency/agreement or a global INGO, similar to Publish What You Pay(?). Laterally, the government of the country in which a programme is being piloted could play an enforcing role. Finally, oversight and enforcement could come from below, through citizens or civil society organisations, in both donor and recipient countries. This brings us to our next question.

Should enforcement come top-down or bottom-up?

While this question could be asked about internal agency functioning and hierarchy, we focus on the potential for external enforcement from one direction or the other. And, again, the question is a normative one, but there are positive aspects related to the capacity to monitor and the capacity to enforce.

Enforcement from ‘above’ could come through multilateral agencies or through multi- or bi-lateral agreements. One possible external mechanism is for more than one donor to come together to make a conditional funding pledge to a programme — contingent on the achievement of pre-determined targets. However, as we infer from the opening example, it is important that such commitments be based on a clear vision of success, not just on political imperatives or project visibility.

Enforcement from below can come from citizens in donor and/or recipient countries, including through CSOs and the media. One way to introduce bottom-up pressure is for donors to adhere to the steps we have covered in our previous posts — agreement on relevant reasons, transparency, and revisibility — and thereby involve a variety of external stakeholders, including the media, citizens, and CSOs. These can contribute to a mechanism in which there is pressure from the ground on donors to live up to their own commitments.

The media is obviously a very important player in these times. Extensive media reporting of donor commitments is a strong mechanism for informing and involving citizens — in both donor and recipient countries; the media is also relevant to helping citizens understand limits and how decisions are made in the face of resource constraints.

Our gut feeling, though, is that in the current system of global aid and development, the most workable approach will probably include a mixture of formal top-down and informal bottom-up pressure. From a country-ownership point of view, we feel that recipient-country decision-makers should have a (strong) role to play here (more than they seem to have currently), as should citizens in those countries.

However, bilateral donors will probably continue to be more accountable to their own citizens (directly and via representative legislatures) and, therefore, a key task is to consider how to bolster their capacity to ensure ‘accountability for reasonableness’ in the use of evidence and in decision-making more generally. At the same time, multilateral donors may have more flexibility to consider other means of enforcement, since they don’t have a narrow constituency of citizens and politicians to be answerable to. However, we worry that the prominent multilateral agencies we know are also bloated bureaucracies with unclear chains of accountability (as well as a typical instinct for self-perpetuation).

While there is no clear blueprint for moving forward, we hope the above debate has gone a small step towards asking the right questions.

In sum

In this final post, we have considered how to enforce decision-making and priority-setting processes that are ideally informed by rigorous and relevant evidence but also, more importantly, in line with principles of fairness and accountability for reasonableness. These were not fully evident in the episode that opened this post.

Through this series of posts, we have considered how planning for decision-making can help in the production of more useful evidence and can set up processes to make fairer decisions. For the latter, we have relied on Norm Daniels’s framework for ensuring ‘accountability for reasonableness’ in decision-making. This is, of course, only one guide to decision-making, but one that we have found useful in broaching questions of not only how decisions are made but how they should be made.

In it, Daniels proposes that deliberative processes should be based on relevant reasons and commitments to transparency and revisibility that are set ex ante to the decision-point. We have focused specifically on decision-making relating to continuing, scaling, altering, or scrapping pilot programs, particularly those for which putatively informative evidence has been commissioned.

We hope that through these posts, we have made a case for designing evaluations to generate evidence useful for decision-making, as well as for facilitating fair, deliberative decision-making processes that can take account of the evidence generated. At the very least, we hope that evaluators will recognise the importance of a fair process and will not stymie such processes in the pursuit of the perfect research design.

***
*In Daniels’s work, which primarily focuses on national or large private health insurance plans, the regulative role of the state is clear. In cases of global development, involving several states and agencies, governance and regulation become less clear. Noting this lack of clarity in global governance is hardly a new point; however, the idea of needing to enforce the conditions of fair processes and accountability for reasonableness provides a concrete example of the problem.


This post first appeared on the authors' blogs: Heather Lanthorn and Suvojit Chattopadhyay

Photo Credit: Craige Moore via Flickr, available here
