
It’s time to improve the ‘Value for Money’ toolkit, and not junk it

Suvojit Chattopadhyay

Photo: Julio Pantoja / World Bank

The ‘results agenda’ of donor agencies has inspired several heated debates. Value for money is one of the main tools used to further this agenda. There is significant pressure on donor development agencies to ‘demonstrate’ what they have achieved (results) and, further, to examine whether these results have been achieved in a cost-effective manner (‘value for money’). This pressure to demonstrate ‘value for money’ often leads to plenty of frustration, as those designing and implementing aid programmes struggle to strike a balance between what is easy to prove and the complex nature of an intervention designed to tackle a real-world problem.

There are several problems with the results agenda – development interventions take place in a wide range of contexts that lend themselves to comparisons on some counts and not on others. These contexts change every day, and certainly over the lifetime of a development project, and attempting a grand theory or mathematical formula to capture the entire process is nearly impossible.

Besides these technical problems, there are valid fears that focusing too closely on ‘value for money’ will lead development workers to ‘bean-count’ and to prefer interventions that can be easily measured and whose costs and benefits are easy to estimate. Some researchers have gone further and argued that an obsession with such metrics essentially forces development workers into lying about how their projects actually work.

And yet, I think this is worth persisting with. Here are three tongue-in-cheek reasons why:

  • No framework is going to perfectly capture complexity: There is no perfect framework that can fully do justice to our complex workplace. We have known this for a long time. With any framework, there are rational limits on how much complexity a person (or an organisation, for that matter) can absorb, and even that will vary with time and the context in which they operate. There will always be phases of fire-fighting when something we did not anticipate happens, or even when one of the foreseen risks (carefully “mitigated” in our risk matrices) actually comes to pass. This is not a plea to abandon complexity.
  • No system is going to eliminate the lies either: For several reasons, not least one’s level of tolerance for ‘messy’ processes on the ground, people lie in their reports. Now, ‘lie’ is a harsh word. Let’s just say they are economical with the truth; or shall we say, they exercise judgement over what they think those reading their reports need to know. While working on the ground, one may well weigh the costs and benefits of disclosing every operational challenge, completely transparently, to one’s peers, donors, or government counterparts. Is it possible to create a culture where implementers and donors can be honest with each other?
  • Good sense will prevail (or another fad will take over): There have been several occasions when a new tool or process has attracted heated reactions. Some of those tools increase control by donors; some lend themselves to greater flexibility in operational decisions on the ground. Ultimately, the useful tools get ‘mainstreamed’. From efficient finance and accounting systems to field-level technological advancements, the development sector (as with any other sector) will evolve, integrate what is useful, and discard what hinders progress.

But seriously – this is also a demand for a commitment to continuously improve our toolkit. We haven’t quite cracked it yet. The recent ICAI review of ‘value for money’ in DFID programmes is very instructive in this regard. For one, it shows that while ‘value for money’ has been an important agenda in the sector, the quality of analysis remains quite rudimentary. Particularly when examined at the portfolio level, or when faced with capturing the inherent complexity of the contexts within which programmes are implemented, the current frameworks fall well short.

An important hurdle in the development of the ‘value for money’ toolkit is acceptance and ownership by the people involved: those commissioning the analysis, those undertaking it, and those providing data from the ground. I have come across several instances where people plead that certain types of analyses (and comparisons) just cannot be carried out because their projects or contexts are unique. Sometimes, they simply do not want to wade into the uncomfortable questions that even a basic analysis raises – and while I am fully respectful of contextual differences, I think we are quite a long way from that threshold when it comes to ‘value for money’ analysis.

Within organisations, and amongst organisations in a delivery chain, it is therefore vital to develop systems that facilitate and record learning. ‘Value for money’ analyses work best when they are used to initiate a conversation about variations in the computed metrics. The metric is not sacrosanct; achieving a consistent basis for comparison is, as is the ability to have a conversation that goes beyond the cost of “inputs” in projects. The most important argument for persisting with and improving upon existing value for money assessments should be that they facilitate learning.

There is no single correct approach – and different donors already follow different frameworks. As we have learnt with evaluation techniques, for example, the best-fit solution depends on the circumstances, including the nature of the intervention and the context in which it is being implemented. Can 2012 Zambia be compared with 2016 Kenya? Can a primary education intervention in both of those countries be compared with one in 2015 India? This should be possible, assuming unit costs of inputs and the monetised values of outputs and outcomes are comparable. At least in these cases, financial comparisons can then account for the variations in contexts – what is missing sometimes is a large-enough data bank.
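To make the arithmetic behind such comparisons concrete, here is a minimal sketch in Python. Every figure in it is a made-up placeholder, not real programme data, and the conversion factors are hypothetical: the point is only to show that spend must be put on a common price basis before unit costs across countries and years mean anything.

```python
# Illustrative cost-per-outcome comparison across contexts.
# All figures below are hypothetical placeholders, not real data.

programmes = [
    # (label, spend in local currency, hypothetical conversion rate to
    #  a common currency, hypothetical deflator to a common base year,
    #  outcomes achieved, e.g. children reaching a literacy benchmark)
    ("Zambia 2012",   9_000_000, 0.25,  1.08, 12_000),
    ("Kenya 2016",   14_000_000, 0.022, 1.02, 15_500),
    ("India 2015",  120_000_000, 0.058, 1.04, 90_000),
]

for label, spend, fx_rate, deflator, outcomes in programmes:
    # Normalise each programme's spend to the same currency and base
    # year, so that the resulting unit costs are actually comparable.
    comparable_spend = spend * fx_rate * deflator
    print(f"{label}: {comparable_spend / outcomes:,.2f} per outcome")
```

The printed numbers are beside the point; what matters is that the assumptions (exchange rates, base year, what counts as an ‘outcome’) are explicit, and therefore open to exactly the kind of conversation the metric is meant to start.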

With market systems interventions or institutional reform, where does one draw the boundaries of attribution? Instead of saying these boundaries are impossible to determine, why not negotiate an acceptable basis for assessing ‘value for money’ at the outset, and then hold implementers to what was agreed? Such a solution will vary from one project to another, as well as with the level of ambition of the donors and implementers. But it will at least ensure that the assumptions behind the assessment methodology are well thought out and tailored to the context.

The ‘value for money’ agenda may sometimes seem counterproductive, and we must strive to create room for manoeuvre while remaining committed to improving our toolkit. This will require multiple stakeholders in the development sector to collaborate and develop trust along the delivery chain.

Development sector agencies face real-world trade-offs. They should worry about the impact they are able to achieve with the funds at their disposal. They also need to be able to analyse opportunity costs inherent in every fund allocation decision. This is exactly why the question “is it better than giving cash?” has such seductive value. There is nothing like ‘money’ to ground the trade-offs that donors and implementing agencies face – when used well, a ‘value for money’ analysis can do exactly that.


This blog post originally appeared on Suvojit's blog.
