
Moneyballing Development: A Challenge to our Collective Wisdom of Project Funding

By Tanya Gupta
The biggest promise of technology in development is, perhaps, that it can give us consistent, actionable, and reliable data on investments and results.  Somewhat shockingly, however, we in development have not capitalized on this promise the way the private sector has.  Would you invest your precious pension hoping for a return but without any reliable data on the rate of return or the riskiness of the investment?  If you had two job applicants, one a methamphetamine addict and the other with a solid work history and great references, would you give equal preference to both?  If your answer to either is no, then take a look at the field of international development and consider the following:
  • Surprising lack of consistent, reliable data on development effectiveness: Across sectoral interventions, we have no uniformly reliable data on the effectiveness of every dollar spent.  For example, of every dollar spent on infrastructure programs in sub-Saharan Africa, how many cents are effective?  Based on the same assumptions, do we have a comparable number for South East Asia?  In other words, why don't we have more data on possible development investments and their associated costs, benefits/returns, and risks?
  • Failure to look at development effectiveness evidence at the planning stage: Very few development programs examine the effectiveness evidence before selecting a particular intervention.  Say a sectoral intervention A in a particular region has a history of positive outcomes (due to attributable factors such as well-performing implementation agencies), while for another intervention B the chances of improved outcomes are uncertain.  Given roughly the same needs, why shouldn't we route funds to A instead of B at the planning stage?  Why should we give equal preference to both based purely on need?

From a casual survey of the available literature, only health and education sectoral programs appear to take a more evidence-based approach (WHO pdf file), but even this falls far short of a systematic practice that ties funds to evidence (data on effectiveness) and accounts for the likelihood of implementation success before choosing an intervention.  So who has been doing it right?  Well, interestingly enough, baseball offers one example.

In the 1990s, Billy Beane threw out years of traditional baseball wisdom on how to pick players in favor of a completely new approach.  Conventional wisdom had scouts picking players on "instinct" and "gut feeling" honed by "years of experience".  Beane needed to put together a terrific team on a limited budget and therefore could not afford the players who were obviously good; he needed to spot the players who were undervalued.  To do so, he used sabermetrics, the concept that transformed baseball: the specialized analysis of baseball through objective evidence, especially statistics that measure in-game activity (both the current and future value of a player or team).  Beane helped the Oakland Athletics become one of the most cost-effective teams in baseball.  The Athletics reached the playoffs in four consecutive seasons from 2000 through 2003, and in the 2006 MLB season they ranked 24th of 30 major league teams in player salaries yet had the 5th-best regular-season record.  Interestingly, sabermetricians don't just use data, they use evidence-based data, even when it challenges traditional measures of baseball skill.  For instance, they find that team batting average is a poor predictor of team runs scored.  The story of Billy Beane is a great story of smart, cost-effective, evidence-based management leading to outstanding results.  Such a great story, in fact, that it became the Academy Award-nominated 2011 film Moneyball, starring Brad Pitt, which was widely acclaimed by critics and audiences.

Two former White House Budget Directors recently brought Moneyball back into the picture in an Atlantic article.  Discussing federal spending, they present some striking facts: less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely.  A 2003 study found that the federal government was spending $223.5 billion on youth programs.  Of these programs, 67 promoted "character education," 89 built "self-sufficiency skills," and 97 tried to "prevent substance abuse" — with little or no coordination or knowledge-sharing among programs with similar goals.  The quality of the data was poor: programs collected little beyond basic operational data, with hardly any rigorous evaluations of how they affected participants.  One intervention discussed in the article was the Program Assessment Rating Tool (PART), introduced by the Bush administration's Office of Management and Budget in 2002 but no longer in use.

OMB used PART to determine a program's strengths and weaknesses and to evaluate overall performance.  PART consists of about 30 questions divided into four assessment areas.  The first set of questions evaluates program design and purpose for clarity and coherence.  The second looks at strategic planning and the presence of annual and long-term goals for the programs.  The third rates agency management of programs.  The fourth focuses on results that programs can report with accuracy and consistency.  PART's intention was to set clear, achievable, and measurable purposes and goals for federal agencies, serving as a complement to traditional management techniques.

Here are a few principles we could borrow from sabermetrics and PART and apply to development:

  1. Follow Results for America's lead and ensure that development organizations reserve 1 percent of program spending for evaluation — e.g., for every $99 spent on a development program in a particular sector/region, we would spend $1 making sure the program actually works.
  2. Divide development programs into sector/region "buckets" and establish PART-style evaluation mechanisms to assess the strengths and weaknesses of sectoral programs.  It is important here to distinguish between evaluation mechanisms for results that depend mainly on development organizations themselves — their internal management — and results tied to external factors such as a country's economic and political climate.  While the internal mechanisms may be easier to measure and implement, the external ones are even more important and should be prioritized.
  3. Incorporate Moneyball rules into the planning sphere.  While this assertion may be controversial, development funds should not necessarily go wherever there is need.  Rather, funds should go where there is both need and evidence that the funds are being well spent.
  4. Focus on transforming evidence on performance into reusable knowledge.  Measurement may not improve a program already in implementation, but it can improve the next similar program if the performance data is turned into organization-wide knowledge.
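To make principle 3 concrete, here is a minimal sketch of what evidence-weighted planning might look like.  All intervention names, need scores, and effectiveness figures below are invented for illustration — the point is only that ranking by need alone and ranking by need weighted by evidence of effectiveness can produce different funding priorities.

```python
# Hypothetical illustration: rank candidate interventions by expected
# effective dollars (need weighted by evidence) rather than by need alone.
# All names and numbers are made up for this sketch.

interventions = [
    # (name, need score 0-1, evidence: cents effective per dollar spent)
    ("A: infrastructure, region X", 0.9, 0.35),
    ("B: infrastructure, region Y", 0.7, 0.80),
    ("C: health, region X",         0.8, 0.60),
]

def rank_by_need(items):
    """Traditional approach: fund the greatest need first."""
    return sorted(items, key=lambda i: i[1], reverse=True)

def rank_by_evidence_weighted_need(items):
    """Moneyball-style: expected impact per dollar = need * effectiveness."""
    return sorted(items, key=lambda i: i[1] * i[2], reverse=True)

print([i[0] for i in rank_by_need(interventions)])
print([i[0] for i in rank_by_evidence_weighted_need(interventions)])
```

Under need alone, intervention A tops the list; once the (hypothetical) evidence of effectiveness is factored in, B moves ahead because far more of each dollar spent there is expected to be effective.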

World Bank President Jim Yong Kim has exhorted the development world to move toward evidence-based delivery and spoken about the importance of the data we don't yet have.  He gave the example of how we don't know how many people are forced into poverty by health expenditures in each country each year.  Moneyballing development may be the way to go.  The purpose of this blog is not to provide all the answers but to start a dialogue.  What do you think?  Can Moneyball principles be applied to development?  Should development organizations and donors start saying no to countries that need our funds but do not show evidence of good management?

Photo Credit: Intel Free Press
