Guest Post by Laura Rawlings
In talking about the importance of generating evidence for policy making, we sometimes neglect to talk about the cost of generating that evidence, not to mention the years it can take. Impact evaluations are critical, but most are expensive, time-consuming and episodic. Policymakers increasingly rely on evidence to make sound decisions, but they want answers within a year or at most two, and their budgets for evaluation are often limited. As the Bank moves forcefully into impact evaluations, the question is how to make them not only effective but also more accessible.
Administrative data is one solution, and there are a number of benefits to using it. By relying on regularly collected microdata, researchers can work with policymakers to run trials, generating evidence and answering questions quickly. Using administrative data can save hundreds of thousands of dollars over the cost of running the surveys needed to collect primary data, which is the single biggest budget item in most impact evaluations.
The benefits go on: the quality, as well as the frequency, of administrative data collection continues to improve. Countries have databases tracking not only inputs and costs but also outputs and even outcomes. Quality data are now available on everything from health indicators like vaccination rates to student attendance and test scores, and information can often be linked across databases with unique IDs, giving us a treasure trove of information. Indeed, “big data” is a buzzword these days, and as we move forward into evidence building, it’s important to realize that “big data,” when used properly, can also mean “better data”: more frequent, more timely, and less costly.
Administrative data is particularly useful for testing program design alternatives: different options can be tried and assessed to see which route is most effective, and most cost-effective.
Of course, there are drawbacks as well. Administrative data can only answer questions to which the data are suited, and this rarely includes in-depth analysis of areas such as behavioral change or consumption patterns. A recent impact evaluation of the long-term effects of a conditional cash transfer program in Colombia, for example, provided rich information about graduation rates and achievement test scores, but little about household spending or the use of health services. And the information is usually collected on individual beneficiaries of a specific program, rather than at the household level or on both beneficiaries and non-beneficiaries.
Administrative data are also often of questionable quality: institutional capacity varies across the agencies that gather and manage the data, and protocols for ensuring data quality are often not in place. Another drawback is accessibility: administrative data may not be publicly available or organized in a way that is easy to analyze.
Clearly, researchers need to evaluate the usefulness of administrative data on a case-by-case basis. Some researchers at the World Bank who have weighed the pros and cons have embraced it as an important tool, as we saw in the impact evaluation of the Colombia program, which relied exclusively on administrative data. This included census data, baseline data from a previous impact evaluation, and the program database itself, as well as information (registration numbers and results) from a national standardized test. Linking all these data gave researchers answers in just six months, at about one-fifth the cost of an impact evaluation requiring traditional primary data collection. An impact evaluation looking at the results of Plan Nacer, a results-based financing program for women and children in Argentina, has done largely the same thing.
There are numerous examples outside the World Bank as well. David Halpern, director of the UK's Behavioural Insights Team, commonly called "The Nudge Unit" for its work in encouraging behavioral change, routinely relies on administrative data. Together with his team, Halpern, who was at the Bank in early May to talk about their work, has discovered ways to encourage people to pay their court fines (send a text message with the person's name, but not the amount they owe) and to reduce paperwork fraud (put the signature box at the beginning, rather than the end, of the form). The research they are leading on changing behaviors relies on data the government already has, producing results that are reliable, affordable and quick.
How can we move ahead? First, we need to learn to value administrative data: it may not get you a publication in a lofty journal, but it can play a powerful role in improving program performance. Second, we have to help our clients improve the quality and availability of administrative data. Third, we need more strong examples of impact evaluations done well with administrative data. Moving to a more deliberate use of administrative data will take effort and patience, but the potential benefits make it worth prioritizing.