
Who's Afraid of Administrative Data? Why administrative data can be faster, cheaper and sometimes better

Guest Post by Laura Rawlings

In talking about the importance of generating evidence for policymaking, we sometimes neglect to talk about the cost of generating that evidence -- not to mention the years it can take. Impact evaluations are critical, but most are expensive, time-consuming and episodic. Policymakers increasingly rely on evidence to make sound decisions, but they want answers within a year or at most two—and their budgets for evaluation are often limited. As the Bank moves forcefully into impact evaluations, the question is how to make them not only effective but also more accessible.
Administrative data is one solution, and there are a number of benefits to using it. By relying on regularly collected microdata, researchers can work with policymakers to run trials, generating evidence and answering questions quickly. Using administrative data can save hundreds of thousands of dollars over the cost of running the surveys needed to collect primary data -- the single biggest budget item in most impact evaluations.
The benefits go on: The quality, as well as frequency, of administrative data collection is continuing to improve. Countries have databases tracking not only inputs and costs, but outputs and even outcomes. Quality data are now available on everything from health indicators like vaccination rates to student attendance and test scores—and information can often be linked across databases with unique IDs, which gives us a treasure chest of information. Indeed, “big data” is a buzzword these days, and as we move forward into evidence building, it’s important to realize that “big data,” when used properly, can also mean “better data”—more frequent, timely, and less costly.
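As a minimal sketch of the kind of linking described above: when two agencies key their records to the same unique ID, their databases can be joined without any new data collection. The file names, column names, and indicators below are illustrative assumptions only, not a description of any particular country's systems.

```python
# Illustrative sketch (hypothetical data): linking two administrative
# databases on a shared unique ID. File and column names are assumptions.
import pandas as pd

# Health registry: one row per child, with vaccination status
health = pd.read_csv("health_registry.csv")        # columns: child_id, vaccinated

# Education registry: attendance and test scores keyed by the same ID
education = pd.read_csv("education_registry.csv")  # columns: child_id, attendance_rate, test_score

# The shared ID lets the two systems be combined directly
linked = health.merge(education, on="child_id", how="inner")

# Simple descriptive check once the records are linked
print(linked.groupby("vaccinated")["test_score"].mean())
```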
Administrative data is particularly beneficial in helping test program design alternatives. Alternative options can be tested and assessed to see what route is most effective—and cost-effective.
Of course there are drawbacks as well. Administrative data can only answer questions to which the data are suited, and this rarely includes in-depth analysis of areas such as behavioral changes or consumption patterns. A recent impact evaluation of the long-term effects of a conditional cash transfer program in Colombia, for example, provided rich information about graduation rates and achievement test scores—but little in the way of information about household spending or the use of health services. And the information provided is usually relevant to individual beneficiaries of a specific program—rather than to the household as a whole or to comparisons between beneficiaries and non-beneficiaries.
Administrative data are also often of questionable quality: institutional capacity varies across the agencies that gather and manage the data, and protocols for ensuring data quality are often not in place. Another drawback is accessibility: administrative data may not be publicly available or organized in a way that is easily analyzed.
Clearly, researchers need to evaluate the usefulness of administrative data on a case-by-case basis. Some researchers at the World Bank who have weighed the pros and cons have embraced it as an important tool, as we saw in the impact evaluation of the Colombia program, which relied exclusively on administrative data. This included census data, baseline data from a previous impact evaluation, and the program database itself, as well as information -- registration numbers and results -- from a national standardized test. Linking all these data gave researchers answers in just six months at about one-fifth of the cost of an impact evaluation that would require traditional primary data collection. An impact evaluation looking at the results of Plan Nacer, a results-based financing program for women and children in Argentina, has done largely the same thing.
There are numerous examples outside the World Bank as well. David Halpern, director of the UK's Behavioural Insights Team -- commonly called "The Nudge Unit" for its work in encouraging changes in behavior -- routinely relies on administrative data. Together with his team, Halpern, who was at the Bank in early May to talk about their work, has discovered ways to encourage people to pay their court fines (send a text message with the person's name, but not the amount they owe) and to reduce paperwork fraud (put the signature box at the beginning, rather than the end, of the form). The research they are leading on changing behaviors relies on data that the government already has—producing results that are reliable, affordable and quick.
How can we move ahead?  First, we need to learn to value administrative data – it may not get you a publication in a lofty journal, but it can play a powerful role in improving program performance.  Second, we have to help our clients improve the quality and availability of administrative data.  Third, we need a few more good examples of how good impact evaluations can be done with administrative data.  Moving to a more deliberate use of administrative data will take effort and patience, but the potential benefits make it worth prioritizing.
 

Comments

Essentially, Monitoring and Evaluation (M&E) is about learning and finding out what is or is not working in a project or programme set-up. Depending only on population-based surveys to keep projects on track is a notion that even affects the development of M&E frameworks. Many projects fall prey to this approach - confusing impact with output - creating a vicious cycle of expensive data collection exercises.

It is incredible that in the era of “Open Data,” administrative data is not getting its due place!

http://goo.gl/4PKgL

Submitted by Laura Rawlings on

Indeed this was a lot of my thinking in writing the blog. We need to take better advantage of administrative data -- and work to ensure its quality. Thanks for the comment. Laura

Submitted by Melanie Gulliver on

Yes!!! I thought that when I read your first sentence.. having just completed a course in Development Studies (and being a chartered accountant) I kept questioning how much money was being spent on gathering all this evidence into policy that could have been actually spent on improving the situation in developing countries.
Although I like the fact that you say administrative data is useful, as an accountant I can assure you that any forms are filled out often with not too much attention to accuracy.. often as time is short it is more based on what was filled in last time.. this sort of collection and then analysis also involves costs.. I really question the usefulness of collecting any information at all.. I think the money should just be spent where it is most needed and the benefits will follow.. better to spend money on training people than on collecting results I say. Do an analysis once every 10 years if you must, but trust that everyone wants to lead better, healthier lives and that they will apply the lessons they learn from any training.

The key challenge facing policy makers is to narrow the gap between the affluent and the poorest. In adult education, administrative data can tell us lots about the people the system reaches, but you need large representative household surveys, with disaggregatable data and questions about participation in learning, to identify the impact on disabled people, ethnic and linguistic minorities, or the poorest. With good questions, survey data can highlight the balance of people's engagement in formal education, at work (whether in the waged or informal economy), and in community settings. That then helps target scarce resources.

Submitted by Daniel Ticehurst on

A refreshing read given the hegemony of evaluation in the aid industry and the emergence of cockroach policies that emasculate those who manage implementation. At last, a view that explains why and how monitoring has its own purpose over and above playing second fiddle to evaluation: to improve, rather than simply measure and report on, performance. Done well, monitoring, through generating different types of admin data, can and should give primacy to helping those who manage, not just help donors account for, change. Such a purpose is served not so much through measuring and reporting (on indicators and targets) as through learning about the motivations, preferences and behaviours of those whom aid is supposed to benefit (more often than not the assumptions). It is the rigour and empathy with which these processes are tied to monitoring that define very real and practical opportunities for those who manage; that is, as opposed to procrastinating about how and in what ways surveys can isolate variables and people in the quest for rigour in ways defined by statisticians. Dear Laura, great post.
