
Notes from the field: Making the most of monitoring data


I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and this work has really brought home the potential and pitfalls of working with programs’ monitoring data.

In many evaluations, the promise is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program’s clients). However, in most cases a combination of monitoring and survey data is required.

So what’s monitoring data good for? The most obvious thing is that it gives you another take (besides the survey data) on who participated. My general experience with surveys is that it is hard to nail down all of the names by which people know a program (for example, for some strange reason very few villagers could identify the Canadian/World Bank/Italian cooperation Food Security Program). Thus, program data gives you another measure of take-up.

It also provides a useful check on what was actually done, particularly when (as is inevitable) program implementation deviates from the plan. Parts of the program may or may not have been implemented in different communities or for different populations, and with monitoring data you can make more sense of your results (be it the mean effect or heterogeneity). This matters even more over time, particularly as a program gets more complicated. You can also get a sense of the intensity of participants’ interaction with a program (e.g., how many visits they made to a club, or whether they attended both days of a training). Bottom line: spending more time with the monitoring data, and maybe even spending some time supporting the design of the monitoring system (more on this in a second), really helps you get a feel for what’s actually going on in the program.

Monitoring data can also contribute more directly to your analysis. Given some plausible exogeneity in rollout, you can use monitoring data to estimate a dose response. Digging around in monitoring data might also help you find unexpected sources of exogeneity. One example of this comes from an evaluation a group of us were doing in Kenya. The field supervisor pointed out to us that the nurse at the clinic was absent a significant amount of the time, and that her work slowed down on those days. We asked her to keep track of the dates the nurse was absent, and why. Eventually, we were able to use this as an instrument. In the case of one of the evaluations I am working on this week, the monitoring data is helping us understand who participated in training with whom, and thus what kind of networks the program might have created.
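
To make the dose-response-with-an-instrument idea concrete, here is a minimal two-stage least squares sketch in Python. The variable names (visits as the monitored “dose”, nurse absences as the instrument) and the simulated numbers are purely illustrative assumptions of mine, not the data or code from any of the evaluations described here.

```python
# Minimal 2SLS sketch: a "dose" recorded in monitoring data (e.g., visits)
# instrumented by a plausibly exogenous shock (e.g., nurse absences).
# All variable names and numbers below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 500

absences = rng.poisson(3, n)                                  # instrument from monitoring data
ability = rng.normal(size=n)                                  # unobserved confounder
visits = 5 - 0.8 * absences + ability + rng.normal(size=n)    # endogenous dose
outcome = 2.0 * visits + 1.5 * ability + rng.normal(size=n)   # true dose response = 2.0

X = lambda *cols: np.column_stack([np.ones(n), *cols])        # design matrix with constant

# First stage: project the dose on the instrument.
first_stage = np.linalg.lstsq(X(absences), visits, rcond=None)[0]
visits_hat = X(absences) @ first_stage

# Second stage: regress the outcome on the fitted dose.
second_stage = np.linalg.lstsq(X(visits_hat), outcome, rcond=None)[0]
print("2SLS dose-response estimate:", second_stage[1])

# Naive OLS for comparison -- biased by the confounder.
ols = np.linalg.lstsq(X(visits), outcome, rcond=None)[0]
print("OLS estimate:", ols[1])
```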

But a lot can go wrong. First of all, what sounds like a great monitoring system before the project rolls out can look significantly less rosy in practice. For one of our super-designed monitoring systems, we found attendance sheets, some empty and some half full, muddy and trampled on the facility floors. Second, a lot of programs just don’t see the need (or have the resources) to enter all of the monitoring data into computers. And paper disappears fairly quickly. My colleague Waly Wane has a great story of being handed the very records he had been looking for inside the ministry building as a wrapper for the food he had just bought outside it. Third, all of those great fields that look so promising on the monitoring form may be filled out only a small fraction of the time by providers who cannot be bothered and/or have something more important to do. Fourth, linking the names in the monitoring data to your survey data may be an epic exercise in multi-lingual matching. We were working on this last week in Ghana, and it took a team from many different backgrounds to make the phonetic connections across the very different (and equally valid) creative spellings of names and villages. We were helped by a process we dubbed “phone a friend”, where we called some alleged program participants and survey respondents to see if they might be the same person (yes, as many know, phone numbers aren’t unique either). The ensuing phone conversations were illuminating, ranging from a better understanding of when Ghanaian women might keep their maiden names to the record-keeping practices of our program partners.
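
As a rough illustration of what that matching exercise involves, here is a small Python sketch of fuzzy name matching using only the standard library. The names, the normalization steps, and the 0.8 threshold are all hypothetical choices of mine; any real exercise would layer the manual review and “phone a friend” step described above on top of something like this.

```python
# Rough sketch of fuzzy name matching between a monitoring list and a survey
# roster. Names, normalization rules, and the 0.8 threshold are hypothetical.
import difflib
import unicodedata

def normalize(name: str) -> str:
    """Lowercase, strip accents, and collapse whitespace before comparing."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(ch for ch in name if not unicodedata.combining(ch))
    return " ".join(name.lower().split())

monitoring = ["Abena Owusu", "Kwame Mensah", "Akosua Boateng"]
survey = ["Abina Owusuu", "Quame Mensa", "Ama Serwaa"]

for m_name in monitoring:
    best, best_score = None, 0.0
    for s_name in survey:
        score = difflib.SequenceMatcher(None, normalize(m_name), normalize(s_name)).ratio()
        if score > best_score:
            best, best_score = s_name, score
    # Flag anything below the (arbitrary) threshold for manual review.
    status = "match" if best_score >= 0.8 else "review"
    print(f"{m_name!r} -> {best!r} ({best_score:.2f}, {status})")
```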

So, as my colleague Elena Bardasi pointed out to me yesterday morning, if you are going to use the program monitoring data, the key is to engage, and engage early. You also have to keep close tabs on the system as it is rolled out and as data start to roll in. Elena pointed out that in some cases the impact evaluation team is going to have to devote a significant amount of hands-on time to building up the program’s capacity to collect and process monitoring data. Moreover, it helps to have a backup plan: while we managed to keep the same ID numbers in the program and impact evaluation databases here in Tanzania, my experiences with matching in Kenya have taught me to have a plan B, C, and D for matching across data sources.

Those are some initial thoughts. Does anyone else have experiences and lessons to share?

 


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist’s Office
