These past weeks I’ve visited several southern African nations to assist ongoing evaluations of health sector pay-for-performance reforms. It’s been a whirlwind of government meetings, field trips, and stretches of data crunching. We’ve made good progress and also hit roadblocks – in other words, business as usual in this line of work. One qualitative data point has stayed with me throughout these weeks – the paraphrased words of one clinic worker: “I like this new program because it makes me feel that the people in charge of the system care about us.”
Jed Friedman's blog
Over the past few months I’ve read several research proposals, and engaged in several discussions, that touch on the same question: how to use the spatial variation in a program’s intensity to evaluate its causal impact. Since these proposals and conversations all cited the same fairly recent paper by Markus Frolich and Michael Lechner, I eagerly sat down to read it.
These past few weeks I’ve been immersed in reviews of health systems research proposals, and it’s fascinating to see the common themes that emerge from each round, as well as the literature cited to justify these themes as worthy of funding.
In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion that I share at times). A recent working paper by Gary Solon, Steven Haider, and Jeffrey Wooldridge aims at the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.
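One way to see what is at stake in the weighting decision is a small simulation. The sketch below (all numbers and names are hypothetical, not drawn from the Solon, Haider, and Wooldridge paper) simulates a population where the regression slope differs across two strata and the sample over-draws the smaller stratum: the unweighted regression recovers the sample-average slope, while the inverse-probability-weighted regression recovers the population-average slope.

```python
import numpy as np

# Hypothetical illustration: with heterogeneous effects and unequal sampling
# rates, weighted and unweighted regressions estimate different quantities.
rng = np.random.default_rng(0)

# Two strata: stratum 0 is 80% of the population, stratum 1 is 20%,
# but the sample draws them 50/50.
n = 10_000
stratum = rng.integers(0, 2, size=n)          # 50/50 in the sample
pop_share = np.where(stratum == 0, 0.8, 0.2)  # population shares
w = pop_share / 0.5                           # inverse-probability weights

x = rng.normal(size=n)
slope = np.where(stratum == 0, 1.0, 3.0)      # heterogeneous slope by stratum
y = slope * x + rng.normal(size=n)

def least_squares(X, y, w=None):
    """Closed-form (weighted) least squares: solve (X'WX) b = X'Wy."""
    if w is None:
        w = np.ones(len(y))
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

X = np.column_stack([np.ones(n), x])
b_unweighted = least_squares(X, y)
b_weighted = least_squares(X, y, w)

# Unweighted slope ~ average of the stratum slopes in the sample (about 2.0);
# weighted slope ~ population-average slope (0.8*1.0 + 0.2*3.0 = 1.4).
print(b_unweighted[1], b_weighted[1])
```

Neither estimate is "wrong" – they answer different questions, which is precisely the distinction the working paper urges practitioners to think through before reaching for weights.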
Worker training and skill upgrading programs are a major focus in impact evaluation work. The design of such training programs implicitly involves the identification of the activities that a worker needs to accomplish in a job. Only then can the program offer training in the set of skills required to complete these identified tasks.
In recent conversations on research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect – the change could be due to simple regression to the mean.
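Regression to the mean is easy to demonstrate with a toy simulation (all numbers below are made up for the sketch). Patients enroll in a study only when a noisy symptom score crosses a threshold; because the measured score is true severity plus transient noise, the enrolled group improves at follow-up even with no treatment and no placebo at all.

```python
import numpy as np

# Minimal sketch, hypothetical numbers: selection on a high baseline score
# produces apparent "improvement" with no intervention whatsoever.
rng = np.random.default_rng(1)

n = 100_000
stable = rng.normal(50, 10, size=n)            # person-level true severity
baseline = stable + rng.normal(0, 10, size=n)  # noisy score at enrollment
followup = stable + rng.normal(0, 10, size=n)  # fresh noise draw, no treatment

# Enroll only those whose baseline score crossed a clinical threshold.
enrolled = baseline > 65
mean_change = (followup[enrolled] - baseline[enrolled]).mean()

print(mean_change)  # negative: scores fall on average, purely mechanically
```

The enrolled group's baseline scores are high partly because of transient noise, which does not recur at follow-up – so a control group can "get better" without any placebo effect operating.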
Empirical evidence on the effectiveness of productivity incentives in the public sector is sparse. However, donor enthusiasm for this general approach is growing, and certain lessons are emerging.
A key determinant of good health is the quality of the care that sick patients receive, and donor attention in the health sector is increasingly focused on quality-of-care investments such as enhanced training and supervision of health providers. This interest in the quality of care will only increase in the coming years as the epidemiological transition shifts the relative disease burden towards chronic illnesses. Why? Because proper management of chronic illness requires repeated high-quality interactions with the health system.
The demand and expectation for concrete policy learning from impact evaluation are high. Quite often we want to know more than the basic question that IE addresses – “what is the impact of intervention X on outcome Y in setting Z?” We also want to know the why and the how behind these observed impacts. But these why and how questions can be particularly challenging, in part because they are often not explicitly incorporated in the IE design.