These past few weeks I’ve been immersed in reviews of health systems research proposals, and it’s fascinating to see the common themes that emerge from each round, as well as the literature cited to justify those themes as worthy of funding.
In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion I share at times). A recent working paper by Gary Solon, Steven Haider, and Jeffrey Wooldridge goes straight to the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.
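To make the stakes concrete, here is a minimal simulation sketch (my own illustration, not taken from the paper; it assumes Python with numpy and statsmodels): when the true slope differs across strata and the sampling design over-represents one stratum, unweighted OLS and inverse-probability-weighted regression estimate different quantities, and only the weighted estimate targets the population-average effect.

```python
# Hypothetical illustration: unweighted vs. weighted regression when
# effects are heterogeneous and sampling rates differ across strata.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Population: two equal-sized strata with different true slopes.
n = 100_000
stratum = rng.integers(0, 2, n)
slope = np.where(stratum == 0, 1.0, 3.0)   # population-average slope = 2.0
x = rng.normal(size=n)
y = slope * x + rng.normal(size=n)

# Sampling design: stratum 1 is sampled nine times as often as stratum 0.
p = np.where(stratum == 0, 0.01, 0.09)     # selection probabilities
in_sample = rng.random(n) < p
xs, ys, ws = x[in_sample], y[in_sample], 1.0 / p[in_sample]

X = sm.add_constant(xs)
ols = sm.OLS(ys, X).fit()                  # unweighted
wls = sm.WLS(ys, X, weights=ws).fit()      # inverse-probability weighted

print(f"unweighted slope: {ols.params[1]:.2f}")  # pulled toward 3.0
print(f"weighted slope:   {wls.params[1]:.2f}")  # near the average of 2.0
```

Under exogenous sampling and a correctly specified homogeneous model, the two point estimates would instead coincide, which is one reason the weighting question has no one-size-fits-all answer.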
Worker training and skill upgrading programs are a major focus of impact evaluation work. Designing such a program implicitly requires identifying the activities a worker needs to accomplish in a job; only then can the program offer training in the skills required to complete those tasks.
In recent conversations about research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect: the change could be due to simple regression to the mean.
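A toy simulation makes the point (a hypothetical sketch in Python, with all numbers invented for illustration): if enrollment is triggered by a noisy measurement crossing a threshold, the enrolled group will look better at follow-up even though the "treatment" does literally nothing.

```python
# Hypothetical illustration of regression to the mean: patients enroll
# because a noisy baseline score looks bad, receive an inert placebo,
# and still "improve" on average, with no placebo effect simulated at all.
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
true_severity = rng.normal(50, 10, n)             # stable underlying condition
baseline = true_severity + rng.normal(0, 10, n)   # noisy first measurement
followup = true_severity + rng.normal(0, 10, n)   # noisy second measurement

# The trial enrolls only people who look sick at baseline (score > 70).
enrolled = baseline > 70
change = followup[enrolled] - baseline[enrolled]

print(f"mean baseline score:  {baseline[enrolled].mean():.1f}")
print(f"mean follow-up score: {followup[enrolled].mean():.1f}")
print(f"mean change:          {change.mean():.1f}")  # negative: scores fall
```

Identifying a genuine placebo effect therefore requires a comparison group measured on the same schedule, such as a no-treatment arm, not just a before-after change among placebo recipients.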
Empirical evidence on the effectiveness of productivity incentives in the public sector is sparse. However, donor enthusiasm for this general approach is growing, and certain lessons are emerging.
A key determinant of good health is the quality of care that sick patients receive, and donor attention in the health sector is increasingly focused on quality-of-care investments such as enhanced training and supervision of health providers. This interest will only grow in the coming years as the epidemiological transition shifts the relative disease burden toward chronic illness. Why? Because proper management of chronic illness requires repeated high-quality interactions with the health system.
The demand and expectation for concrete policy learning from impact evaluation are high. Quite often we want to know more than the basic question that IE addresses: "what is the impact of intervention X on outcome Y in setting Z". We also want to know the why and the how behind these observed impacts. But such questions, often left out of the IE design for a variety of reasons, can be particularly challenging to answer.
Well, I’m writing this on Election Day evening here in the U.S., and am rather consumed by the events at hand.
In honor of Halloween (today), let’s talk about the nightmare of swarms: millions of voracious insects devouring everything they encounter.
As empiricists, we spend a lot of time worrying about the accuracy of economic and socio-behavioral measurement. We want our data to reflect the targeted underlying truth. Unfortunately, misreporting by study subjects, whether accidental or deliberate, is a constant risk. The deliberate kind is much more difficult to deal with because it is driven by complicated and unobserved respondent intentions: either to hide sensitive information or to tell the interviewer what the respondent believes the interviewer wants to hear. Respondents who misreport information for their own benefit are said to be "gaming", and the challenge of gaming extends beyond research activities to development programs whose success depends on the accuracy of self-reported information.