Worker training and skill-upgrading programs are a major focus in impact evaluation work. Designing such a program implicitly requires identifying the activities a worker needs to accomplish on the job; only then can the program offer training in the set of skills required to complete those tasks.
Jed Friedman's blog
In recent conversations on research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect: the change could be due to simple regression to the mean.
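To see regression to the mean at work without any treatment at all, consider a minimal simulation (all numbers here are illustrative assumptions, not data from any study): subjects are enrolled because they score poorly at a noisy baseline screening, and their follow-up scores improve on average even though nothing was done to them.

```python
import random

random.seed(0)

# Each person has a stable trait; every measurement adds transient noise.
population = [random.gauss(0, 1) for _ in range(100_000)]

# Enroll whoever looks worst at a noisy baseline screening
# (e.g. the "sickest" patients) -- a common way study groups are formed.
enrolled = []
for trait in population:
    baseline = trait + random.gauss(0, 1)  # noisy baseline measurement
    if baseline < -1.5:
        enrolled.append((trait, baseline))

# Re-measure the same people later with fresh noise; no intervention occurs.
followups = [trait + random.gauss(0, 1) for trait, _ in enrolled]

mean_baseline = sum(b for _, b in enrolled) / len(enrolled)
mean_followup = sum(followups) / len(followups)

print(f"baseline mean:  {mean_baseline:.2f}")
print(f"follow-up mean: {mean_followup:.2f}")  # closer to zero than baseline
```

Because enrollment keys on the noisy baseline score, the enrolled group's expected follow-up score sits closer to the population mean, mimicking a "placebo effect" with no placebo mechanism at all.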
Empirical evidence on the effectiveness of productivity incentives in the public sector is sparse. However, donor enthusiasm for this general approach is growing, and certain lessons are emerging.
A key determinant of good health is the quality of the care that sick patients receive, and donor attention in the health sector is increasingly focused on quality-of-care investments such as enhanced training and supervision of health providers. This interest in quality of care will only increase in the coming years as the epidemiological transition shifts the relative disease burden toward chronic illness. Why? Because proper management of chronic illness requires repeated high-quality interactions with the health system.
The demand and expectation for concrete policy learning from impact evaluation are high. Quite often we want more than an answer to the basic question that IE addresses: “what is the impact of intervention X on outcome Y in setting Z?” We also want to know the why and the how behind these observed impacts. But these why and how questions, which for various reasons are often not explicitly incorporated in the IE design, can be particularly challenging.
Well I’m writing this on Election Day evening here in the U.S., and am rather consumed by the events at hand.
In honor of Halloween (today), let’s talk about the nightmare of insect swarms, composed of millions of voracious insects devouring everything they encounter.
As empiricists, we spend a lot of time worrying about the accuracy of economic and socio-behavioral measurement. We want our data to reflect the targeted underlying truth. Unfortunately, misreporting by study subjects, either accidental or deliberate, is a constant risk. The deliberate kind is much more difficult to deal with because it is driven by complicated and unobserved respondent intentions: either to hide sensitive information or to please the perceived intentions of the interviewer. Respondents who misreport information for their own benefit are said to be “gaming,” and the challenge of gaming extends beyond research activities to development programs whose success depends on the accuracy of self-reported information.
The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inferences in a relatively assumption-free manner. When randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity-score matching. For many of our readers this summary is nothing new. But fortunately, in our “community of practice” new statistical tools are developed at a rapid rate.
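As a toy illustration of why randomized assignment licenses such a simple analysis, here is a sketch with entirely made-up numbers (the effect size and outcome distribution are assumptions for illustration): because treatment is assigned by lottery, the treated and control groups are comparable in expectation, and the difference in mean outcomes estimates the average treatment effect.

```python
import random

random.seed(1)

TRUE_EFFECT = 2.0   # assumed (illustrative) average treatment effect
n = 10_000

# Randomized assignment: half the sample drawn by lottery into treatment.
treated = set(random.sample(range(n), n // 2))

outcomes = []
for i in range(n):
    y0 = random.gauss(10, 3)  # untreated potential outcome
    outcomes.append(y0 + (TRUE_EFFECT if i in treated else 0.0))

y_treat = [outcomes[i] for i in treated]
y_control = [outcomes[i] for i in range(n) if i not in treated]

# Simple difference in means recovers the effect, no modeling assumptions needed.
ate_hat = sum(y_treat) / len(y_treat) - sum(y_control) / len(y_control)
print(f"estimated effect: {ate_hat:.2f}")  # close to the assumed effect of 2.0
```

No covariate adjustment is required here: the lottery guarantees that untreated potential outcomes have the same distribution in both arms, which is exactly the comparability that quasi-experimental methods must instead buy with assumptions.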
Often in IE (and in social research more generally) the researcher wishes to know respondent views or information regarded as highly sensitive and hence difficult to elicit directly through a survey. There are numerous examples of such sensitive information: sexual history, especially as it relates to risky or taboo practices; violence in the home; and political or religious views.