The demand and expectation for concrete policy learning from impact evaluation are high. Often we want to know more than the basic question that IE addresses: "what is the impact of intervention X on outcome Y in setting Z?" We also want to know the why and the how behind the observed impacts. But these why and how questions, for various reasons, are often not explicitly incorporated in the IE design and can be particularly challenging.
Jed Friedman's blog
Well, I'm writing this on Election Day evening here in the U.S., and I am rather consumed by the events at hand.
In honor of Halloween (today), let's talk about the nightmare of insect swarms: millions of voracious insects devouring everything they encounter.
As empiricists, we spend a lot of time worrying about the accuracy of economic and socio-behavioral measurement. We want our data to reflect the targeted underlying truth. Unfortunately, misreporting by study subjects, whether accidental or deliberate, is a constant risk. The deliberate kind is much more difficult to deal with because it is driven by complicated and unobserved respondent intentions, either to hide sensitive information or to satisfy the perceived wishes of the interviewer. Respondents who misreport information for their own benefit are said to be "gaming", and the challenge of gaming extends beyond research activities to development programs whose success depends on the accuracy of self-reported information.
The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inference in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our "community of practice", new statistical tools are developed at a rapid rate.
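The logic of randomized assignment can be illustrated with a minimal simulation (a hypothetical example using only the Python standard library, not drawn from any particular evaluation): when treatment is assigned at random, a simple difference in group means recovers the true average treatment effect without further modeling assumptions.

```python
import random
import statistics

random.seed(0)

N = 10_000
TRUE_EFFECT = 2.0  # hypothetical constant treatment effect

# Potential outcomes: y0 is each unit's outcome without treatment, y1 with it.
y0 = [random.gauss(10, 3) for _ in range(N)]
y1 = [y + TRUE_EFFECT for y in y0]

# Random assignment: each unit receives treatment with probability 0.5,
# independently of its potential outcomes.
treated = [random.random() < 0.5 for _ in range(N)]

observed_t = [y1[i] for i in range(N) if treated[i]]
observed_c = [y0[i] for i in range(N) if not treated[i]]

# Because assignment is independent of potential outcomes, the difference
# in observed means is an unbiased estimate of the average treatment effect.
estimate = statistics.mean(observed_t) - statistics.mean(observed_c)
print(round(estimate, 2))
```

With ten thousand units the estimate lands close to the true effect of 2.0; the same comparison would be biased if, say, units with higher baseline outcomes were more likely to select into treatment.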
Often in IE (and in social research more generally), the researcher wishes to know respondent views or information regarded as highly sensitive and hence difficult to elicit directly through a survey. There are numerous examples of such sensitive information: sexual history, especially as it relates to risky or taboo practices; violence in the home; and political or religious views.
When Development Impact shut down for August, I had ambitious goals. Unfortunately, I didn't meet them all (why does that always happen?). However, I did manage to madly review almost 60 proposals for prospective impact evaluations seeking funding from various organizations and donors. Many of these proposals were excellent (unfortunately not all could be funded), and reading so many in such a condensed time turned out to be surprisingly informative.
Last week David linked to a virtual discussion involving Dave Giles and Steffen Pischke on the merits or demerits of the Linear Probability Model (LPM).
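To make the LPM concrete, here is a small sketch on hypothetical data (standard-library Python only): the LPM is simply ordinary least squares run on a binary outcome, which makes coefficients directly interpretable as marginal effects on the probability, but leaves fitted values unbounded, one of the demerits commonly raised against it.

```python
import random
import statistics

random.seed(1)

# Hypothetical data: a binary outcome whose probability rises linearly in x.
n = 5_000
x = [random.uniform(0, 10) for _ in range(n)]
y = [1 if random.random() < 0.10 + 0.08 * xi else 0 for xi in x]

# The LPM is just OLS of the 0/1 outcome on x (single-regressor formulas).
xbar, ybar = statistics.mean(x), statistics.mean(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum(
    (xi - xbar) ** 2 for xi in x
)
intercept = ybar - slope * xbar

# The slope estimates the marginal effect on the probability (about 0.08
# here), but the fitted line is unbounded: extrapolating to x = 12 yields
# a predicted "probability" greater than one.
print(round(slope, 3), round(intercept + slope * 12, 3))
```

A logit or probit would keep predictions inside the unit interval at the cost of a nonlinear model; the debate linked above turns on when the LPM's simplicity and direct marginal-effect interpretation outweigh that drawback.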
The short-term benefits of certain social support programs, such as conditional cash transfers (CCTs), have been well documented: CCT programs tend to raise household consumption as well as the utilization of schools and health clinics. It is a natural question, and one of great interest, to think more dynamically and ask whether these programs also enable households to invest in productive assets.
Allow me to take the occasion of the 236th "birthday" of my native country (celebrated on July 4th here in the U.S.) to go far afield and discuss a topic that, while grounded in empirical social science, doesn't touch directly on impact evaluation. The topic is how the personality traits of an individual may be related to his or her relative wealth.