Jed Friedman's blog

Sorting through heterogeneity of impact to enhance policy learning

The demand and expectation for concrete policy learning from impact evaluation are high. Quite often we want to answer not only the basic question that IE addresses, “what is the impact of intervention X on outcome Y in setting Z?”, but also the why and the how behind the observed impacts. These why and how questions, for various reasons, are often not explicitly incorporated in the IE design, and they can be particularly challenging to answer.

Sifting through data to detect deliberate misreporting in pay-for-performance schemes

As empiricists, we spend a lot of time worrying about the accuracy of economic and socio-behavioral measurement. We want our data to reflect the targeted underlying truth. Unfortunately, misreporting by study subjects, whether accidental or deliberate, is a constant risk. The deliberate kind of misreporting is much more difficult to deal with because it is driven by complicated and unobserved respondent intentions, either to hide sensitive information or to conform to the perceived expectations of the interviewer. Respondents who misreport information for their own benefit are said to be “gaming”, and the challenge of gaming extends beyond research activities to development programs whose success depends on the accuracy of self-reported information.

Tools of the trade: The covariate balanced propensity score

The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inference in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our “community of practice”, new statistical tools are developed at a rapid rate.
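
To make the balance-checking idea behind such tools concrete, here is a minimal illustrative sketch in Python. It uses simulated data and a plain logistic propensity score with inverse-propensity weighting, not the CBPS estimator itself, and all variable names are hypothetical; the point is simply to show how covariate balance can be compared before and after weighting.

```python
# Minimal sketch (NOT the CBPS estimator): fit a plain logistic propensity
# score, form inverse-propensity weights, and compare covariate balance
# before and after weighting. Data are simulated; names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                                  # observed covariates
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))  # true assignment model
d = rng.binomial(1, p_true)                                  # treatment indicator

# Step 1: estimate the propensity score e(x) = Pr(D = 1 | X)
e_hat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Step 2: inverse-propensity weights
w = np.where(d == 1, 1.0 / e_hat, 1.0 / (1.0 - e_hat))

# Step 3: standardized mean differences, raw vs. weighted
def std_mean_diff(col, d, weights=None):
    weights = np.ones_like(col) if weights is None else weights
    m1 = np.average(col[d == 1], weights=weights[d == 1])
    m0 = np.average(col[d == 0], weights=weights[d == 0])
    pooled_sd = np.sqrt((col[d == 1].var() + col[d == 0].var()) / 2)
    return (m1 - m0) / pooled_sd

for j in range(x.shape[1]):
    print(f"covariate {j}: raw SMD = {std_mean_diff(x[:, j], d):.3f}, "
          f"weighted SMD = {std_mean_diff(x[:, j], d, w):.3f}")
```

The covariate balanced propensity score differs from this two-step routine in that it builds covariate balance into the estimation of the propensity score itself rather than checking balance after the fact.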

Being indirect sometimes gets closer to the truth: New work on indirect elicitation surveys

Often in IE (and in social research more generally), the researcher wishes to know respondent views or information that is regarded as highly sensitive and hence difficult to elicit through direct survey questions. There are numerous examples of such sensitive information: sexual history, especially as it relates to risky or taboo practices; violence in the home; and political or religious views.
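
As one illustration of how an indirect approach can work, a widely used design is the list (item-count) experiment, in which respondents report only how many items on a list apply to them rather than answering the sensitive question directly; the post itself may cover this or other designs. A minimal sketch with simulated data and hypothetical variable names:

```python
# Illustrative sketch of the list (item-count) experiment estimator.
# A random half of respondents sees J innocuous items; the other half sees
# the same items plus the sensitive item. Respondents report only the COUNT
# of items that apply, so no one directly admits the sensitive behavior.
# The difference in mean counts estimates the prevalence of that behavior.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
treat = rng.binomial(1, 0.5, size=n)          # assigned the longer list?

# Simulated truth: 3 innocuous items, 20% prevalence of the sensitive behavior
innocuous_count = rng.binomial(1, 0.4, size=(n, 3)).sum(axis=1)
sensitive = rng.binomial(1, 0.2, size=n)
reported_count = innocuous_count + treat * sensitive   # the only observed response

# Difference-in-means estimate of prevalence, with a simple standard error
prev_hat = reported_count[treat == 1].mean() - reported_count[treat == 0].mean()
se = np.sqrt(reported_count[treat == 1].var(ddof=1) / (treat == 1).sum()
             + reported_count[treat == 0].var(ddof=1) / (treat == 0).sum())
print(f"estimated prevalence of sensitive behavior: {prev_hat:.3f} (SE {se:.3f})")
```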

Some basic reflections on strong IE proposal writing

When Development Impact shut down for August, I had ambitious goals. Unfortunately, I didn’t meet them all (why does that always happen?). However, I did manage to review, at a mad pace, almost 60 proposals for the funding of prospective impact evaluations financed by various organizations and donors. Many of these proposals were excellent (unfortunately, not all could be funded). Still, it was surprisingly informative to read so many proposals in such a condensed period of time.

Identifying the dynamic protective effects of social programs

The short-term benefits of certain social support programs, such as conditional cash transfers (CCTs), have been well documented: CCT programs tend to raise household consumption as well as the utilization of schools and health clinics. It is natural, and of great interest, to think more dynamically and ask whether these programs also enable households to invest in productive assets.

Wealth and the endogeneity of behavior

Allow me to take the occasion of the 236th “birthday” of my native country (celebrated on July 4th here in the U.S.) to go far afield and discuss a topic that, while grounded in empirical social science, doesn’t touch directly on impact evaluation. The topic is how the personality traits of an individual may be related to his or her relative wealth.