
Tools of the trade: The covariate balanced propensity score

By Jed Friedman

The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inferences in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, new statistical tools are developed in our “community of practice” at a rapid rate.
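As background for the post's topic, here is a minimal sketch (not from the post, on simulated data) of the standard two-step propensity score that the covariate balanced propensity score (CBPS) refines: fit a logistic regression of treatment on covariates, then check covariate balance after inverse-propensity weighting. CBPS's innovation is to fold that balance requirement into the estimation step itself rather than checking it afterwards.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated covariates and a treatment whose assignment depends on them
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))
treated = rng.binomial(1, p_true)

# Step 1 -- propensity score: estimated P(treated | x) from a logit
model = LogisticRegression().fit(x, treated)
pscore = model.predict_proba(x)[:, 1]

# Step 2 -- inverse-propensity weights; the usual diagnostic is whether
# the weighted covariate means now balance across the two arms
weights = np.where(treated == 1, 1 / pscore, 1 / (1 - pscore))
wmean_t = np.average(x[treated == 1, 0], weights=weights[treated == 1])
wmean_c = np.average(x[treated == 0, 0], weights=weights[treated == 0])
```

The variable names and simulated data here are illustrative only; the point is the two-step structure whose balance check CBPS turns into a moment condition.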

Opening Up Microdata Access in Africa

By Gabriel Demombynes

Recently I attended the inaugural meeting of the Data for African Development Working Group put together by the Center for Global Development and the African Population & Health Research Center here in Nairobi. The group aims to improve data for policymaking on the continent and in particular to overcome “political economy” problems in data collection and dissemination.

Being indirect sometimes gets closer to the truth: New work on indirect elicitation surveys

By Jed Friedman

Often in IE (and in social research more generally) the researcher wishes to know respondent views or information regarded as highly sensitive and hence difficult to elicit directly through surveys. There are numerous examples of this sensitive information: sexual history, especially as it relates to risky or taboo practices; violence in the home; and political or religious views.
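One classic indirect elicitation design, offered here as an illustrative sketch rather than as the specific method in the new work, is the list experiment (item count technique): a control group counts how many of J innocuous items apply to them, a treatment group counts from the same list plus the sensitive item, and the difference in mean counts estimates the sensitive item's prevalence without any respondent revealing their individual answer. The numbers below are simulated and purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000                 # respondents per arm (hypothetical)
true_prev = 0.30         # hypothetical prevalence of the sensitive behavior

# Each respondent endorses some of 3 innocuous baseline items
base_count = rng.binomial(3, 0.5, size=2 * n)
holds_sensitive = rng.binomial(1, true_prev, size=2 * n)
treat = np.repeat([0, 1], n)   # treatment list also includes the sensitive item

# Respondents report only a total count, never which items applied
reported = base_count + treat * holds_sensitive

# Difference in mean counts recovers the sensitive item's prevalence
estimate = reported[treat == 1].mean() - reported[treat == 0].mean()
```

The privacy protection comes from aggregation: any individual count is consistent with many combinations of items, yet the arm-level difference identifies the prevalence.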

The Tao of Impact Evaluation

By Markus Goldstein

Is in danger of being messed up. Here is why: there are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: the program continues or not. Learning is more amorphous – we do impact evaluation to see if a project works, but we try to build in as many ways to understand the results as possible, maybe do a couple of treatment arms so we see what works better than what. In learning evaluations, real failure is a lack of statistical power, more so than the program working or…
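Since the teaser pins "real failure" on a lack of statistical power, a back-of-the-envelope power calculation helps make the point concrete. This is a standard normal-approximation formula for a two-arm trial with equal allocation and unit outcome variance, not anything specific to the post; the sample sizes are illustrative.

```python
from scipy.stats import norm

def power_two_arm(effect_sd, n_per_arm, alpha=0.05):
    """Approximate power to detect a standardized mean difference
    (effect in SD units) in a two-arm trial, equal allocation,
    unit variance, two-sided test at level alpha."""
    se = (2.0 / n_per_arm) ** 0.5           # SE of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_sd / se - z_crit)
```

For example, detecting a 0.2 SD effect with 400 respondents per arm gives power of roughly 0.81 – the familiar textbook benchmark – while halving the sample drops power well below the conventional 80% target.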

Help for attrition is just a phone call away – a new bounding approach to help deal with non-response

By David McKenzie

Attrition is a bugbear for most impact evaluations and can cause even the best-designed experiments to be subject to potential bias. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
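To give a flavor of the idea – this is a simplified sketch on simulated data, not the authors' exact estimator – suppose the two arms have different survey response rates. One can equalize the effective response rates by keeping, in each arm, only the easiest-to-reach share (fewest call attempts) matching the lower of the two rates, and then compare outcomes on those trimmed subsamples. The paper develops this into formal bounds under a monotonicity assumption on responsiveness; the sketch below shows just one trimming direction, with hypothetical numbers throughout.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
treat = rng.binomial(1, 0.5, n)
y = 1.0 * treat + rng.normal(size=n)      # hypothetical true effect of 1

# Call attempts needed to reach each person; here the control group is
# (hypothetically) harder to reach, so fewer respond within 5 attempts
attempts = rng.integers(1, 6, size=n) + (1 - treat)
responded = attempts <= 5

# Lower of the two arms' response rates sets the common trimming share
low_rate = min(responded[treat == 1].mean(), responded[treat == 0].mean())

def trimmed_mean(arm):
    # keep only the easiest-to-reach fraction low_rate of this arm
    order = np.argsort(attempts[treat == arm], kind="stable")
    k = int(round(low_rate * order.size))
    return y[treat == arm][order[:k]].mean()

effect = trimmed_mean(1) - trimmed_mean(0)
```

Because the simulation makes outcomes independent of reachability, the trimmed comparison recovers the true effect here; with real data, trimming from the top versus the bottom of the attempts distribution yields an upper and a lower bound.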

Friday links: Long-term impacts of moving to a better neighborhood, hot workers are less productive, how NGOs do IE and more

By David McKenzie

- In Science this week (gated), Katz and Kling add some co-authors and follow up on their famous Econometrica paper on the Moving to Opportunity program to examine impacts 10-15 years after moving from a high-poverty to a low-poverty neighborhood.

Trials – A journal I did not know existed

By Berk Ozler

Reporting of findings from studies in economics is changing, and likely for the better. It’s hard not to credit at least some of this improvement to the proliferation of RCTs in the field. As issues of publication bias, internal and external validity, ex-ante registration of protocols and primary data analysis plans, open data, etc. are being debated, the way we report research findings is changing.