Jed Friedman's blog

Measuring the rate at which we discount the future: a comparison of two new field-based approaches

Many key economic decisions involve implicit trade-offs over time: how much to save or invest today affects how much we can spend both today and tomorrow, and individuals will differ in their preference for satisfaction today versus delayed satisfaction tomorrow. Economists call the relative preference (or disfavor) for the present over the future a discount rate (i.e. the rate at which we discount the future relative to the present), and the discount rate is a core parameter in economic models of choice and behavior.
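
As a concrete illustration, consider the simplest field elicitation behind such measurements: a respondent chooses between a smaller amount today and a larger amount later, and the switching point implies a discount rate. The sketch below assumes standard exponential discounting; the function and the figures in it are hypothetical, not taken from the post.

```python
# Minimal sketch, assuming exponential discounting: a respondent indifferent
# between `amount_today` now and `amount_later` after `periods` reveals a
# per-period discount rate r satisfying amount_today = amount_later / (1+r)^periods.

def implied_discount_rate(amount_today, amount_later, periods):
    """Per-period discount rate implied by an indifference point."""
    return (amount_later / amount_today) ** (1.0 / periods) - 1.0

# Hypothetical example: indifference between $100 now and $121 in two periods
# implies a per-period discount rate of 10%.
print(implied_discount_rate(100.0, 121.0, 2))  # ~0.10
```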

Behind low rates of participation in micro-insurance: a misunderstanding of the insurance concept?

Micro-insurance pilot programs begin with grand hopes that the target population will enroll and obtain program benefits, but implementers are often disappointed when, after much planning and effort, few people actually take up the program. Take-up rates greater than 30% are apparently rare, and rates often do not exceed 15%. Furthermore, only a fraction of beneficiaries choose to renew their participation after the initial enrollment period.

Tools of the trade: recent tests of matching estimators through the evaluation of job-training programs

Of all the impact evaluation methods, the one that consistently (and justifiably) comes last in the methods courses we teach is matching. We de-emphasize this method because it requires the strongest assumptions to yield a valid estimate of causal impact. Most important is the assumption of unconfoundedness, namely that selection into treatment can be accurately captured solely as a function of observable covariates in the data.
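
To make the unconfoundedness assumption concrete, here is a minimal propensity-score matching sketch on simulated data. The data-generating process, variable names, and the use of scikit-learn are illustrative assumptions, not details from the post.

```python
# A minimal propensity-score matching sketch under unconfoundedness,
# on simulated data where treatment depends only on observed covariates X.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                                   # observed covariates
p_true = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))         # true selection rule
D = rng.binomial(1, p_true)                                   # treatment indicator
Y = 1.0 * D + X @ np.array([0.8, -0.4]) + rng.normal(size=n)  # true effect = 1.0

# Step 1: estimate the propensity score from observables alone.
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the nearest control on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[D == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[D == 1].reshape(-1, 1))

# Step 3: the treatment-on-the-treated estimate is the mean matched-pair gap.
att = (Y[D == 1] - Y[D == 0][idx.ravel()]).mean()
print(f"Estimated ATT: {att:.2f}")  # recovers ~1.0 because selection is on X only
```

If selection also depended on unobservables, this same code would run without complaint but the estimate would be biased, which is exactly why the assumption deserves the scrutiny it gets.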

Do financial incentives undermine the motivation of public sector workers? Maybe, but where is the evidence from the field?

These past weeks I’ve visited several southern African nations to assist ongoing evaluations of health sector pay-for-performance reforms. It’s been a whirlwind of government meetings, field trips, and periods of data crunching. We’ve made good progress and also discovered roadblocks; in other words, business as usual in this line of work. One qualitative data point has stayed with me throughout these weeks, the paraphrased words of one clinic worker: “I like this new program because it makes me feel that the people in charge of the system care about us.”

Using spatial variation in program performance to identify causal impact

I’ve read several research proposals in the past few months, and engaged in discussions, that touch on the same question: how to use spatial variation in a program’s intensity to evaluate its causal impact. Since these proposals and conversations all mentioned the same fairly recent paper by Markus Frolich and Michael Lechner, I eagerly sat down to read it.

Caution when applying impact evaluation lessons across contexts: the case of financial incentives for health workers

These past few weeks I’ve been immersed in reviews of health systems research proposals and it’s fascinating to see the common themes that emerge from each round of proposals as well as the literature cited to justify these themes as worthy of funding.

Tools of the trade: when to use those sample weights

In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion that I share at times). A recent working paper by Gary Solon, Steven Haider, and Jeffrey Wooldridge goes straight to the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.
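
As a sketch of the choice the paper addresses, the snippet below compares unweighted OLS with a regression weighted by sampling weights; the data and weights are simulated for illustration and are not drawn from the paper.

```python
# Illustrative comparison of unweighted vs. weighted regression (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # correctly specified linear model
w = rng.uniform(0.5, 2.0, size=n)        # stand-in for survey sampling weights

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                 # unweighted
wls = sm.WLS(y, X, weights=w).fit()      # weighted by the sampling weights

print("OLS:", ols.params)   # both are consistent here; when the model is
print("WLS:", wls.params)   # misspecified or sampling is endogenous, they diverge
```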

Trying to measure what workers actually do: the task approach to job content

Worker training and skill upgrading programs are a major focus in impact evaluation work. The design of such training programs implicitly involves the identification of the activities that a worker needs to accomplish in a job. Only then can the program offer training in the set of skills required to complete these identified tasks.

Thinking about the placebo effect as a “meaning response” and the implication for policy evaluation

In recent conversations on research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect – the change could be due to simple regression to the mean.
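
A small simulation makes the distinction concrete (an illustrative sketch, not from the post): with no placebo response and no treatment at all, a group enrolled because of poor baseline measurements still "improves" on average at follow-up.

```python
# Regression to the mean with zero intervention: selecting on a noisy baseline
# guarantees apparent improvement at an independent follow-up measurement.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
true_state = rng.normal(0, 1, n)              # stable underlying health
baseline = true_state + rng.normal(0, 1, n)   # noisy baseline measurement
followup = true_state + rng.normal(0, 1, n)   # independent noise at follow-up

# Enroll only those measured as worst-off at baseline (bottom quartile),
# as a program recruiting on symptoms would.
enrolled = baseline < np.quantile(baseline, 0.25)

print(f"Enrollees, mean baseline:  {baseline[enrolled].mean():.2f}")
print(f"Enrollees, mean follow-up: {followup[enrolled].mean():.2f}")
# Follow-up is markedly higher than baseline with no placebo and no treatment.
```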
