Jed Friedman's blog

Trying to measure what workers actually do: the task approach to job content

Worker training and skill-upgrading programs are a major focus of impact evaluation work. Designing such a program implicitly requires identifying the activities a worker needs to accomplish on the job. Only then can the program offer training in the skills required to complete these identified tasks.

Thinking about the placebo effect as a “meaning response” and the implication for policy evaluation

In recent conversations on research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect – the change could be due to simple regression to the mean.
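Regression to the mean is easy to see in a simulation: when subjects are selected because their baseline scores were extreme, their follow-up scores drift back toward the population mean even with no intervention at all. A minimal sketch of that point (the variable names and the noise model here are illustrative assumptions, not taken from the post):

```python
import random

random.seed(0)

def simulate_regression_to_mean(n=10000, cutoff=1.0):
    """Each subject's observed score = stable true score + transient noise.
    Select subjects with a high baseline score, then measure them again."""
    baseline_selected = []
    followup_selected = []
    for _ in range(n):
        true_score = random.gauss(0, 1)              # stable component
        baseline = true_score + random.gauss(0, 1)   # baseline = truth + noise
        if baseline > cutoff:                        # select on extreme baseline
            followup = true_score + random.gauss(0, 1)  # fresh noise draw
            baseline_selected.append(baseline)
            followup_selected.append(followup)
    mean_b = sum(baseline_selected) / len(baseline_selected)
    mean_f = sum(followup_selected) / len(followup_selected)
    return mean_b, mean_f

mean_b, mean_f = simulate_regression_to_mean()
# With no treatment of any kind, the follow-up mean falls back toward 0,
# which could easily be mistaken for a placebo effect in a control group.
print(round(mean_b, 2), round(mean_f, 2))
```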

Feigning illness to improve care: Recent lessons from standardized patients in rural

A key determinant of good health is the quality of the care that sick patients receive, and donor attention in the health sector is increasingly focused on quality of care investments such as enhanced training and supervision of health providers. This interest in the quality of care will only increase further in the coming years as the epidemiological transition shifts the relative disease burden towards chronic illnesses. Why? Because proper management of chronic illness requires repeated high quality interactions with the health system.

Sorting through heterogeneity of impact to enhance policy learning

The demand and expectation for concrete policy learning from impact evaluation are high. Quite often we want to know more than the basic question that IE addresses: "what is the impact of intervention X on outcome Y in setting Z". We also want to know the why and the how behind these observed impacts. But these why and how questions, often not explicitly incorporated in the IE design, can be particularly challenging to answer.

Sifting through data to detect deliberate misreporting in pay-for-performance schemes

As empiricists, we spend a lot of time worrying about the accuracy of economic and socio-behavioral measurement. We want our data to reflect the targeted underlying truth. Unfortunately, misreporting by study subjects, whether accidental or deliberate, is a constant risk. Deliberate misreporting is much more difficult to deal with because it is driven by complicated and unobserved respondent intentions – either to hide sensitive information or to please the perceived expectations of the interviewer. Respondents who misreport information for their own benefit are said to be "gaming", and the challenge of gaming extends beyond research activities to development programs whose success depends on the accuracy of self-reported information.

Tools of the trade: The covariate balanced propensity score

The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inference in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our "community of practice" new statistical tools are developed at a rapid rate.
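The motivation behind tools like the covariate balancing propensity score is that a fitted propensity score is only useful insofar as weighting by it actually balances the covariates across treatment and control. A minimal, stdlib-only sketch of that balance check (the toy data, scores, and function names are illustrative assumptions; this is the diagnostic, not the CBPS estimator itself):

```python
def ipw_balance(covariate, treated, pscore):
    """Inverse-propensity-weighted covariate means for treated and control.
    If the propensity model is adequate, the two weighted means should be close."""
    wt_t = [1 / p for t, p in zip(treated, pscore) if t]
    xs_t = [x for x, t in zip(covariate, treated) if t]
    wt_c = [1 / (1 - p) for t, p in zip(treated, pscore) if not t]
    xs_c = [x for x, t in zip(covariate, treated) if not t]
    mean_t = sum(w * x for w, x in zip(wt_t, xs_t)) / sum(wt_t)
    mean_c = sum(w * x for w, x in zip(wt_c, xs_c)) / sum(wt_c)
    return mean_t, mean_c

# Toy data: treatment is more likely at high covariate values,
# with hypothetical fitted propensity scores.
covariate = [0, 0, 1, 1, 2, 2, 3, 3]
treated   = [0, 0, 0, 1, 0, 1, 1, 1]
pscore    = [0.1, 0.1, 0.35, 0.35, 0.65, 0.65, 0.9, 0.9]
mean_t, mean_c = ipw_balance(covariate, treated, pscore)
# Weighting shrinks the raw gap (2.25 vs 0.75) but residual imbalance
# remains -- exactly the gap that balance-targeting estimators attack.
print(round(mean_t, 2), round(mean_c, 2))
```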

Being indirect sometimes gets closer to the truth: New work on indirect elicitation surveys

Often in IE (and in social research more generally) the researcher wishes to know respondent views or information regarded as highly sensitive and hence difficult to elicit directly through a survey. There are numerous examples of such sensitive information: sexual history, especially as it relates to risky or taboo practices; violence in the home; and political or religious views.
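Indirect elicitation methods share a common logic: inject known randomness so that no individual answer is revealing, while the aggregate still identifies the quantity of interest. One classic example is the forced-response randomized response design, where each respondent privately randomizes and, with known probability p, answers the sensitive question truthfully; otherwise they must answer "yes". Since P(yes) = p·π + (1 − p), the prevalence π can be backed out. A minimal sketch of that estimator (the design parameters below are illustrative, not from the post):

```python
def estimate_prevalence(yes_share, p_truth):
    """Forced-response randomized response estimator.
    P(yes) = p_truth * pi + (1 - p_truth), so invert for pi."""
    return (yes_share - (1 - p_truth)) / p_truth

# Example: 70% of respondents answer truthfully, 30% are forced to say "yes".
# If 44.5% of all answers are "yes", the implied true prevalence is about 21%.
pi_hat = estimate_prevalence(0.445, 0.7)
print(round(pi_hat, 3))
```

Note the design trade-off: a smaller p_truth gives respondents stronger privacy protection but inflates the variance of the estimator, since the same sampling noise in yes_share is divided by a smaller number.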