A common question of interest in evaluations is “which groups does the treatment work best for?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many possible baseline characteristics to examine, and the heterogeneity of real interest may be with respect to outcomes in the absence of treatment. Consider two examples:
A: A vocational training program for the unemployed: we might want to know if the treatment helps more those who were likely to stay unemployed in the absence of an intervention compared to those who would have been likely to find a job anyway.
B: Smaller class sizes: we might want to know if the treatment helps more those students whose test scores would have been low in the absence of smaller classes, compared to those students who were likely to get high test scores anyway.
A reasonably common way to address this is to do the following:
- Take the control group data, and fit a model which relates the control group outcome (e.g. test score in the follow-up period) to the baseline characteristics.
- Then use the fitted coefficients from this model to get, for both the treatment and control groups, each unit’s predicted outcome in the absence of treatment.
- Then split the sample into groups on the basis of these predicted outcome values (e.g. a predicted low test score group, a predicted medium test score group, and a predicted high test score group) and look at heterogeneity in treatment effects across these groups.
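To make this procedure concrete, here is a minimal sketch in Python. It assumes a linear regression as the prediction model, three predicted-outcome groups, and a simple treated-minus-control difference in means within each group as the treatment effect estimate; the function and variable names are illustrative rather than taken from the paper or any particular package.

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def naive_endogenous_stratification(X, y, treated, n_groups=3):
    """Naive procedure: fit the outcome model on the control group, predict
    the no-treatment outcome for everyone, stratify on the predictions, and
    take a treated-minus-control difference in means within each stratum."""
    X = np.asarray(X)
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)

    # Step 1: fit the prediction model on control observations only
    model = LinearRegression().fit(X[~treated], y[~treated])

    # Step 2: predicted outcome in the absence of treatment, for both groups
    y_hat = model.predict(X)

    # Step 3: split the sample into groups by predicted outcome (e.g. terciles)
    cutoffs = np.quantile(y_hat, np.linspace(0, 1, n_groups + 1)[1:-1])
    group = np.digitize(y_hat, cutoffs)

    # Step 4: heterogeneity = difference in mean outcomes within each group
    return {g: y[(group == g) & treated].mean() - y[(group == g) & ~treated].mean()
            for g in range(n_groups)}
```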
This procedure turns out to produce biased estimates of the treatment effects across these predicted-outcome groups. The bias arises from overfitting, and it is larger when the control group sample size is small and the number of X variables used to predict outcomes is relatively large. But “small” here is not so small – the authors show the bias can be substantial in samples of around 1,000 units, for example.
What should you do instead?
Abadie et al. describe a couple of close alternatives that have much lower bias, and maintain consistency.
- Leave-one-out estimation: here you estimate the prediction model separately for each control group unit, excluding that unit from the estimation sample – this way observations can’t contribute to their own predicted values. You then use the full control group sample to estimate the predicted values for the treatment group (a sketch follows this list).
- Repeated split sample estimation: here you split the control group into two parts: a prediction group and an estimation group. You estimate the model using only the prediction group, and then use only the treatment group and the estimation group in estimating the treatment effect. You do this for M different ways of splitting the sample, and average the treatment effect over these repeated estimates (see the second sketch below).
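A rough sketch of the leave-one-out alternative, under the same illustrative assumptions as above (linear regression, three predicted-outcome groups, within-group differences in means). The repeated refits are slower, but they keep each control unit out of its own prediction:

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def loo_endogenous_stratification(X, y, treated, n_groups=3):
    """Leave-one-out variant: each control unit's prediction comes from a model
    fit on all *other* control units, so no observation contributes to its own
    predicted value; treated units are predicted from the full control group."""
    X = np.asarray(X)
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    y_hat = np.empty(len(y))

    # Treated units: predictions from the model fit on the full control group
    full_model = LinearRegression().fit(X[~treated], y[~treated])
    y_hat[treated] = full_model.predict(X[treated])

    # Control units: refit the model leaving each unit out in turn
    control_idx = np.where(~treated)[0]
    for i in control_idx:
        others = control_idx[control_idx != i]
        m = LinearRegression().fit(X[others], y[others])
        y_hat[i] = m.predict(X[i:i + 1])[0]

    # Stratify on the predictions and compare means within strata, as before
    cutoffs = np.quantile(y_hat, np.linspace(0, 1, n_groups + 1)[1:-1])
    group = np.digitize(y_hat, cutoffs)
    return {g: y[(group == g) & treated].mean() - y[(group == g) & ~treated].mean()
            for g in range(n_groups)}
```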
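And a corresponding sketch of the repeated split-sample alternative, again with illustrative choices (an even split of the control group, 100 repeated splits, and the same grouping and difference-in-means logic as before):

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def split_sample_endogenous_stratification(X, y, treated, n_groups=3,
                                           n_splits=100, seed=0):
    """Repeated split-sample variant: one half of the control group is used
    only to fit the prediction model; the other half, together with the
    treated units, is used only to estimate the within-group effects.
    Estimates are averaged over n_splits random splits."""
    X = np.asarray(X)
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    rng = np.random.default_rng(seed)
    control_idx = np.where(~treated)[0]
    treated_idx = np.where(treated)[0]
    effects = np.zeros((n_splits, n_groups))

    for s in range(n_splits):
        # Randomly split the control group into prediction and estimation halves
        shuffled = rng.permutation(control_idx)
        predict_half, estimate_half = np.array_split(shuffled, 2)

        # Fit the prediction model on the prediction half only
        model = LinearRegression().fit(X[predict_half], y[predict_half])

        # Predict no-treatment outcomes for the estimation sample
        est_idx = np.concatenate([treated_idx, estimate_half])
        y_hat = model.predict(X[est_idx])

        # Stratify the estimation sample and take within-group mean differences
        cutoffs = np.quantile(y_hat, np.linspace(0, 1, n_groups + 1)[1:-1])
        group = np.digitize(y_hat, cutoffs)
        is_treated = treated[est_idx]
        y_est = y[est_idx]
        for g in range(n_groups):
            in_g = group == g
            effects[s, g] = (y_est[in_g & is_treated].mean()
                             - y_est[in_g & ~is_treated].mean())

    # Average the within-group treatment effects over the repeated splits
    return effects.mean(axis=0)
```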