Published on Development Impact

Attrition rates typically aren't that different between the control and treatment groups – really? And why?

When I start discussing evaluations with government partners, and note the need for us to follow a control group who did not get the program and survey them over time, one of the first questions I always get is "Won't it be really hard to get them to respond?". I often answer with reference to a couple of case examples from my own work, but now have a new answer, courtesy of a recent paper on testing for attrition bias in experiments by Dalia Ghanem, Sarojini Hirshleifer and Karen Ortiz-Becerra.

As part of the paper, they conduct a systematic review of field experiments with baseline data published in the top 5 economics journals plus the AEJ Applied, EJ, ReStat, and JDE over the years 2009 to 2015, covering 84 journal articles. They note that attrition is a common problem, with 43% of these experiments having attrition rates over 15% and 68% having attrition rates over 5%. The paper then discusses what the appropriate tests are for figuring out whether this is a problem. But I wanted to highlight this panel from Figure 1 in their paper, which plots the absolute value of the difference in attrition rates between treatment and control. They note "64% have a differential rate that is less than 2 percentage points, and only 10% have a differential attrition rate that is greater than 5 percentage points." That is, attrition rates aren't much different for the control group.



[Figure 1, Ghanem, Hirshleifer and Ortiz-Becerra: distribution of the absolute difference in attrition rates between treatment and control]
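For concreteness, here is a minimal sketch of how these quantities get computed from follow-up survey data. This is my own illustration, not code from the paper, and the column names (`treated`, `responded`) are hypothetical:

```python
import pandas as pd

# Hypothetical follow-up data: one row per baseline respondent, with a
# treatment-assignment dummy and an indicator for responding at follow-up.
df = pd.DataFrame({
    "treated":   [1, 1, 1, 0, 0, 0, 1, 0],
    "responded": [1, 0, 1, 1, 1, 0, 1, 1],
})

# Attrition rate = share of the baseline sample lost at follow-up.
attrition = 1 - df.groupby("treated")["responded"].mean()

# The quantity in the figure: the absolute treatment-control difference.
differential = abs(attrition.loc[1] - attrition.loc[0])
print(f"control attrition:     {attrition.loc[0]:.1%}")
print(f"treatment attrition:   {attrition.loc[1]:.1%}")
print(f"absolute differential: {differential:.1%}")
```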


Why is this?
This raised a bunch of questions for me, including:

  • Is this just publication bias? – good journals might not publish studies where the treatment-control difference is really high
  • Does this come from including a lot of studies with administrative data? – we usually won’t expect attrition rates to differ if using administrative data (so long as treatment does not affect entry into the administrative data, as can be the case with e.g. labor treatments that affect whether workers enter into the social security records).
  • Does it largely reflect overall response rates being high, so there is not much room to move? If you are in a setting where almost everyone responds easily (so overall response rates, even for the control group, are 95%+), then there isn't much room for a large difference.
  • Does the timing of follow-up matter and how many surveys have already been taken? Perhaps people don’t hold a grudge for long about missing out on treatment, or perhaps in the short-run they are eager to answer because they somehow think they might still get treated, and it is only after long periods of time or multiple survey rounds that the gap opens up.
  • Maybe people didn’t even know (or care) they were being treated? While it may be obvious to people whether they get a big grant or a training program or not, for more subtle treatments (e.g. getting offered a new interest rate, or a particular text message or advertising offer), people may not even realize if they have missed out on treatment, so have no reason to be more likely to refuse surveys.
To explore these questions a bit, I took my last 5 years of impact evaluations (either published post-2014, or currently in working paper stage). These are experiments on firms (grants, training, formalization assistance, wage subsidies), workers (vocational training, migration assistance, wage subsidies, matching assistance) and on financial education, plus one RD study on grants. This yielded 57 rounds of surveys from 21 different impact evaluation papers covering 19 countries. Nine of these were published (or in R&R status) in the 9 journals Ghanem et al. focus on. In all cases I focus only on survey data, and when multiple treatments were used, I choose the treatment with the biggest attrition rate difference relative to the control group, to provide an upper bound on the problem. Exploring my thoughts above:
  1. Probably not a big publication bias effect: the mean absolute attrition differential between treatment and control groups is 3.1% both for papers published in the 9 journals they consider and for papers published in other journals or not yet published.
  2. Higher control group attrition rates ARE associated with higher differential rates: The figure below plots the control minus treatment attrition rate against the control attrition rate. Note there are a few negative values (where the treated are actually less likely to respond). A regression of the absolute differential rate on the control attrition rate has a coefficient of 0.13 (p=0.003, clustered at the study level) – so a 10 percentage point increase in the control group attrition rate is associated with a 1.3 percentage point increase in the differential rate.
[Figure: control minus treatment attrition rate, plotted against the control group attrition rate]
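A minimal sketch of that kind of regression in Python with statsmodels, using a synthetic stand-in for the actual study-level data (the variable names `abs_diff`, `control_rate`, and `study_id` are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data: one row per survey round,
# with a study identifier for clustering standard errors.
rng = np.random.default_rng(0)
rounds = pd.DataFrame({
    "study_id": np.repeat(np.arange(20), 3),
    "control_rate": rng.uniform(0.02, 0.40, 60),
})
rounds["abs_diff"] = 0.13 * rounds["control_rate"] + rng.normal(0, 0.01, 60)

# OLS of the absolute differential attrition rate on the control
# attrition rate, with standard errors clustered at the study level.
fit = smf.ols("abs_diff ~ control_rate", data=rounds).fit(
    cov_type="cluster", cov_kwds={"groups": rounds["study_id"]})
print(fit.params["control_rate"], fit.pvalues["control_rate"])
```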

  3. Treatment-control differential attrition rates are NOT associated with either the length of time between treatment and surveying or the number of survey rounds. This is seen in the graph below (regressions have p-values of 0.54 for time since treatment and 0.47 for survey round).

[Figure: differential attrition rate against time since treatment and against survey round]

  4. Does it matter what the treatment was? I have a lot of different types of interventions here, and can't separate this completely from effects of country and other factors. But the differential rates do seem to be highest when the treatment group are given large grants and the control group is not – the highest positive differentials among my studies come from my matching grants ($10,000) evaluation in Yemen (10%), large grants ($660,000) to research consortia in Poland (10%), and the first survey round of my large grants ($50,000) to firms in a business plan competition in Nigeria paper (9.5%). These are programs where the value of the treatment is large and clear, so missing out might be really disappointing. In contrast, differential attrition rates are really low in the financial education, wage subsidy, vocational training, and macroinsurance evaluations, where treatment effects are often very small. The figure below shows this relationship against the log value of the program in USD – with a slope coefficient of 0.017 (p=0.000) – so moving from a program worth $100 to one worth $10,000 is associated with a 3.4 percentage point increase in the treatment-control response gap.
[Figure: treatment-control differential attrition rate against log program value (USD)]
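The arithmetic behind that last step, assuming the program value enters in log base 10 (which is what makes a coefficient of 0.017 consistent with the quoted 3.4 percentage points):

```python
import math

slope = 0.017  # coefficient per unit of log10(program value in USD)
log_change = math.log10(10_000) - math.log10(100)  # = 2 log10 units
print(slope * log_change)  # 0.034, i.e. a 3.4 pp wider response gap
```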

That is, it may be precisely the most effective treatments that are most at risk of differential attrition making it hard to measure this effectiveness. The good news, at least, is that even in these cases a bit more effort can often dramatically reduce attrition rates – this was the case in Nigeria, where, after a big treatment-control gap in round 1 (the top point at log(50K) in the figure above), additional effort reduced the gap in subsequent rounds (the other points for log(50K)).
 
Final reminder: while the focus of this post is on the differential attrition rate between treatment and control groups, it is neither the case that equality of attrition rates is enough to stop worrying about attrition (the attritors might still be selected differently in the two groups), nor that a treatment-control gap in response rates must doom all hopes of evaluation (attrition may still be unrelated to the outcome of interest).
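To make both caveats concrete, here is a sketch of two generic diagnostics on simulated data. These are standard checks, not the specific tests proposed by Ghanem, Hirshleifer and Ortiz-Becerra, and all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical baseline sample with treatment assignment, a follow-up
# attrition indicator, and a baseline outcome.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "treated": rng.integers(0, 2, 500),
    "baseline_outcome": rng.normal(0, 1, 500),
})
df["attrited"] = (rng.random(500) < 0.10 + 0.02 * df["treated"]).astype(int)

# Check 1: is the attrition rate differential by treatment arm?
print(smf.ols("attrited ~ treated", data=df).fit(cov_type="HC1").params)

# Check 2: even with similar rates, are attritors selected differently?
# Compare baseline outcomes of attritors across arms.
attritors = df[df["attrited"] == 1]
print(smf.ols("baseline_outcome ~ treated", data=attritors)
      .fit(cov_type="HC1").params)
```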

 

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
