External validity as seen from other quantitative social sciences - and the gaps in our practice


For impact evaluation to inform policy, we need to understand how the intervention will work in the intended population once implemented. However, impact evaluations are not always conducted in a sample representative of the intended population, and sometimes they are not conducted under the implementation conditions that would exist at scale-up. Last week I blogged about a clear example of this challenge as presented in a paper by Hunt Allcott. After reviewing the relative handful of work in economics on this topic (at least that I could find), I was left curious about how other disciplines approach these external validity challenges. So I spent some time reviewing the literature from epidemiology, public health, and public policy. The literature here was also somewhat sparse, but there are papers that grapple with these challenges and indeed have arrived at methods similar to those used in, say, this econ paper by Hotz, Imbens, and Mortimer.
 
First, a bit of a conceptual framework. Let’s think about the assessment of external validity as an assessment of how well the impact estimate (based on the trial sample) predicts the impact of the same intervention implemented at scale in the target population. In short, we care about Δ, defined as the difference between the average treatment effect estimated from the trial and the one that would be obtained in the population. If there is no difference – if the impact evaluation is fully externally valid – then Δ = 0. Yet Δ can be non-zero due to differences in characteristics between the sample and the population, if such characteristics also mediate the treatment impact. Δ may also be non-zero due to differences in program implementation between the small-scale trial and the program at scale. More formally, we can decompose Δ into its constituent components:
 
Δ = Δ_xo + Δ_xu + Δ_io + Δ_iu + interaction terms

Where Δ_xo and Δ_xu are the differences in observed and unobserved characteristics between the trial population and the target population, and Δ_io and Δ_iu are the differences in observed and unobserved implementation factors between the trial and implementation at scale. For any one impact evaluation to be readily generalizable, we would hope for all of these delta terms to be zero or close to it. But what if they aren’t?
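
To make the Δ_xo piece concrete, here is a minimal toy simulation – my own illustration, not drawn from any of the papers discussed below – in which the treatment effect varies with a single observed characteristic (call it age, purely hypothetically) and the trial over-samples older individuals relative to the target population. The trial-based estimate then differs from the population effect even though the trial itself is internally valid.

```python
# Toy illustration of a non-zero Δ_xo: the treatment effect varies with an
# observed characteristic ("age" here is hypothetical), and the trial sample
# over-represents one part of the target population.
import numpy as np

rng = np.random.default_rng(0)

def treatment_effect(age):
    """Hypothetical heterogeneous effect: larger for older individuals."""
    return 1.0 + 0.05 * (age - 40)

# Target population: ages roughly uniform between 20 and 60
population_ages = rng.uniform(20, 60, size=100_000)
# Trial sample: skewed toward older participants
trial_ages = rng.uniform(40, 60, size=1_000)

ate_population = treatment_effect(population_ages).mean()
ate_trial = treatment_effect(trial_ages).mean()

delta = ate_trial - ate_population  # driven entirely by Δ_xo in this toy setup
print(f"Population ATE: {ate_population:.3f}")
print(f"Trial ATE:      {ate_trial:.3f}")
print(f"Δ (trial - population): {delta:.3f}")
```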
 
A paper by Olsen, Orr, Bell, and Stuart doesn’t attempt to answer this question directly, but does seek to clarify the conditions under which Δ will be non-zero. They model external validity bias as a product of three factors:
  • the degree of variance in impact across sites in the population of interest
  • the coefficient of variation in the inclusion probability of sites across the population of interest
  • the correlation between site-specific impact and site inclusion probabilities in the population
If any of these three factors equals zero, then the bias will be zero. In other words, there will be no external validity bias if (1) the impact is the same in all sites, or (2) the probability of being included in the sample is the same in all sites, or (3) the site-specific impacts and site inclusion probabilities are uncorrelated with each other, even if either or both vary in the population. I find this a helpful thought experiment to clarify the conditions under which we have to worry about external confoundedness, but for the many impact evaluations that are implemented in one or two sites such conditions almost assuredly won’t hold.
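
As an aside, the product form of this result is easy to verify with a quick simulation. The sketch below – my own illustration with made-up numbers, not taken from Olsen et al. – defines the external validity bias as the inclusion-probability-weighted mean of site impacts minus the simple population mean, and shows that the bias disappears whenever impacts are constant across sites, inclusion probabilities are constant across sites, or the two are uncorrelated.

```python
# Sketch of the Olsen et al. logic: bias vanishes when any of the three
# factors (impact variation, inclusion-probability variation, their
# correlation) is zero. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 10_000

def bias(impacts, incl_probs):
    """Expected purposive-sample impact minus the population-average impact."""
    return np.average(impacts, weights=incl_probs) - impacts.mean()

# Case 1: impacts and inclusion probabilities both vary and are correlated
impacts = rng.normal(1.0, 0.5, n_sites)
probs = np.clip(0.2 + 0.3 * (impacts - 1.0) + rng.normal(0, 0.05, n_sites), 0.01, 1)
print("correlated case:     bias =", round(bias(impacts, probs), 4))          # non-zero

# Case 2: impacts identical across sites -> bias is zero
print("constant impacts:    bias =", round(bias(np.full(n_sites, 1.0), probs), 4))

# Case 3: equal inclusion probabilities across sites -> bias is zero
print("constant inclusion:  bias =", round(bias(impacts, np.full(n_sites, 0.3)), 4))

# Case 4: both vary but are independent -> bias is approximately zero
indep_probs = np.clip(rng.normal(0.3, 0.1, n_sites), 0.01, 1)
print("uncorrelated case:   bias =", round(bias(impacts, indep_probs), 4))
```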
 
A recent paper by Cole and Stuart explores techniques to extrapolate trial estimates to a larger population. Specifically, the two researchers standardize the observed results of one of the first ARV trials in the U.S. to a broader specified target population – the population of all HIV-positive U.S. residents. The original trial found clear mortality reductions from ARV therapy, but this original RCT of 1,150 patients enrolled a sample that was, by and large, older, whiter, and better educated than the overall population of HIV-positive individuals.
 
To adjust the trial results so that the impact estimate reflects the broader population, the researchers require values in this target population for the key characteristics that mediate the treatment effect but also vary between the study and target populations. In this study, the characteristics considered are sex, race, and age. The conditional probabilities of selection into the trial sample are then estimated as a function of these characteristics and used to reweight the estimated trial effect so that it reflects the target population. This process still finds overall mortality reductions, but an effect 12% smaller than that found in the trial.
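
A minimal sketch of this kind of re-weighting, in the spirit of the Cole and Stuart approach (inverse-probability-of-selection weighting) but with entirely hypothetical data frames and column names: stack the trial and target-population records, model trial membership as a function of the shared covariates, and then weight the trial observations by the inverse of their predicted selection probabilities before taking the treatment-control difference.

```python
# Illustrative inverse-probability-of-selection weighting; data frame and
# column names (trial_df, population_df, "y", "d") are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighted_ate(trial_df, population_df, covariates, outcome="y", treat="d"):
    """Standardize a trial ATE to a target population via selection weights."""
    # Stack trial and population rows, flagging trial membership
    stacked = pd.concat(
        [
            trial_df[covariates].assign(in_trial=1),
            population_df[covariates].assign(in_trial=0),
        ],
        ignore_index=True,
    )

    # Model the probability of being in the trial given observed characteristics
    # (sex, race, and age in the Cole and Stuart application)
    model = LogisticRegression(max_iter=1000).fit(
        stacked[covariates], stacked["in_trial"]
    )
    p_select = model.predict_proba(trial_df[covariates])[:, 1]

    # Weight trial observations by the inverse of their selection probability,
    # so groups under-represented in the trial count for more
    w = 1.0 / p_select

    y = trial_df[outcome].to_numpy()
    d = (trial_df[treat] == 1).to_numpy()

    return np.average(y[d], weights=w[d]) - np.average(y[~d], weights=w[~d])
```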
 
Similar to the above, a paper by Stuart, Cole, Bradshaw, and Leaf uses propensity scores to quantify the difference between trial participants and the target population. Once this is done, the scores can be used either to match or to weight control group outcomes to the population in order to assess how well these control group outcomes track the outcomes actually observed in the population. They use the example of a school-based intervention in the U.S.
 
In order to apply this approach, there must be sufficient overlap in the propensity scores for the sample and the target population – no method can help extrapolate trial results to a segment of the population if that segment is not observed at all in the trial. One measure of similarity between sample and population is simply the difference in the mean propensity score. While there is no magic threshold, differences of 0.25 standard deviations in mean propensity scores suggest that a large amount of extrapolation, perhaps unsubstantiated by the data, would be necessary.
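
For readers who want to operationalize this check, here is a short sketch of the overlap diagnostic, assuming the propensity-of-trial-membership scores have already been estimated (for example with a logistic model as in the previous sketch). The 0.25 figure is only the rule of thumb mentioned above, not a hard threshold.

```python
# Standardized difference in mean "trial membership" propensity scores,
# given arrays of estimated scores for the trial sample and the target
# population (both hypothetical inputs).
import numpy as np

def membership_score_gap(trial_scores, population_scores):
    """Difference in mean scores, in pooled standard-deviation units."""
    pooled_sd = np.concatenate([trial_scores, population_scores]).std()
    gap = (trial_scores.mean() - population_scores.mean()) / pooled_sd
    return gap  # gaps of around 0.25 or more signal heavy extrapolation
```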
 
Note that all of these papers discuss the importance of the study sample being sufficiently similar to the target population in order to ensure externally valid estimates. If this is not the case, then perhaps the impact estimates can be corrected through re-weighting. As such, these are all attempts to extrapolate impact estimates to a population when Δ_xo is non-zero. This speaks to the importance for impact evaluations of comprehensively measuring key mediating characteristics that may also vary with respect to the target population. Doing so in a comprehensive fashion also reduces the potential importance of the Δ_xu term, as fewer factors will be unobserved.
 
But what about the implementation factors, i.e. the Δ_io term? I couldn’t yet find anything that deals with implementation differences in a quantitative fashion. This is not for lack of recognition of the problem. For example, the Stuart et al. paper, which evaluates a school program, mentions that relatively little is known about the school-level moderators of the program impact. Some possible key moderators they mention include the school’s organizational capacity to implement the program, the principal’s support for the program, and the institutional motivations for participating in the program. However, factors such as these were not assessed, and in fact it’s not clear how best to measure them.
 
It is fairly clear, however, that with respect to implementation factors the external validity work is severely underdeveloped – for most studies there is no discussion of which implementation aspects matter the most and, from among these, which of them can be accurately measured. This is a major gap in our evaluative practice and a ripe area for future work.
 
So some take-away messages from this review, as well as from the handful of papers in econ that explore this topic:
  • To estimate possible bias from Δ_xo, follow an approach similar to explorations of internal validity bias by comparing characteristics of study sites to those of the target population
  • Where divergences exist, perhaps extrapolations can be improved by re-weighting
  • Ensure a sufficient number of sites whenever possible, and either select them randomly or, if you must select purposively, do so with the goal of broader representation
  • Devote additional effort and resources to recruiting sites that initially resist inclusion
  • Think hard about factors of implementation that vary between the trial and scale versions of the intervention, and develop suitable quantitative measures where possible

Authors

Jed Friedman

Lead Economist, Development Research Group, World Bank

stefano pagiola
June 03, 2014

I like your decomposition of the average treatment effect between the estimate from the trial and what would be obtained in the population. Of course, writing it that way immediately suggests that one could manipulate Δ_io so as to offset the effect of Δ_xo. But first we have to understand how Δ_io and Δ_xo work...

Jed Friedman
June 06, 2014

Stefano, thanks very much for the comment, and I agree with the possibility you suggest, although that implies a high level of project self-knowledge. I think we also definitely agree that we need to measure the relevant Δ_io and Δ_xo - we don't even do that yet very well....

Joseph
February 11, 2015

Sorry Jed, even your mathematical model is no match to the dynamics of external environments in which development interventions operate. Simply put, the existing tools and techniques used in ex-post development evaluations these days are no match to the dynamics of external environments. No wonder why managers of development interventions are hesitant to use ex-post development evaluation reports' recommendations. I have the impression that given such dynamics, development evaluation reports are already obsolete as soon as they are created. They are good for 'post-mortem analysis' rather than cure the 'disease' while the development intervention is still 'alive'.

Sean Muller
June 26, 2014

Hi Jed, further to my comment on your previous post I thought I would just add a link to my ABCDE paper here for those who are interested in the above ideas in more detail:
www.seanmuller.co.za/EV_RCTs_ABCDEdraft_Muller.pdf
(earlier version here: http://opensaldru.uct.ac.za/handle/11090/691)