external validity

Why similarity is the wrong concept for External Validity

David McKenzie
I’ve been reading Evidence-Based Policy: A Practical Guide to Doing It Better by Nancy Cartwright and Jeremy Hardie. The book is about how to use existing evidence to move from “it works there” to “it will work here”. I was struck by their critique of external validity as it is typically discussed.

Thinking about the placebo effect as a “meaning response” and the implication for policy evaluation

Jed Friedman

In recent conversations on research, I’ve noticed that we often get confused when discussing the placebo effect. The mere fact of positive change in a control group administered a placebo does not imply a placebo effect – the change could be due to simple regression to the mean.
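
To see how regression to the mean alone can mimic a treatment response, here is a minimal simulation sketch (Python, with made-up numbers, not data from any actual trial): an untreated group selected for extreme baseline values “improves” at follow-up even though nothing was administered.

```python
# Minimal sketch: apparent "improvement" in an untreated group driven purely
# by regression to the mean (hypothetical numbers for illustration only).
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
true_severity = rng.normal(50, 10, n)            # each person's stable underlying level
baseline = true_severity + rng.normal(0, 10, n)  # noisy baseline measurement
followup = true_severity + rng.normal(0, 10, n)  # independent noisy follow-up, no treatment

# Enroll only those who look worst at baseline, as trials often do
enrolled = baseline > 65

print(f"Mean at baseline (enrolled):  {baseline[enrolled].mean():.1f}")
print(f"Mean at follow-up (enrolled): {followup[enrolled].mean():.1f}")
# The follow-up mean is noticeably lower even though nothing was done:
# extreme baseline scores were partly noise, so scores drift back toward 50.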

Mind Your Cowpeas and Cues: Inference and External Validity in RCTs

Berk Ozler

There is a minor buzz this week on Twitter and in the development economics blogosphere about a paper (posted on the CSAE 2012 Conference website) that describes a double-blind experiment providing different cowpea seeds to farmers in Tanzania.

WEIRD samples and external validity

Jed Friedman

A core concern for any impact evaluation is the degree to which its findings can be generalized to other settings and contexts, i.e. its “external validity”. But of course external validity concerns are not unique to economic policy evaluation; in fact they are present (implicitly or explicitly) in any empirical research with prescriptive implications.

What the HIV prevention gel trial failure implies for trials in economics

Berk Ozler

For World AIDS Day, there is a sign at the World Bank stating that taking ARVs reduces the rate of HIV transmission by 96%. If this were last year, a sign somewhere might well have read “A cheap microbicidal gel that women can use up to 12 hours before sexual intercourse reduces HIV infection risk by more than half – when used consistently.” Sadly, it turns out, so much for that.

Moving from Internal to External Validity – and problems with Partner Selection Bias

David McKenzie

When done well, randomized experiments at least provide internal validity – they tell us the average impact of a particular intervention in a particular location with a particular sample at a particular point in time. Of course we would then like to use these results to predict how the same intervention would work in other locations or with other groups or in other time periods.

Teachers don’t matter says Nobel Laureate: A new study in Science, and why economists would never publish it…

David McKenzie

At a recent seminar someone joked that the effect size in any education intervention is always 0.1 standard deviations, regardless of what the intervention actually is. So a new study published last week in Science that reports a 2.5 standard deviation effect certainly deserves attention. And then there is the small matter of one of the authors (Carl Wieman) being a Nobel Laureate in Physics and a science advisor to President Obama.
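
For readers less used to effect sizes quoted “in standard deviations”, here is a small illustrative sketch with hypothetical test-score data; the standardized mean difference (often called Cohen’s d) is what this shorthand usually refers to.

```python
# Illustrative only: computing an effect size "in standard deviations"
# (Cohen's d) from made-up exam scores; not data from the Science study.
import numpy as np

def cohens_d(treated, control):
    """Difference in means divided by the pooled standard deviation."""
    nt, nc = len(treated), len(control)
    pooled_var = ((nt - 1) * np.var(treated, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treated) - np.mean(control)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
control = rng.normal(60, 15, 5_000)    # hypothetical exam scores, SD of 15 points
treated = rng.normal(61.5, 15, 5_000)  # true gain of 1.5 points, i.e. 0.1 SD

print(f"Effect size: {cohens_d(treated, control):.2f} standard deviations")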

It worked for you. Will it work for me?

Berk Ozler

Following David’s rant on external validity yesterday, which turned out to be quite popular, I decided to keep the thread going. Although the debate is often painted in ‘either/or’ terms, my feeling is that there are things careful researchers and evaluators can do to improve the external validity of their studies.

