
Submitted by M. Over on

Great post, Adam. I agree that when a literature review pertains to a topic in health economics or any social science, reliance on PubMed keyword searches is likely to mislead the reviewer. I like your idea of requiring such reviews to be posted for comment before they are set in stone. Health and medical journals are coming late to an appreciation of the value of quasi-experimental methods, but don't forget that propensity score matching, one of the favorite tools in the modern econometrician's impact evaluation toolkit, was developed outside economics, with its earliest applications in the medical sciences.
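To make that technique concrete, here is a minimal sketch of propensity score matching on simulated data; the variable names, the logistic propensity model, and the simple one-to-one nearest-neighbor matching are purely illustrative assumptions, not a reconstruction of any particular study.

```python
# A minimal, illustrative propensity score matching sketch on simulated data.
# Variable names and the 1-to-1 nearest-neighbor matching rule are assumptions
# made for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated covariates, a treatment whose probability depends on them,
# and an outcome with a true treatment effect of 2.0.
age = rng.normal(50, 10, n)
income = rng.normal(30, 5, n)
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 50) + 0.1 * (income - 30))))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 0.3 * age + 0.5 * income + rng.normal(0, 5, n)

# Step 1: estimate propensity scores with a logistic regression.
X = np.column_stack([age, income])
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: for each treated unit, find the untreated unit with the closest score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = control_idx[
    np.abs(pscore[control_idx][None, :] - pscore[treated_idx][:, None]).argmin(axis=1)
]

# Step 3: average treatment effect on the treated (ATT) from the matched pairs.
att = (outcome[treated_idx] - outcome[matches]).mean()
print(f"Estimated ATT: {att:.2f} (true effect is 2.0)")
```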

Also, including an econometrician is not enough, since two equally well-qualified econometricians can reasonably disagree when ranking the internal (not to mention the external) validity of any two quasi-experimental studies.

Since you cite my colleague David Roodman, let me mention an alternative way of reviewing a set of papers which apply quasi-experimental methods to infer a causal impact: replication. When I taught econometrics at Williams College, I required students to replicate the results in a published journal article. This required that they ask the original authors for their data and, sometimes, their code, so that the student could ideally reproduce the published results and then test them for robustness. As a professional economist, I was shocked at how many authors (a) failed to answer queries from my students; or, if they answered, (b) failed to provide the data in a usable format. And when authors did respond and provide data, my students not infrequently failed to replicate the exact same regressions the authors had run. Or they discovered a transcription error or a sign reversal the author had apparently missed. Or they found that the results were extraordinarily vulnerable to a single questionable assumption.
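To illustrate what such a student exercise might look like in practice, here is a hypothetical sketch: re-running a published specification from the authors' shared data, then probing how fragile the key coefficient is to a single assumption (here, dropping one questionable control). The file name, variable names, and specifications are made up for the example.

```python
# Hypothetical sketch of a student replication exercise: re-run a published
# regression from the authors' shared data, then probe its fragility.
# The file name, variable names, and specifications are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Data as provided by the original authors (hypothetical file).
df = pd.read_csv("authors_replication_data.csv")

# Step 1: replicate the published specification and compare the key coefficient
# against the figure reported in the paper's results table.
published = smf.ols("outcome ~ program + age + income + region", data=df).fit()
print(published.summary().tables[1])

# Step 2: a simple robustness probe -- does the estimated program effect survive
# dropping a single questionable control variable?
robustness = smf.ols("outcome ~ program + age + income", data=df).fit()
print("Published coefficient on program:", round(published.params["program"], 3))
print("Coefficient without 'region':   ", round(robustness.params["program"], 3))
```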

So my suggestion: for a real review of quasi-experimental impact evaluations, the job should be crowd-sourced to econometrics students throughout the world! The review organizer and lead author would compile the results from all these student replication papers and note whether the results were replicable at all, and how fragile they were. Authors who failed to provide their data and code would be downgraded or excluded altogether. Now THAT would be a good systematic review of impact evaluation studies.