Published on Let's Talk Development

Should you trust a medical journal?

While we non-physicians may feel a bit peeved when we hear “Trust me, I’m a doctor”, our medical friends do seem to have evidence on their side. GfK, apparently one of the world’s leading market research companies, has developed a GfK Trust Index, and yes, it finds that doctors are among the most trusted professions, behind postal workers, teachers and the fire service. World Bank managers might like to know that bankers and (top) managers come close to the bottom, just above advertising professionals and politicians.

Given the trust doctors enjoy, the recent brouhaha over allegations of low quality among some of the social science articles published in medical journals must be a trifle embarrassing to the profession. Here’s the tale so far, plus a cautionary note about a recent ‘systematic review’.

Did the Millennium Villages Project really cut child mortality?
The first incident occurred in May this year. On May 10, Gabriel Demombynes, a World Bank staffer, wrote a post on the Bank’s Development Impact blog critical of an article published in The Lancet. The article – by Paul Pronyk, Jeff Sachs and others at Columbia University’s Earth Institute – claimed that the UN Millennium Villages Project (MVP) had accelerated the rate of decline of child mortality.

Demombynes wasn’t convinced. He argued that the calculations had been done wrong, and that the comparisons were wrong too: doing the math right led to the opposite conclusion – under-5 mortality had declined less quickly in MVP sites than it had nationally in the countries where the sites are located. The Economist waded into the fray on May 14, siding with Demombynes. On May 18 the Lancet published an online erratum by Pronyk acknowledging the article’s mistakes, retracting the mortality claim, and promising in future to “invite an independent panel of experts, including critics of the project, to participate in scrutinizing the vital events and survey data and in assessing their validity.”
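Part of what was at stake is something as mundane as how an annual rate of decline is computed and what it is compared against. Here is a minimal sketch – with purely hypothetical numbers, not the MVP or national data – of how a compound (geometric) annual rate of decline in under-5 mortality is calculated over matched periods:

```python
# Purely hypothetical numbers -- not the MVP paper's data or Demombynes's.
def annual_decline(m_start, m_end, years):
    """Average annual rate of decline, computed as compound (geometric) change,
    not by naively dividing the total change by the number of years."""
    return 1.0 - (m_end / m_start) ** (1.0 / years)

# For a fair comparison, site and national rates must cover the same window.
site = annual_decline(120.0, 100.0, 3)      # under-5 deaths per 1,000, over 3 years
national = annual_decline(110.0, 90.0, 3)   # same 3-year window nationally

# In this made-up example the national decline is actually the faster one.
print(round(site, 3), round(national, 3))
```

The point of the sketch is only that both the averaging formula and the choice of comparison window matter; get either wrong and a site can appear to outperform a national trend it is in fact lagging.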

The tone of the Lancet editor’s response, by contrast, was highly defensive, and it and his subsequent tweets caused a firestorm of protest. Development blogger Roving Bandit wrote “I've being trying to think of a polite way of expressing my total dismay and despair at the tripe written by the Lancet editors in response to the retraction”, and on AidWatch Bill Easterly and Laura Freschi collated other instances where medical journals had published what turned out to be poor-quality social science. Bill also offered a cynical but rather humorous flow chart to help researchers decide whether to publish their work in an economics or a medical journal.

Is development assistance to health really that fungible?
The second incident dates back to 2010, when The Lancet published an article on fungibility in development assistance to health (DAH). The article – by Chunling Lu, Dean Jamison, Chris Murray and others at the University of Washington’s Institute for Health Metrics and Evaluation (IHME) – claimed that “DAH to government had a negative and significant effect on domestic government spending on health such that for every US$1 of DAH to government, government health expenditures from domestic resources were reduced by $0.43 (p=0) to $1.14 (p=0).” In the long run, they claimed, the figure was at the top of that range – $1.14. That’s a pretty strong claim, and unsurprisingly it was picked up by the New York Times and USA Today.

David Roodman at the Center for Global Development wasn’t convinced by the methods the study used. He isn’t a casual observer – he wrote a widely used Stata program that makes those methods easy to apply. Roodman expressed concerns at the time on his blog and requested the data and computer code so he could investigate. It wasn’t until January 2012, however, that IHME agreed. Roodman did some checks and tests, and found that the instrumental variables – or “instruments” – used to rule out reverse causality were invalid. He concluded in his letter to the Lancet, published last Saturday, that “the analysis by Lu and colleagues does not support the confident claim that health aid is largely displaced.” Roodman is quick to acknowledge that aid may well be fungible – it’s just that the paper doesn’t show so credibly.
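For readers less familiar with the technique at issue: instrumental variables replace an endogenous regressor with its projection onto variables (“instruments”) assumed to affect the outcome only through that regressor, and with more instruments than endogenous regressors one can run an over-identification test of the broad kind used to probe instrument validity. Below is a minimal sketch on simulated data – nothing here is Lu and colleagues’ model or Roodman’s code, and all numbers are made up:

```python
# Illustrative only: two-stage least squares (2SLS) plus a Sargan
# over-identification statistic, on simulated data with a known true effect.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# z1, z2 are valid instruments; u is an unobserved error driving both x and y,
# which is what makes plain OLS biased (the "reverse causality" style problem).
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 1.0 * z1 + 0.5 * z2 + u + rng.normal(size=n)   # endogenous regressor
y = 2.0 * x - 1.5 * u + rng.normal(size=n)         # true coefficient on x is 2

def tsls(y, x, instruments):
    """2SLS: regress x on the instruments, then y on the fitted values.
    Also returns the Sargan over-identification statistic (chi2, df = 1 here)."""
    m = len(y)
    Z = np.column_stack([np.ones(m)] + list(instruments))
    X = np.column_stack([np.ones(m), x])
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # project X onto span(Z)
    beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]
    resid = y - X @ beta
    # Sargan: m * R^2 from regressing the 2SLS residuals on the instruments;
    # a large value is evidence against instrument validity.
    fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
    sargan = m * (fitted @ fitted) / (resid @ resid)
    return beta, sargan

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
beta_iv, sargan = tsls(y, x, [z1, z2])

# OLS is biased downward by the shared error u; 2SLS lands close to 2,
# and with valid instruments the Sargan statistic stays small.
print(beta_ols[1], beta_iv[1], sargan)
```

The design choice the test exploits is exactly the one Roodman probed: whether the instruments are correlated with the structural residuals, which they should not be if the exclusion restriction holds.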

In the original draft of his letter, Roodman took the Lancet to task over its refereeing of economics and econometrics articles, and over its failure to require that authors make their data available for replication. That text was axed by the Lancet, but you can read it on Roodman’s blog.

What are the impacts of health insurance in poor countries?
I have written a little about this topic (particularly in Asia), and it’s a topic of considerable concern given the current push toward universal health coverage. I was therefore interested to see that the Bulletin of the World Health Organization has just published a systematic review of the topic, albeit focusing on low- and lower middle-income countries in Africa and Asia.

I like the idea of systematic reviews, because they force reviewers to declare their search methods and their quality-assessment criteria. That should stop unscrupulous authors from performing a selective review of the literature to justify their particular stance on a subject. Imagine my surprise, then, to find that not only were my studies of China and Vietnam excluded from the review, but so too were several other papers in the field that I’m either familiar with or that were included in the excellent review by Ursula Giedion and Beatriz Yadira Díaz. In a quick trawl, I counted around 20 studies missing from the review – all listed below for those interested, most from economics journals.

I’m not sure how successful each study in my list below is at getting a credible estimate of the insurance impact, given the huge challenges that selection bias poses for the identification of insurance effects – see the excellent review by Helen Levy and David Meltzer. But then I’m not sure how well the studies that were included in the review do in this regard either. The authors noted that few studies used a randomized design, but they don’t seem to be familiar with the tools that econometricians and statisticians use to reduce selection bias, such as double differencing, fixed effects, propensity score matching, and instrumental variables. The paper therefore doesn’t give the reader much sense of how credible the estimates are. The selection of papers that are included is also rather odd: they include many book chapters (including some from the World Bank) that weren’t subject to anonymous peer review, and many papers that aren’t actually impact evaluations at all!
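To illustrate the first of those tools: double differencing compares the change over time in an insured group with the change in an uninsured comparison group, netting out both fixed baseline differences (selection on levels) and a common time trend. Here is a toy sketch with simulated, entirely hypothetical data – not drawn from any of the studies above:

```python
# Illustrative only: a toy difference-in-differences (double-differencing)
# estimate of a hypothetical insurance effect on household health spending.
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # households per group

# Selection bias: households that take up insurance start from a different
# baseline than those that don't, and everyone shares a common time trend.
base_insured = 100 + rng.normal(0, 10, n)    # higher-spending households enroll
base_uninsured = 80 + rng.normal(0, 10, n)
trend, effect = 5.0, -12.0                   # common trend; true insurance effect

pre_t, post_t = base_insured, base_insured + trend + effect + rng.normal(0, 10, n)
pre_c, post_c = base_uninsured, base_uninsured + trend + rng.normal(0, 10, n)

# A naive post-period comparison mixes the effect with the baseline gap.
naive = post_t.mean() - post_c.mean()

# Double differencing removes both the baseline gap and the common trend,
# recovering something close to the true effect of -12.
did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print(round(naive, 1), round(did, 1))
```

The sketch also shows why the naive estimate in this setup has the wrong sign: the pre-existing gap between the groups swamps the true negative effect, which is precisely the problem the tools listed above are designed to address.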

Some thoughts
I don’t think all this should dent our trust in doctors as doctors. But it does suggest that medical journals are having some problems judging the quality of empirical social science. I wonder whether the practice of barring authors from first bringing out a working-paper version may contribute to the problem. This isn’t clear-cut, because, as Berk Ozler reminds us, working papers aren’t working the way they were supposed to either. Adoption of an open-data policy would probably help.

But at the end of the day, it’s the editorial and peer-review process that needs strengthening. The good news is that from personal experience I can reassure readers that it’s not a question of starting from scratch. I’ve submitted twice to the Lancet with a 50% success rate. I concede that my second submission was rather shoddy in the methods department, and I would never have submitted it to an economics journal. Surprisingly perhaps in the light of the MVP and fungibility fiascos, the paper was rejected, with one of the referees writing “If this were an economics journal, the paper would be rejected out of hand for not being well 'identified'. I don't see why the Lancet should have lower standards.” Naturally I was a bit disappointed at the time – who loves a rejection letter? – but it was absolutely the right call. So here’s a suggestion to the Lancet editors: find out who the referee was, and offer her or him a job!

Excluded articles of relevance to the systematic review of health insurance impacts
Aggarwal, A. (2010). "Impact Evaluation of India's 'Yeshasvini' Community-Based Health Insurance Programme." Health Economics 19: 5-35.
Brown, P. H. and C. Theoharides (2009). "Health-Seeking Behavior and Hospital Choice in China's New Cooperative Medical System." Health Economics 18(S2): S47-64.
Chen, Y. and G. Z. Jin (2012). "Does Health Insurance Coverage Lead to Better Health and Educational Outcomes? Evidence from Rural China." Journal of Health Economics 31(1): 1-14.
Das, J. and J. Leino (2011). "Evaluating the RSBY: Lessons from an Experimental Information Campaign." Economic & Political Weekly 46(32): 85.
Ekman, B. (2007). "Catastrophic Health Payments and Health Insurance: Some Counterintuitive Evidence from One Low-Income Country." Health Policy 83(2-3): 304-13.
Giedion, U. and B. Y. Díaz (2011). "A Review of the Evidence." In M. L. Escobar, C. Griffin and R. P. Shaw (eds.), The Impact of Health Insurance in Low- and Middle-Income Countries. Washington, DC: Brookings Institution Press.
Hidayat, B., H. Thabrany, H. Dong and R. Sauerborn (2004). "The Effects of Mandatory Health Insurance on Equity in Access to Outpatient Care in Indonesia." Health Policy and Planning 19(5): 322-35.
Jowett, M., A. Deolalikar and P. Martinsson (2004). "Health Insurance and Treatment Seeking Behaviour: Evidence from a Low-Income Country." Health Economics 13(9): 845-57.
Jowett, M., P. Contoyannis and N. Vinh (2003). "The Impact of Public Voluntary Health Insurance on Private Health Expenditures in Vietnam." Social Science & Medicine 56(2): 333-42.
Lei, X. and W. Lin (2009). "The New Cooperative Medical Scheme in Rural China: Does More Coverage Mean More Service and Better Health?" Health Economics 18(S2): S25-46.
Nguyen, H. T. H., Y. Rajkotia and H. Wang (2011). "The Financial Protection Effect of Ghana National Health Insurance Scheme: Evidence from a Study in Two Rural Districts." International Journal for Equity in Health 10(1): 4.
Quimbo, S. A., J. W. Peabody, R. Shimkhada, J. Florentino and O. Solon (2011). "Evidence of a Causal Link between Health Outcomes, Insurance Coverage, and a Policy to Expand Access: Experimental Data from Children in the Philippines." Health Economics 20(5): 620-30.
Sepehri, A., W. Simpson and S. Sarma (2006). "The Influence of Health Insurance on Hospital Admission and Length of Stay: The Case of Vietnam." Social Science & Medicine 63(7): 1757-70.
Wagstaff, A. and M. Lindelow (2008). "Can Insurance Increase Financial Risk? The Curious Case of Health Insurance in China." Journal of Health Economics 27(4): 990-1005.
Wagstaff, A. (2010). "Estimating Health Insurance Impacts under Unobserved Heterogeneity: The Case of Vietnam's Health Care Fund for the Poor." Health Economics 19(2): 189-208.
Wagstaff, A., M. Lindelow, G. Jun, X. Ling and Q. Juncheng (2009). "Extending Health Insurance to the Rural Population: An Impact Evaluation of China's New Cooperative Medical Scheme." Journal of Health Economics 28(1): 1-19.
Wagstaff, A., W. Yip, M. Lindelow and W. C. Hsiao (2009). "China's Health System and Its Reform: A Review of Recent Studies." Health Economics 18(S2): S7-23.
Wang, H., W. Yip, L. Zhang and W. C. Hsiao (2009). "The Impact of Rural Mutual Health Care on Health Status: Evaluation of a Social Experiment in Rural China." Health Economics 18(S2): S65-82.
Xu, K., D. B. Evans, P. Kadama, J. Nabyonga, P. O. Ogwal, P. Nabukhonzo and A. M. Aguilar (2006). "Understanding the Impact of Eliminating User Fees: Utilization and Catastrophic Health Expenditures in Uganda." Social Science & Medicine 62(4): 866-76.
Zhou, Z., J. Gao, Q. Xue, X. Yang and J. Yan (2009). "Effects of Rural Mutual Health Care on Outpatient Service Utilization in Chinese Village Medical Institutions: Evidence from Panel Data." Health Economics 18(S2): S129-36.


Adam Wagstaff

Research Manager, Development Research Group, World Bank
