Nobody likes to be stung. Doctors regard it as unethical. Publishers say it betrays the trust of their profession. But the fact is, as three recent studies have demonstrated, sting operations can be extremely effective at exposing questionable professional practices and at answering questions that other methods can't credibly answer.
Sting #1: Are open-access journals any good?
Much of the world has gotten fed up with the old academic publishing business. Companies like the Anglo-Dutch giant Elsevier and the German giant Springer earn high profit margins from their academic journals (Elsevier's margin is 36%, according to The Economist), through a mix of 'free' inputs from academics (the article itself and the peer-review process) and high (and rapidly rising) subscription charges that impede access by academics working in universities whose libraries can't afford the subscriptions. Of course, many of these universities paid for the authors' time in the first place, and/or that of the peer reviewers; taxpayers also contributed, by directly subsidizing universities and/or through the research grants that supported the research assistants, labs, etc. Unsurprisingly, libraries, universities, academics and taxpayers aren't happy.
Two 'open-access' solutions emerged in response. In the 'Green model', authors continue to publish in traditional journals, but universities and other institutions set up repositories (the World Bank has its Open Knowledge Repository) into which authors deposit the accepted, pre-typeset version of their paper, which others can access for free, usually after an embargo period that lets the publisher recoup its costs. The 'Gold model', by contrast, involves authors paying the publisher hundreds if not thousands of dollars to have their article published and made freely available to everyone immediately. While some traditional journals offer authors open access for a fee, the big growth has been in dedicated open-access journals.
Skeptics of the Gold model worry about unscrupulous publishers entering the market and focusing on profit at the expense of quality, exploiting the vulnerability of academics who would rather publish than perish.
Testing this predatory-behavior hypothesis through an observational study would be hard. The submission process is not public, and if the skeptics are right, unscrupulous publishers aren't going to let a researcher into their archives to check their work. And to do it properly, each submission would have to be audited by someone expert in the field; the fact that each paper is (supposedly, at least) different would make such an audit hard to implement and its results hard to interpret.
A sting operation gets round these problems. In his open-access sting operation published this October in Science and featured on NPR (h/t Julie McLaughlin), John Bohannon crafted a fatally flawed paper claiming a cancer-therapy discovery, one so riddled with holes that a good editor wouldn't even have bothered sending it out for peer review. Bohannon couldn't credibly invent an institution or authors in a developed country (his paper would be rumbled straight away), so his 'authors' had plausible-sounding African names and came from plausible-sounding African academic institutions. He threw in some English errors for added realism. He also couldn't submit exactly the same paper to every journal, so he 'replicated' his fake paper multiple times: he varied the name of the molecule, the lichen species the molecule was supposed to come from, and the cancer cell line the molecule was supposed to inhibit; he also varied the names of the authors and their institutions. He then submitted his 'paper' to 304 dedicated open-access journals over an 8-month period.
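To get a feel for the mechanics, here's a minimal sketch, in Python, of how such templated variants might be churned out. Everything in it is invented for illustration — the word lists, the author and institution names, the template — and it is not Bohannon's actual generator, which this post doesn't describe.

```python
import itertools
import random

# Invented word lists -- stand-ins for the molecule names, lichen species,
# and cancer cell lines that Bohannon rotated across his submissions.
MOLECULES = ["compound X", "compound Y", "compound Z"]
LICHENS = ["Cladonia exemplaris", "Usnea fictiva", "Parmelia imaginaria"]
CANCER_CELLS = ["cell line A", "cell line B", "cell line C"]
AUTHORS = ["A. Mwangi", "O. Diallo", "F. Okafor"]  # invented names
INSTITUTES = ["Lakeview Institute of Medicine", "Savanna Biomedical College"]  # invented

ABSTRACT_TEMPLATE = (
    "{molecule}, extracted from the lichen {lichen}, inhibits the growth "
    "of {cells} at low doses. Submitted by {author}, {institute}."
)

def make_variants(n, seed=1):
    """Return n distinct fake-paper abstracts by permuting the slots."""
    rng = random.Random(seed)
    combos = list(itertools.product(MOLECULES, LICHENS, CANCER_CELLS))
    rng.shuffle(combos)
    return [
        ABSTRACT_TEMPLATE.format(
            molecule=m, lichen=l, cells=c,
            author=rng.choice(AUTHORS), institute=rng.choice(INSTITUTES),
        )
        for m, l, c in combos[:n]
    ]

for abstract in make_variants(3):
    print(abstract)
```

The point of the permutation is that no two journals receive an identical manuscript, while every variant retains exactly the same fatal scientific flaws.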
The results look like confirmation of the predatory-behavior hypothesis. Over half of the journals (157) accepted the paper: 16 after substantial peer review, 59 after superficial peer review, and 82 after no peer review at all. Only 36 returned peer-review comments that recognized any of the paper's scientific problems; in 16 of those cases, the journal accepted the paper anyway. Nearly half the targeted journals listed in the Directory of Open Access Journals accepted the paper. Around one third of the targeted journals are based in India; of these, 64 accepted the article and only 15 rejected it. And while many of the journals are based in the developing world, some are published by reputable Western publishers: journals published by Elsevier, Wolters Kluwer, and Sage all accepted the fake paper.
Sting #2: Just how bad is medical care quality in India?
You often hear people expressing concerns about the quality of medical care in developing countries (and in developed ones too). But much of the 'evidence' is anecdotal, and some of it is tinged by ideology — public-sector and private-sector advocates both lament the poor quality of care in their ideological opponent's sector.
Getting at quality through observation is hard. Doctors may 'pull their socks up' when they know they're being observed (the Hawthorne effect). And patients may learn about the quality of different providers and risk the poorer-quality ones only for minor conditions, creating a selection-bias problem.
The sting operation gets round these problems. My colleague Jishnu Das spent 150 hours training each of 22 Indians to credibly fake three conditions: unstable angina; asthma; and dysentery in a fictitious child who had been left at home. These actors were then sent into the consulting rooms of 305 medical providers. The doctors had been told beforehand that they would get some fake patients visiting, but not when; only doctors in Delhi correctly spotted any, and even then they spotted fewer than 1% of them.
I've blogged elsewhere about Das's work, which appeared last December in Health Affairs. What he and his coauthors found was alarming. In only one third of the fake-patient interactions did the provider ask all the essential questions and do all the essential exams. In only one third of cases in rural Madhya Pradesh (MP) did the provider give a diagnosis at all, and only 12% of those diagnoses were fully correct. Providers in Delhi did a little better, but managed only a 22% fully-correct diagnosis rate. Unsurprisingly, the rate at which providers prescribed the right treatment was unimpressive: 30% in MP and 46% in Delhi. Amazingly, while unqualified providers in both MP and Delhi asked fewer of the recommended questions and did fewer of the recommended exams, they were no less likely to prescribe the correct treatment. And while private providers were significantly more likely to ask the right questions and do the right exams, they didn't do any better in terms of prescribing the right treatment.
Sting #3: Do doctors induce demand, or are they just being considerate?
You might have noticed that there's a global push going on right now, in the United States and across the developing world, for universal health coverage, or UHC. In practice this means expanding existing health coverage/insurance schemes to hitherto excluded groups, and/or deepening coverage by covering more procedures and/or reducing co-payments. The idea in each case is to reduce the amount people pay out-of-pocket for their health care.
What could possibly go wrong? Well, as Magnus Lindelow, our two Chinese co-authors, and I found out, quite a bit. When we looked at China's UHC efforts in the mid-2000s, we found that as coverage increased, people didn't in fact end up paying less, and in some cases ended up paying more. We attributed this to the fact that China's doctors are paid on a fee-for-service basis, with highly profitable rates for high-tech care and drugs and below-cost rates for 'basic care'. We speculated that doctors reacted to insurance by delivering more expensive care, so they ended up with higher profits, and patients ended up paying at least as much out-of-pocket as before.
What we couldn't say was whether the more expensive care was medically justified. Detecting this through an observational study would be hard, because we'd want to compare the effects of giving someone insurance when the patient faces two types of doctor: one who gains financially from whatever care is prescribed, and another who doesn't. And we'd want to hold the patient's condition constant, so we'd be comparing like with like.
Once again a sting operation looks like a promising way forward. In a forthcoming paper in the Journal of Development Economics, Fangwen Lu trained herself and an assistant to act the part of a family member seeking medical advice for an absent patient. (In China, it seems, it's not uncommon, especially if you're quite sick, to have a family member go and consult the doctor on your behalf.) One 'patient' had hypertension and was already taking a brand-name drug, but his blood pressure was still abnormal. A second 'patient' had recently received test results showing elevated triglycerides, high blood sugar, and high blood pressure; this 'patient' was not yet taking any medication, and the triglyceride level in the test results was not high enough to warrant medication. Each of the two 'patients' notched up roughly 100 consultations, with Lu and her assistant dividing the consultations evenly between them.
The really neat feature of the study is that Lu randomly varied, across the fake consultations, two pieces of information passed on to the doctor: (i) whether the patient had insurance; and (ii) whether the patient would buy any prescribed drugs from the hospital's pharmacy, in which case the doctor would get a cut, or from somewhere else, in which case the doctor wouldn't gain financially.
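Here's a toy sketch of that 2×2 randomization in Python. The fifty-fifty assignment probabilities and the visit count are my assumptions for illustration, not details taken from Lu's protocol.

```python
import random
from collections import Counter

rng = random.Random(2013)  # fixed seed so the assignment is reproducible

N_VISITS = 200  # roughly 100 consultations per fake 'patient'

# Each consultation independently randomizes the two signals given to the doctor:
#   insured   -- the 'patient' says the absent family member has insurance
#   incentive -- drugs will be bought at the hospital pharmacy, earning the doctor a cut
visits = [
    {"insured": rng.random() < 0.5, "incentive": rng.random() < 0.5}
    for _ in range(N_VISITS)
]

# Tally the four cells of the 2x2 design
cells = Counter((v["insured"], v["incentive"]) for v in visits)
for (insured, incentive), count in sorted(cells.items()):
    print(f"insured={insured!s:<5} incentive={incentive!s:<5} -> {count} visits")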
If doctors prescribe more to insured patients even when they face no financial incentive to do so, they're doing so because they think the insurance allows the patient to afford the drugs they really need; they're being 'considerate', as Lu puts it. We'd expect insurance to make an even larger difference to a doctor's behavior when the doctor stands to gain financially. The difference between these two insurance effects gives us the demand inducement that comes with insurance: the 'bad' bit of the insurance effect that Lindelow and I would love to have computed but couldn't.
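To make that logic explicit, write P_IK for the average amount prescribed to a fake patient with insurance status I and incentive status K (each 1 or 0; the notation is mine, not Lu's). The comparison is a simple difference-in-differences:

```latex
% P_{IK}: average prescribing, insurance status I, incentive status K (1 = present)
\[
  \underbrace{P_{10} - P_{00}}_{\text{insurance effect, no incentive: `considerateness'}}
  \qquad
  \underbrace{P_{11} - P_{01}}_{\text{insurance effect with incentive}}
\]
\[
  \text{demand inducement} \;=\; \bigl(P_{11} - P_{01}\bigr) \;-\; \bigl(P_{10} - P_{00}\bigr)
\]
```

Because both signals were randomized, both brackets can be estimated directly from the consultation data.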
Lu found that without any incentive to prescribe more drugs, doctors treated insured and uninsured fake patients exactly the same. But when they faced an incentive to prescribe more, doctors prescribed more, and more expensive, drugs to the fake patients claiming to have insurance; they were also more likely to prescribe drugs for the elevated triglycerides even though none should have been prescribed given the test results. So, according to Lu's results, essentially all the extra drug spending that comes with insurance is due to demand inducement.
Long live the academic sting!
So lots of open-access journals are apparently dreadful. That doesn't mean the goal of open access is misguided: there's the Green model, for a start, and not all Gold-model journals fell for the bogus paper. And some subscription-based traditional journals may be just as bad; the study didn't test them, so it can't tell us. But still, the results are worth worrying about.
And the quality of medical care in India really can be very bad. What the sting operation brings home is just how bad it can be, how it’s bad in both the public and private sectors, and how little difference training seems to make to the quality of medical care. That’s a serious amount of food for thought.
And Chinese doctors apparently aren't terribly considerate. Rather, they exploit the opportunity that insurance offers to induce demand when they have a financial incentive to do so. This doesn't mean that universal health coverage is wrong as a goal, of course. But it does mean that policy measures that tinker only with insurance coverage may not work as planned.
These are all pretty fascinating results, and it would be hard to reach them through observational studies. Hopefully, the academic sting operation is here to stay!