Shocking facts about primary health care in India, and their implications

There’s nothing quite like a cold shower of shocking statistics to get you thinking. A paper that came out in Health Affairs today, written by my colleague Jishnu Das and his collaborators, is just such a cold shower.

Fake patients
Das and his colleagues spent 150 hours training each of 22 Indians to be credible fake patients. These actors were then sent into the consulting rooms of 305 medical providers – some in rural Madhya Pradesh (MP), others in urban Delhi – to allow the study team to assess the quality of care that the providers were delivering.

A lot of thought went into just what conditions the fake patients should pretend to have. The team wanted the conditions to be common, and to be ones with established medical protocols and government-provided treatment checklists. The fake patients shouldn’t be subjected to invasive exams, and they needed to be able to credibly describe invisible symptoms.

The conditions the team chose were unstable angina, asthma, and dysentery in a fictitious child who had been left at home. The fake patients were trained to consistently and credibly portray the physical, emotional, and psychosocial aspects of the condition, and were told what answers to provide to questions the provider might plausibly ask. They were coached in how to avoid an invasive exam, and in what to remember from the encounter. Fake patients retained any medicines from the consultation, and were debriefed within two hours of the encounter.

In case you’re a little skeptical (I must admit I was early on in this research), consider these two facts. First, in follow-up visits, no provider in MP voiced any suspicion about fake patients; in Delhi, private providers did spot some fake patients, but fewer than 1% of them. Second, providers who stuck closest to the checklist were more likely to arrive at the “correct” diagnosis; had the fake patients been unconvincing, the study team would have found the opposite.

As a way of getting at quality of care, fake patients have advantages over other methods, such as observation of provider-patient interactions or exit interviews. There’s no observation bias with fake patients. Fake patients also allow case type, severity, and non-health characteristics to be standardized: this makes it possible to estimate case detection rates, and to compare quality validly across providers, because patients don’t choose providers on the basis of their symptoms and severity.

Das and his colleagues had to think, of course, about where to send their fake patients. In MP, they gathered data on providers and the population’s use of them, and came up with a sample of 226 primary care providers that was representative of the primary facilities used by rural households in the state. That meant a sample that included mostly private providers – with and without formal training – and some public clinics. In Delhi, the sample of 64 providers was simply a convenience sample, not necessarily representative of the facilities used by the Delhi population.

What the fake patients encountered
In only a third of fake-patient interactions, in both MP and Delhi, did the provider ask all the essential questions and perform all the essential exams. This didn’t vary much across the three conditions. And in only a third of cases in MP did the provider offer a diagnosis at all. Shockingly, only 12% of the diagnoses the MP providers offered were completely correct; another 41% were partially correct. Providers in Delhi did better, but managed only a 22% fully-correct diagnosis rate. Unsurprisingly, the rate at which providers prescribed the right treatment was unimpressive: 30% in MP and 46% in Delhi.

What the study team uncovered next was even more shocking. While unqualified providers in both MP and Delhi asked fewer of the recommended questions and did fewer of the recommended exams, they were no less likely to prescribe the correct treatment. Moreover, while providers in better-equipped facilities in MP asked rather more questions and did rather more tests, they were also no more likely to prescribe the right treatment. Interestingly, private providers were significantly more likely to ask the right questions and do the right exams. However, they were not more likely to prescribe the right treatment; in Delhi, in fact, they were significantly less likely to do so.

Implications
It’s pretty staggering that – at least for these conditions – residents of rural MP and of Delhi face 70% and 55% chances respectively of being prescribed the wrong treatment.

It’s also pretty staggering that hiring qualified staff doesn’t appear to reduce this probability. Das and colleagues suggest that part of the issue might be the variation in the quality of instruction across Indian medical training institutions. So there may be some institutions whose qualifications do make a difference. But given the paper’s results, the effect of such institutions must be rather small. The fact that providers working in better-equipped facilities don’t have a higher probability of prescribing the right treatment is also alarming.

The results on the public-private differences are pretty interesting. In rural MP, both sectors do equally – and very – badly in terms of ensuring the patient gets the right treatment. In Delhi, by contrast, going to the private sector halves the odds of getting the correct treatment, even though it raises the number of recommended questions the provider asks. It is the latter quality indicator that Das et al. are presumably referring to in their conclusion when they say “we observed better quality care in the private sector”. That’s a bit misleading – it’s surely the correctness of the treatment prescribed that matters at the end of the day, not the number of questions asked. This work isn’t exactly a great advertisement for India’s private sector, or for the view that financial incentives will improve quality. But it’s not exactly a great advertisement for the public sector either. It also raises the question: why does the private sector in Delhi do worse on the correctness-of-treatment indicator, while the private and public sectors in rural MP do just as badly as one another?

A cold shower that invigorates but befuddles 
It would be unwise to generalize too much from this one study, but at the same time it would be unwise to assume that these results apply to just unstable angina, asthma, and childhood dysentery, and to just Madhya Pradesh and Delhi. It seems more likely that these results will be replicable elsewhere in India and using other conditions. It’s also likely the results aren’t India-specific.

If so, the paper suggests that the developing world may well face a huge challenge in terms of the quality of care at primary level – much bigger than we probably thought. And while this paper isn’t an impact evaluation of any program or policy, the results don’t exactly inspire confidence in the usual policy knobs we reach for when thinking about improving quality. It doesn’t look like more training and better equipment will solve the problem. Nor does it look like the quality deficit will be reduced simply by building up one side of the public-private divide and scaling back the other. The paper also makes it clear that simply giving everyone free access to health providers (à la Universal Health Coverage) isn’t necessarily going to do much to improve population health.

Some “cold showers” leave you invigorated and with a clear sense of where to head next. This one’s a bit different – awake yes, but a clear sense of where to go next, no.
 


Authors

Adam Wagstaff

Research Manager, Development Research Group, World Bank
