
Feigning illness to improve care: Recent lessons from standardized patients in rural India

Jed Friedman

A key determinant of good health is the quality of the care that sick patients receive, and donor attention in the health sector is increasingly focused on quality of care investments such as enhanced training and supervision of health providers. This interest in the quality of care will only increase further in the coming years as the epidemiological transition shifts the relative disease burden towards chronic illnesses. Why? Because proper management of chronic illness requires repeated high quality interactions with the health system. Given the growing importance of quality, a challenge for empirically minded policy-makers and researchers is to comprehensively and accurately measure it. It turns out that this is exceedingly difficult.

Often researchers measure the dimensions of quality that are within their power to measure through standard facility and provider surveys. These dimensions include “structural” quality measures such as the presence of key commodities, equipment, and trained staff. Although structural measures are fairly straightforward to collect, they are partial at best. Additional, complementary methods include patient exit interviews, provider knowledge tests, and the direct observation of care. Unfortunately, each of these measures also has drawbacks:

·         Patient exit interviews may be subject to the recall and response bias of the patient.

·         Tests may capture the knowledge of the provider, but not the effort expended with the patient, i.e. the provider “practice”.

·         And of course direct observation is likely subject to Hawthorne effects.

There is another method that, when properly executed, is not subject to any of these problems – the standardized patient. What does this method entail? Actor patients, thoroughly trained to feign a specific health condition and record the actions of the provider, appear unannounced at a health clinic and receive care. Shortly after the visit, they convey every action the provider took in a debriefing with field staff.

Standardized patients are considered the gold standard as they are likely free of recall bias and Hawthorne effects. And well-trained standardized patients record more complete information than what is found in patient records. Furthermore, the standardization in case presentation and training allows for direct comparisons of quality across different providers. And perhaps most importantly, the standardized patient measures quality of care as it actually transpires in the examination room.

But these studies, due to their resource demands, are not common. Fortunately, my World Bank colleagues Jishnu Das, Alaka Holla, and co-authors recently summarized an extensive project they undertook in Madhya Pradesh, a largely rural state in India, to introduce standardized “actor” patients on a large scale. Their experience is one of the first in a low-income setting and is a great example of the challenges and benefits of a standardized patient approach to quality of care measurement.

Their approach is typical for standardized patient studies in that it is time and resource intensive: Das and conspirators recruited actor patients from local communities and trained them for an average of 150 hours (!) on one of three standardized (and common) cases: unstable angina, asthma, or dysentery in a child (who was not present). After training, the investigators unleashed the 22 trained standardized patients on 305 providers of care, for a total of 926 clinical interactions. Within one hour of each encounter, the standardized patient was debriefed with a structured questionnaire/checklist and all dispensed medicines were saved.

What did the actor-patients find? In brief, correct diagnoses were rare, incorrect treatments widespread, and the overall results sobering:

·         Quite often the patient was attended by an unqualified provider – 63 percent of interactions in public clinics were with a provider without medical training (either the facility had no such staff or the qualified provider was absent).

·         The average duration of interaction was a brief 3.6 minutes with low levels of patient history probing, few actual examinations, and an emphasis on the dispensation of medicine.

·         Only about a third of the essential questions related to the condition were asked, and an average of 2.5 medications was prescribed per visit.

·         Only one-third of providers articulated a diagnosis, whether correct or incorrect – and close to half of articulated diagnoses were incorrect.

·         The correct treatment protocol was followed 30% of the time, while unnecessary or harmful treatment was prescribed 42% of the time!

·         Provider qualifications were only marginally related to quality of care. Unqualified providers completed three percentage points fewer of the recommended questions and exams than medically trained providers, but were just as likely to articulate a diagnosis and to provide correct treatment.

So there is vast scope for quality improvements. But these results are likely not a unique indictment of the health system in rural Madhya Pradesh. The authors find similar results in a sample from urban Delhi, and human resource challenges such as a lack of skilled providers and absenteeism are widespread problems for developing country health systems the world over. Rather, I take these results as a clarion call to address care quality in a variety of developing country settings.

Reconciliation of standardized patients with other methods

Another important finding from Das et al. is that structural measures of quality, such as the state of infrastructure and the patient caseload, had little association with any standardized patient quality measure. It is clear that provider effort is a key determinant of the quality of care a patient receives, and standard survey-based measures fail to capture it.

This presents a quandary for the applied researcher, as standardized patients cannot be used in all settings. Besides the relatively high cost, limitations of the approach include the need to restrict simulated cases to those that do not require invasive examinations and that can be credibly simulated (it is difficult, for instance, to simulate pregnancy-related conditions with an actor who is not actually pregnant).

Might other, less resource intensive, quality measures such as patient exit interviews, direct observation, or knowledge vignettes serve as an adequate stand-in for standardized patients? A 2004 study by Peabody et al. argues that standardized and open-ended knowledge vignettes can serve as a suitable alternative. Unfortunately, the analysis presented in that study makes it difficult to accept this conclusion definitively: the only results presented are differences in summary mean quality scores between standardized patients and vignettes administered to the same practitioners, and the possible presence of counterbalancing (clinicians overestimating their performance on some dimensions and underestimating it on others relative to standardized patients) renders the summary scores fairly inscrutable.
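The counterbalancing problem is easy to see with a toy example. The sketch below (Python; all dimension names and scores are invented for illustration, not taken from any of the studies discussed) shows how a vignette can overstate one quality dimension and understate another relative to standardized patients, while the summary means agree exactly:

```python
# Hypothetical illustration of counterbalancing: dimension-level quality
# scores from vignettes can over- and under-state performance relative to
# standardized patients (SP), yet the unweighted summary means coincide.
# All numbers below are invented.

sp_scores = {"history": 0.30, "exam": 0.60, "treatment": 0.45}        # SP-measured
vignette_scores = {"history": 0.55, "exam": 0.35, "treatment": 0.45}  # vignette-measured

def summary(scores):
    """Unweighted mean across quality dimensions."""
    return sum(scores.values()) / len(scores)

# Summary scores agree (0.45 on both methods), yet the dimension-level
# gaps run as large as 25 percentage points in either direction.
gaps = {k: round(vignette_scores[k] - sp_scores[k], 2) for k in sp_scores}
```

A mean-only comparison would score these two methods as identical, which is exactly why dimension-by-dimension comparisons are needed before declaring vignettes a valid substitute.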

A more recent review article of 15 studies concludes that, despite their widespread use, the extent to which proxy measures of provider behavior (such as vignettes) accurately reflect a provider’s actual behavior is unclear. Furthermore, virtually all of the methodological studies that test the validity of proxy measures do so in relatively small samples in OECD countries, so their relevance for developing country contexts is even less clear. We just don’t know enough at the moment. Until we do, if standardized patients are not feasible, it is probably safest to infer quality of care from multiple approaches such as patient exit interviews, vignettes, and direct observation.
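One simple way to combine multiple proxy measures is to standardize each across providers and average the z-scores into a composite index. The sketch below is a hypothetical illustration only (the clinic names, measure names, and scores are invented, and this is not a method proposed in the studies discussed), assuming each measure is scored on a comparable 0–1 scale:

```python
# Hypothetical triangulation across proxy quality measures: convert each
# measure to provider-level z-scores, then average them into a composite
# index. All data below are invented for illustration.
import statistics

providers = {
    "clinic_A": {"exit_interview": 0.70, "vignette": 0.50, "observation": 0.60},
    "clinic_B": {"exit_interview": 0.40, "vignette": 0.65, "observation": 0.55},
    "clinic_C": {"exit_interview": 0.55, "vignette": 0.45, "observation": 0.40},
}

def composite_index(data):
    """Average of per-measure z-scores for each provider."""
    measures = next(iter(data.values())).keys()
    z = {p: [] for p in data}
    for m in measures:
        vals = [data[p][m] for p in data]
        mu, sd = statistics.mean(vals), statistics.pstdev(vals)
        for p in data:
            z[p].append((data[p][m] - mu) / sd)
    return {p: statistics.mean(zs) for p, zs in z.items()}

index = composite_index(providers)
```

Because each measure is centered before averaging, no single noisy proxy dominates the ranking; the cost is that the composite inherits whatever biases the underlying proxies share.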
