Published on Development Impact

Pitfalls of Patient Satisfaction Surveys and How to Avoid Them

A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.

How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?

Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.

But – you might say – that all seems quite costly (it is) and complicated (it is). Why not just ask the patients about their experience? Enter the “patient satisfaction survey,” which goes back at least to the 1980s in a clearly recognizable form. (I’m sure someone has been asking about patient satisfaction in some form for as long as there have been medical providers.) Patient satisfaction surveys have pros and cons. On the pro side, health care is a service, and a better delivered service should result in higher patient satisfaction. If this is true, then patient satisfaction could be a useful summary measure, capturing an array of elements of the service – were you treated with respect? did you have to wait too long? On the con side, patients may not be able to gauge key elements of the service (is the health professional giving good advice?), or they may value services that are not medically recommended (just give me a shot, nurse!).

Two recently published studies in Nigeria provide evidence that both gives pause to our use of patient satisfaction surveys and points to better ways forward. Here is what we’ve learned:

  1. Patient satisfaction – measured at the facility after a visit – tends to be high, regardless of the objective quality of the facility. In one study in Nigeria, interviewers asked patients whether they agree or disagree with a set of 20 positive statements about their experience (such as “The health facility is clean” and “You had enough privacy during your visit”): For the statement with the lowest level of agreement, “The transport fees for this visit to the health facility were reasonable,” 90 percent of people agreed. For all the other positive statements, more than 90 percent of the patients agreed. Neither drug stockouts, nor a lack of electricity, nor missing medical equipment could drag down these high levels of patient satisfaction. Another study – also in Nigeria – showed similar results. And while we have seen this in Nigeria, it’s in no way a Nigeria-specific or even Africa-specific phenomenon: These “ceiling effects” – where answers are so high that there’s almost nowhere else for them to go – are well documented in high-income environments as well.
  2. If the quality of the facility doesn’t drag down patient satisfaction, what does? How you ask the question. Patient satisfaction responses can be manipulated. Some colleagues and I recently ran this experiment in Nigeria: What if, instead of inviting people to agree or disagree with positive statements (“The health facility is clean”), we invite them to agree or disagree with negative statements (“The health facility is dirty”)? Patient satisfaction dropped significantly for 10 out of 11 items when we reframed the questions negatively. For example, 88 percent of people agreed that “The lab fees were reasonable today,” but only 69 percent disagreed with “The lab fees were too expensive.” Across the items, average satisfaction dropped from 95 percent to 88 percent. That is still high, but it represents more than a doubling in the level of dissatisfaction among patients, from 5 percent to 12 percent. And framing is not the only factor: in Paraguay, asking patients about their satisfaction a few days after they return home from the clinic reduces reported satisfaction by 40 percent.
  3. Patient satisfaction can reflect differences in the quality of care, but the effects are small. In Nigeria, even though patient satisfaction is very high on average, higher patient satisfaction does come with better health provider knowledge. In other words, in clinics where providers demonstrate better diagnostic ability in hypothetical situations, patients report higher levels of satisfaction. But while we observe these differences, they are small. And contrary to popular stories, patients don’t have to receive a prescription to be satisfied.
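As a back-of-envelope check on the framing result in point 2 (a sketch using only the percentages reported above, not the study’s microdata), the shift from 95 percent to 88 percent average satisfaction is modest on the satisfaction scale but large on the dissatisfaction scale:

```python
# Figures reported in the text: average satisfaction across items
# under positive framing vs. negative framing.
positive_framing = 0.95
negative_framing = 0.88

# Flip to the dissatisfaction scale.
dissat_positive = 1 - positive_framing  # 5 percent
dissat_negative = 1 - negative_framing  # 12 percent

ratio = dissat_negative / dissat_positive
print(f"dissatisfaction rises by a factor of {ratio:.1f}")  # 2.4
```

This is why a seven-point drop reads as “more than a doubling” of dissatisfaction: when the baseline is pressed against the ceiling, small absolute changes are large relative ones.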
Should we give up on patient satisfaction measures? We don’t think so. A positive patient experience is an important element of a high-quality health care system. But providers and researchers who want a better sense of satisfaction and quality might consider these recommendations:
  1. Don’t read a list of positive statements and ask patients to agree or disagree. Patient satisfaction surveys can do better. The 15-item Picker Patient Experience Questionnaire skips agree/disagree questions completely. The Patient Experience Questionnaire retains agree/disagree questions, but it mixes positive and negative framing to avoid the upward bias that comes from all-positive questions.
  2. Don’t rely on patient satisfaction alone to measure the quality of care. Patients may not always like to hear hard truths, which is part of why patient satisfaction – even if measured with perfect accuracy – could never be a complete measure of the quality of care. Health providers and researchers who want to gauge the quality of care will have to draw on an array of tools – mystery patients, vignettes, inspections, and patient outcomes – to know how they’re doing and how to do better.
 
The Credits

This blog post draws principally on two recently published papers:  
Do you want to read more?
  • If you’re doing research with patient satisfaction surveys, what estimation method should you use? Welander Tärneberg and I made a simple table of what researchers have done in the past.
  • You can read a humor-inflected summary of the Dunsch et al. paper.
  • Das and Sohnesen, cited above, show a link between doctor effort and patient satisfaction in Paraguay, but it disappears when controlling for the “doctor fixed effect.” In other words, there are doctors who expend more effort, and patients tend to be more satisfied with those doctors. But if you compare patients with the same doctor, patients on whom the doctor expends greater effort are no more satisfied than those on whom the doctor expends less effort. So it’s probably not the effort that’s driving the satisfaction, but rather some other unmeasured characteristic of the doctor. (A sunny personality? Who knows?!)
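The within- versus between-doctor logic behind that “doctor fixed effect” can be illustrated with a small simulation (entirely made-up numbers, not the Das and Sohnesen data): suppose satisfaction is driven by a doctor-level trait that happens to correlate with the doctor’s baseline effort. A pooled regression then shows an effort–satisfaction link, but demeaning within each doctor makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n_doctors, n_patients = 50, 40
n = n_doctors * n_patients

# Hypothetical doctor-level traits: a baseline effort level, and a
# "personality" that is correlated with effort but is what actually
# drives satisfaction.
doctor_effort = rng.normal(0, 1, n_doctors)
doctor_charm = 0.8 * doctor_effort + rng.normal(0, 0.6, n_doctors)

doc = np.repeat(np.arange(n_doctors), n_patients)
# Visit-level effort varies around the doctor's baseline...
effort = doctor_effort[doc] + rng.normal(0, 1, n)
# ...but satisfaction responds only to the doctor's personality.
satisfaction = doctor_charm[doc] + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x (both demeaned)."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

def within_doctor(v):
    """Subtract each doctor's own mean (the fixed-effect transform)."""
    totals = np.zeros(n_doctors)
    np.add.at(totals, doc, v)
    return v - (totals / np.bincount(doc))[doc]

pooled = slope(effort, satisfaction)                            # positive
within = slope(within_doctor(effort), within_doctor(satisfaction))  # ~0

print(f"pooled slope: {pooled:.2f}, within-doctor slope: {within:.2f}")
```

The pooled slope picks up the between-doctor correlation (charming doctors also work harder), while the within-doctor slope isolates visit-to-visit variation in effort, which by construction here does nothing for satisfaction.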

Authors

David Evans

Senior Fellow, Center for Global Development
