
Pitfalls of Patient Satisfaction Surveys and How to Avoid Them

By David Evans

A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.

How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?

Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.

But – you might say – that all seems quite costly (it is) and complicated (it is). Why not just ask the patients about their experience? Enter the “patient satisfaction survey,” which dates back at least to the 1980s in a clearly recognizable form. (I’m sure someone has been asking about patient satisfaction in some form for as long as there have been medical providers.) Patient satisfaction surveys have pros and cons. On the pro side, health care is a service, and a better delivered service should result in higher patient satisfaction. If this is true, then patient satisfaction could be a useful summary measure, capturing an array of elements of the service: Were you treated with respect? Did you have to wait too long? On the con side, patients may not be able to gauge key elements of the service (is the health professional giving good advice?), or they may value services that are not medically recommended (just give me a shot, nurse!).

Two recently published studies in Nigeria provide evidence that both gives pause to our use of patient satisfaction surveys and points to better ways forward. Here is what we’ve learned:

Can predicting successful entrepreneurship go beyond “choose smart guys in their 30s”? Comparing machine learning and expert judge predictions

By David McKenzie

Business plan competitions have increasingly become one policy option used to identify and support high-growth potential businesses. For example, the World Bank has helped design and support these programs in a number of sub-Saharan African countries, including Côte d’Ivoire, Gabon, Guinea-Bissau, Kenya, Nigeria, Rwanda, Senegal, Somalia, South Sudan, Tanzania, and Uganda. These competitions often attract large numbers of applications, raising the question of how to identify which business owners are most likely to succeed.

In a recent working paper, Dario Sansone and I compare three different approaches to answering this question, in the context of Nigeria’s YouWiN! program. Nigerians aged 18 to 40 could apply with either a new or existing business. The first year of this program attracted almost 24,000 applications, and the third year over 100,000 applications. After a preliminary screening and scoring, the top 6,000 were invited to a 4-day business plan training workshop, and then could submit business plans, with 1,200 winners chosen to receive an average of US$50,000 each. We use data from the first year of this program, together with follow-up surveys over three years, to determine how well different approaches would do in predicting which entrants will have the most successful businesses.
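To make the comparison concrete, here is a minimal sketch of the general idea of pitting a data-driven ranking against an expert-style heuristic. This is not the paper’s actual method or data: the applicant features, the scoring functions, and the synthetic outcomes are all illustrative assumptions, and the “expert” rule is just a caricature of “choose smart guys in their 30s.”

```python
# Hypothetical sketch (synthetic data, hand-picked weights): comparing how
# well two rankings -- a simple data-driven score vs. an "expert" heuristic --
# identify the applicants whose businesses later survive.
import random

random.seed(0)

# Synthetic applicants: age, years of education, and prior profits, plus an
# unobserved survival outcome loosely related to the features.
applicants = []
for _ in range(1000):
    age = random.randint(18, 40)
    edu = random.randint(6, 18)
    profits = random.expovariate(1 / 500)  # mean ~500, arbitrary units
    # Assumed "true" survival probability: increasing in education and profits.
    p = min(0.9, 0.2 + 0.02 * edu + 0.0002 * profits)
    survived = random.random() < p
    applicants.append({"age": age, "edu": edu, "profits": profits,
                       "survived": survived})

def ml_score(a):
    # Stand-in for a fitted model: weights chosen by hand for illustration.
    return 0.02 * a["edu"] + 0.0002 * a["profits"]

def expert_score(a):
    # Caricature of the expert heuristic: favor applicants in their 30s.
    return (1.0 if 30 <= a["age"] <= 39 else 0.0) + 0.01 * a["edu"]

def precision_at_k(score_fn, pool, k=100):
    # Of the top-k applicants by this score, what share actually survived?
    top = sorted(pool, key=score_fn, reverse=True)[:k]
    return sum(a["survived"] for a in top) / k

print("Data-driven score, precision@100:", precision_at_k(ml_score, applicants))
print("Expert-style score, precision@100:", precision_at_k(expert_score, applicants))
```

In the real exercise the “model” would be fit on training data (the paper compares several machine learning methods) and evaluated against actual judge scores on held-out follow-up outcomes; the precision-at-k idea here is just one simple way to compare rankings.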

How hard are they working?

By Markus Goldstein
I was at a conference a couple of years ago, and a senior colleague, one whom I deeply respect, summarized the conversation as: “our labor data are crap.” I think he meant that we have a general problem when looking at labor productivity (for agriculture in this case), both because of the heroic recall of days and tasks we ask of survey respondents, and because we aren’t doing a good job of measuring effort.

Biased women in the I(C)T crowd

By Markus Goldstein
This post is coauthored with Alaka Holla

The rigorous evidence on vocational training programs is, at best, mixed. For example, Markus recently blogged about some work looking at the long-term impacts of job training in the Dominican Republic. In that paper, the authors find no impact on overall employment, but they do find a change in the quality of employment, with more people having jobs with health insurance, for example.

What makes bureaucracies work better? Lessons from the Nigerian Civil Service

By Markus Goldstein
Given Jed's post last week on thinking through performance incentives for health workers, and the fact that the World Bank is in the throes of a reform process itself, a fascinating new paper from Imran Rasul and Daniel Rogger on autonomy and performance-based incentives in Nigeria gives us some other food for thought.