Should we try to incorporate the cost of forgone care into a measure of financial protection?
In my first post on UC in this series I argued that UC is best thought of as a means to reducing inequalities and improving financial protection in the health sector, but that in practice UC is unlikely to be sufficient – and may not even be necessary – to achieve these goals.
In this post, I want to probe a little on the measurement of financial protection; in particular I want to ask whether it should incorporate an allowance for forgone care.
Measuring financial protection – the story so far
Measures of financial protection in health relate actual out-of-pocket spending to total household spending and to a threshold. 'Catastrophic' spending occurs when out-of-pocket spending — expressed as a percentage of total household spending — exceeds a threshold, e.g. 25 percent. 'Impoverishing' spending occurs when a household is above the poverty line when out-of-pocket medical spending is added to nonmedical spending, but below the poverty line when living standards are assessed on the basis of nonmedical spending alone. The idea is that a household that is poor on the basis of its nonmedical spending alone would not have been poor had the health shock not compelled it to buy medical care; it could instead have used the money on budget items that actually add to its well-being, rather than simply restore its well-being to its pre-shock level.
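These two definitions reduce to simple comparisons, which can be sketched in code. The threshold, poverty line, and household figures below are illustrative values, not taken from any particular survey.

```python
# Sketch of the two conventional financial protection measures.
# The threshold and poverty line are illustrative assumptions.

CATASTROPHIC_THRESHOLD = 0.25   # out-of-pocket (OOP) share of total spending
POVERTY_LINE = 1200             # poverty line, in the survey's currency units

def is_catastrophic(oop, total):
    """OOP spending exceeds the threshold share of total household spending."""
    return oop / total > CATASTROPHIC_THRESHOLD

def is_impoverishing(oop, total, poverty_line=POVERTY_LINE):
    """Above the line when OOP is counted as spending, below it on
    nonmedical spending alone."""
    nonmedical = total - oop
    return total >= poverty_line and nonmedical < poverty_line

# A household spending 1500 in total, 400 of it out-of-pocket:
print(is_catastrophic(400, 1500))    # 400/1500 ≈ 0.27 > 0.25, so True
print(is_impoverishing(400, 1500))   # 1500 ≥ 1200 but 1100 < 1200, so True
```

The same household can of course trigger one measure without the other; a modest OOP payment by a household hovering just above the poverty line is impoverishing but not catastrophic.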
There are, of course, debates about how much out-of-pocket spending translates into lower living standards in the current period, and how much of it is financed out of past savings or borrowing against future income. When this information is available — which isn't often — researchers have adjusted their measure of catastrophic spending accordingly.
What of forgone health care utilization?
Such adjustments don't get at an issue raised recently by Rodrigo Moreno-Serra and colleagues. They argue that people may forgo care — or use less care — precisely because they lack proper financial protection. Someone with kidney failure but without health insurance might simply never get the transplant or dialysis they need. The conventional measures of financial protection don’t flag any problem, because the out-of-pocket spending is low or zero. But without the care, the person may experience a lower quality of life, and die prematurely.
Moreno-Serra & Co object to saying that such a person had adequate financial protection. They would like us to 'improve' on the currently used financial protection measures, by factoring in forgone utilization.
Adjusting out-of-pocket spending for need
I’ve thought about this issue on a number of occasions, and have always shied away from what Moreno-Serra & Co are recommending.
The problem is how to estimate the out-of-pocket spending that an individual — or a household, or a country for that matter — ought to have incurred, given their medical needs.
Americans pay more out-of-pocket for their health care, and because of this we tend to think that Americans have less financial protection than Britons. But if we think along the lines of Moreno-Serra & Co, perhaps we should be thinking the opposite. Britain spends less in total on health care than America, in part at least because Britain lies further from the medical technology frontier — in America, the latest technology gets adopted quickly, and soon becomes part of the service norm. In the Moreno-Serra & Co sense, Britons have less financial protection than Americans because they forgo medical care that could make them live longer and/or improve their quality of life.
The same logic applies within a country. The better off often spend out-of-pocket to get more — or more expensive — medicines, tests and procedures. In the Moreno-Serra & Co sense, these people face better financial protection than the poor because — unlike the poor — they don’t forgo potentially beneficial medical care. Of course, the conventional approach would likely conclude the opposite.
My sense is that in adjusting reported out-of-pocket spending, we should be as worried about overspending as about underspending. We can't realistically look to the US, or to the top 1 percent of a country's income distribution, to get a sense of what people 'ought' to be spending when they're financially protected.
We could try to chart a middle course: look at what people pay (or would have to pay) out-of-pocket for items that appear on a list of essential tests, medicines, interventions, etc. Under this approach, we might end up concluding that Britain's NHS already covers every essential item for free, so Britons enjoy 100 percent financial protection. If the list includes all the essential items but people have to pay something out-of-pocket for them, we add up what they spend on the listed items. If people choose to spend on items that aren't on the list, that's their choice; when measuring financial protection we simply discard the out-of-pocket spending they incur on these unlisted items.
What if some items we think ought to be on the essential list aren't? (By the way, who are 'we' here?) We'd have to work out the cost people would incur if they were to purchase these items, and we’d add these unobserved expenditures to the expenditures we observe on the listed items.
For each person, we need to know — for each possible medical need — how much was spent out-of-pocket, and on what. For each item of incurred expenditure, we'll be looking to assess whether it should count toward the spending figure we’re going to use when assessing catastrophic or impoverishing spending. We will likely end up adjusting lots of items of spending downwards — where, for example, someone spent on a brand name drug when the essential list includes only generic drugs. We may end up discarding some altogether — e.g. spending on in-vitro fertilization, if that’s not on our essential list. And for each unmet need, we’d be looking to calculate how much the person would have spent out-of-pocket meeting that need.
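The per-item bookkeeping this implies might look something like the following sketch. The essential list, item names, and reference prices are all invented for illustration; a real exercise would need a vastly richer classification of needs and items.

```python
# Hypothetical need-adjusted OOP spending against an essential list.
# Items and reference prices are invented for illustration.

ESSENTIAL = {
    "generic_antibiotic": 5,    # reference OOP cost of each listed item
    "dialysis_session": 40,
}

def adjusted_oop(incurred, unmet_needs):
    """incurred: list of (item, amount) actually spent out-of-pocket.
    unmet_needs: listed items the person needed but never purchased."""
    total = 0
    for item, amount in incurred:
        if item in ESSENTIAL:
            # cap each listed purchase at the reference price, so spending
            # on a brand-name drug is adjusted down to the generic price
            total += min(amount, ESSENTIAL[item])
        # spending on unlisted items (e.g. IVF) is discarded entirely
    for item in unmet_needs:
        # impute what meeting the forgone need would have cost
        total += ESSENTIAL.get(item, 0)
    return total

spent = [("generic_antibiotic", 12), ("ivf_cycle", 500)]
print(adjusted_oop(spent, unmet_needs=["dialysis_session"]))  # 5 + 40 = 45
```

Note how much the adjustment can move the figure: this person actually spent 512 out-of-pocket, yet the need-adjusted figure is 45 — and every step of the calculation leans on judgments about the list and its reference prices.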
We'd have to go through this exercise for each and every household member. After all, we typically assume that households pool their resources and that all household members share the burden of a health shock that hits one of them.
This is a mammoth exercise, and well beyond what's feasible with current household surveys. In practice, most contain limited and very crude information on health care needs, and even less information (often none at all) on the reasons for an item of expenditure. And household surveys often collect information on health status for just one household member but ask about spending for all household members combined.
What we can infer using current measures
If we can’t credibly make the necessary upward — and downward — adjustments to reported household spending, what can we infer from the existing crude and broad-brush measures?
I tend to look beyond the overall incidence of catastrophic spending to the distributions of catastrophic spending and utilization by income. If catastrophic spending is concentrated among the better off and utilization rates are higher among the better off, I suspect we have a mix of underuse by the poor and some discretionary spending by the better off. The challenge in this case isn’t one of financial protection but rather of raising utilization rates among the poor — lowering financial barriers could be one strategy, but I'd want to dig a bit before concluding this is the best strategy. If, by contrast, catastrophic spending is concentrated among the poor, and utilization rates are higher among the poor, I'd be less concerned about underuse by the poor. And I'd probably not be worrying about discretionary spending among the poor. Rather I’d be looking to find ways to reduce the out-of-pocket payments that the poor are making on their current utilization.
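The inference rule in this paragraph could be written down as a rough decision function. The function name, its inputs (catastrophic-spending and utilization rates for the poorest and richest income groups), and the returned messages are all hypothetical, not an established method.

```python
# A hypothetical decision sketch of the inference described above.

def diagnose(cat_poor, cat_rich, util_poor, util_rich):
    """cat_*: share of the income group incurring catastrophic spending;
    util_*: the income group's health care utilization rate."""
    if cat_rich > cat_poor and util_rich > util_poor:
        # spending burden and use both concentrated among the better off:
        # suspect underuse by the poor plus some discretionary spending
        # by the better off
        return "raise utilization among the poor (underuse suspected)"
    if cat_poor > cat_rich and util_poor > util_rich:
        # burden concentrated on the poor despite their higher use:
        # the payments themselves are the problem
        return "reduce out-of-pocket payments on the poor's current use"
    return "mixed pattern: dig further before choosing a strategy"

print(diagnose(cat_poor=0.02, cat_rich=0.09, util_poor=0.3, util_rich=0.7))
```

Even this crude two-by-two comparison points policy in quite different directions, which is the point of the paragraph: the distributional pattern carries information that the overall incidence figure hides.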
In short, by looking beyond the overall incidence of catastrophic spending, I think we can probably learn quite a lot even with the crude measures we have. And I suspect that the data requirements of the alternative approach are likely to prove so severe we’d end up with a half-baked set of estimates that would be hard to interpret properly. But I’m perfectly willing to be convinced otherwise!