Given the massive debate in the U.S. about government health insurance, the just-released results of a new experiment are making headlines. In 2004, the state of Oregon, facing budget shortfalls, closed its public health insurance program for low-income people to new enrollment. In early 2008, the state decided it had enough budget to fund 10,000 new spots. Expecting demand for these new slots to far exceed supply, the state government opened a sign-up window, got roughly 90,000 people to sign up for a waitlist, and then used random lottery draws to select people from that list.
This provides a unique opportunity to study how much difference access to public health insurance actually makes in a U.S. setting. A large study team, including Amy Finkelstein and Jonathan Gruber, well-known economists from MIT, has just released a working paper with results from the first year. Here’s the abstract:
“In 2008, a group of uninsured low-income adults in Oregon was selected by lottery to be given the chance to apply for Medicaid. This lottery provides a unique opportunity to gauge the effects of expanding access to public health insurance on the health care use, financial strain, and health of low-income adults using a randomized controlled design. In the year after random assignment, the treatment group selected by the lottery was about 25 percentage points more likely to have insurance than the control group that was not selected. We find that in this first year, the treatment group had substantively and statistically significantly higher health care utilization (including primary and preventive care as well as hospitalizations), lower out-of-pocket medical expenditures and medical debt (including fewer bills sent to collection), and better self-reported physical and mental health than the control group.”
Ezra Klein has a nice summary of the work at Bloomberg News. He calls for more experiments of this nature:
“What we need now are many more randomized studies looking at different types of insurance and care. Doing those studies right would cost money, but the returns, in savings and health, would be enormous. After all, knowing that Medicaid matters is good, but we already sort of knew that. Knowing how to make it matter most, and for the lowest possible cost -- that’s where we’re still struggling.”
Ray Fisman also covers this experiment in his Slate column, and you can find many other news articles about its findings – all touting the fact that this is the first “gold standard” randomized study of health insurance. This is certainly a case where the ease of explaining the findings of a randomized experiment, and the absence of internal validity concerns, is appreciated in the policy domain.
However, I wanted to point out a few more technical features of the study that are likely to be of interest to those involved in impact evaluations:
· Prior to looking at the data on outcomes for the treatment group, virtually all of the analysis was pre-specified and publicly archived in a 159-page detailed analysis plan. This is certainly uncommon, and a good protection against data mining in a study that is sure to be politically sensitive and debated in policy circles. This differs from the trial registry idea in medicine, since the plan was archived after data collection and after looking at data for the control group. Basically, the authors laid out what they planned to look at in terms of outcomes, and what tests they would run, before doing any of them.
· They pay careful attention to issues of multiple hypothesis testing, given they are looking at many different outcomes, and calculate familywise error rate adjusted p-values based on the free step-down resampling method of Westfall and Young, as well as reporting standardized treatment effects for families of outcomes.
· They use both administrative data (from hospital discharge records, credit bureaus, and mortality records) and survey data.
· The survey was done by mail, with a more intensive protocol involving additional tracking efforts applied to a subset, in order to learn more about selective non-response and to reweight the data (though the reweighted response rate is still only 50%).
· They do some quantile analysis to look at medical collections and medical debt, which affect only a tail of the sample.
· The paper has author disclaimers acknowledging which authors have links to the medical industry and how.
· There are appropriate caveats and discussion of some of the issues in extrapolating the results to other populations.
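For readers curious what the Westfall-Young free step-down adjustment mentioned above actually involves, here is a minimal illustrative sketch (not the authors' code). It assumes simple difference-in-means t-statistics and permutes treatment labels to build the resampling null; the `westfall_young` function and its arguments are my own naming, not from the paper:

```python
import numpy as np

def westfall_young(outcomes, treat, n_perm=1000, seed=0):
    """FWER-adjusted p-values via Westfall-Young free step-down resampling.

    outcomes: (n, k) array of k outcomes; treat: (n,) 0/1 assignment.
    Treatment labels are permuted to approximate the joint null
    distribution of the k test statistics.
    """
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes, float)
    treat = np.asarray(treat)

    def tstats(t):
        y1, y0 = outcomes[t == 1], outcomes[t == 0]
        se = np.sqrt(y1.var(0, ddof=1) / len(y1) + y0.var(0, ddof=1) / len(y0))
        return np.abs((y1.mean(0) - y0.mean(0)) / se)

    obs = tstats(treat)
    order = np.argsort(obs)[::-1]  # most to least significant outcome
    perm = np.array([tstats(rng.permutation(treat)) for _ in range(n_perm)])

    k = outcomes.shape[1]
    adj = np.empty(k)
    for rank in range(k):
        j = order[rank]
        # compare against the max permuted stat over this and all less
        # significant outcomes -- the "free step-down" part
        adj[j] = (perm[:, order[rank:]].max(axis=1) >= obs[j]).mean()
    for rank in range(1, k):  # enforce monotone adjusted p-values
        adj[order[rank]] = max(adj[order[rank]], adj[order[rank - 1]])
    return adj
```

The step-down structure means the most significant outcome is compared against the maximum statistic over all outcomes, while less significant ones face progressively smaller comparison sets, which is less conservative than a Bonferroni correction while still controlling the familywise error rate.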
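The reweighting for survey non-response mentioned above can be illustrated with the simplest version of the idea: weight each respondent by the inverse of the response rate in their stratum, so respondents stand in for similar non-respondents. This is a toy sketch under my own assumptions (the study's actual weighting is more elaborate), with hypothetical function names:

```python
import numpy as np

def nonresponse_weights(responded, strata):
    """Inverse-response-rate weights for survey non-response.

    responded: (n,) 0/1 indicator of who answered the survey.
    strata: (n,) labels grouping observations (e.g. by covariate cells).
    Respondents in a stratum with a 50% response rate get weight 2;
    non-respondents get weight 0.
    """
    responded = np.asarray(responded, bool)
    strata = np.asarray(strata)
    w = np.zeros(len(strata))
    for s in np.unique(strata):
        in_s = strata == s
        w[in_s & responded] = 1.0 / responded[in_s].mean()
    return w
```

A reweighted outcome mean is then just `np.average(y[responded], weights=w[responded])`; the weights correct the respondent sample toward the composition of the full lottery sample, under the assumption that non-response is ignorable within strata.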
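Finally, the quantile analysis point is worth unpacking: for an outcome like medical debt, which most people have none of, a mean treatment effect can hide the action in the upper tail. A minimal sketch of unconditional quantile comparisons (my own illustration, not the authors' specification):

```python
import numpy as np

def quantile_effects(y_treat, y_control, qs=(0.5, 0.75, 0.9, 0.95)):
    """Difference between treatment- and control-group quantiles of an
    outcome. Useful when an effect is concentrated in one tail of the
    distribution rather than shifting the whole distribution."""
    return {q: float(np.quantile(y_treat, q) - np.quantile(y_control, q))
            for q in qs}
```

If insurance mostly eliminates large debts for a minority of households, the median effect can be zero while the 95th-percentile effect is large, which is exactly the pattern simple mean comparisons would obscure.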