Berk Ozler's blog

False positives in sensitive survey questions?

This is a follow-up to my earlier blog post on list experiments for sensitive questions, which, thanks to our readers, generated many responses via the comments section and emails: more reading for me – yay! More recently, my colleague Julian Jamison, who is also interested in the topic, sent me three recent papers that I had not been aware of. This short post discusses those papers and serves as a coda to the earlier one…

Randomized response techniques (RRT) aim to elicit more valid data than direct questioning (DQ) on sensitive topics, such as corruption or sexual behavior. Using a randomization device, such as dice, they introduce noise into the respondent’s answer, concealing her response to the sensitive question while still allowing the researcher to estimate the overall prevalence of the behavior in question. These techniques are attractive in principle, but in practice, as we have been trying to implement them in fieldwork recently, one worries about implementation details and the cognitive burden on respondents: in real life, it’s not clear that they provide enough of an advantage to warrant use over DQ.
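To fix ideas, here is a minimal sketch of how prevalence is recovered under one common RRT variant, the forced-response design; the die-roll mapping and all parameter values below are illustrative assumptions, not taken from any of our surveys.

```python
import numpy as np

def rrt_prevalence(yes_share, p_truth, p_forced_yes):
    """Invert the forced-response design.

    Respondents answer truthfully with probability p_truth, are forced to
    say "yes" with probability p_forced_yes, and are forced to say "no"
    otherwise, so P(yes) = p_truth * pi + p_forced_yes.
    """
    return (yes_share - p_forced_yes) / p_truth

# Illustrative die-roll design: answer truthfully on 2-5 (prob 4/6),
# forced "yes" on a 1 (prob 1/6), forced "no" on a 6 (prob 1/6).
rng = np.random.default_rng(0)
n = 5000
true_status = rng.random(n) < 0.20          # assumed 20% true prevalence
roll = rng.integers(1, 7, size=n)
answer = np.where(roll == 1, True,
                  np.where(roll == 6, False, true_status))

print(rrt_prevalence(answer.mean(), p_truth=4/6, p_forced_yes=1/6))
# ~0.20, recovered without observing any individual's true answer
```

Note that the concealment is bought with precision: the injected noise inflates the variance of the estimate relative to DQ, which is part of why the practical advantage is not obvious.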

The Puzzle with LARCs

Suppose that you’re at your doctor’s office, discussing an important health issue that may become a concern in the near future. There are multiple drugs available on the market that you can use to prevent unwanted outcomes. Some of them are so effective that there is practically no chance you will have a negative event if you start taking them. The effectiveness of the other options ranges from 94% to much lower, with the most commonly used drug failing about 10% of the time for the typical user. Somehow, you go home with the drug that has a one-in-ten failure rate. Worse, you’re not alone: most people end up in the same boat…

List Experiments for Sensitive Questions – a Methods Bleg

About a year ago, I wrote a blog post on issues surrounding data collection and measurement. In it, I talked about “list experiments” for sensitive questions, which I was not sold on at the time. However, now that I have a bunch of studies going to the field at different stages of data collection, many of which concern sensitive topics among adolescent female target populations, I am paying closer attention to them. In reading and thinking about the topic and how to implement it in our surveys, I came up with a number of questions about the optimal implementation of these methods. In addition, there is probably more to be learned about these methods to improve them further, opening up the possibility of experimenting with them when we can. Below are some of the things I am thinking about, and, as we still have some time before our data collection tools are finalized, you, our readers, have a chance to help shape them with your comments and feedback.
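For readers new to the design, here is a minimal sketch of the standard difference-in-means estimator behind a list experiment, using simulated data under assumed parameter values (J = 4 innocuous items, 15% true prevalence) purely to show the mechanics: the control group reports how many of the innocuous items apply to them, the treatment group gets the same list plus the sensitive item, and the gap in mean counts estimates prevalence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Assumed for illustration: counts over J = 4 innocuous items, and a
# sensitive behavior with 15% true prevalence.
innocuous = rng.binomial(4, 0.5, size=n)
sensitive = rng.random(n) < 0.15

# Random assignment: the treatment group's list includes the sensitive
# item, but respondents only ever report a total count, never which
# items applied to them.
treat = rng.random(n) < 0.5
reported = innocuous + np.where(treat, sensitive, 0)

# The difference in mean counts identifies prevalence of the sensitive item.
est = reported[treat].mean() - reported[~treat].mean()
se = np.sqrt(reported[treat].var(ddof=1) / treat.sum()
             + reported[~treat].var(ddof=1) / (~treat).sum())
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

Even in this clean simulation the standard error is much larger than under DQ, because the variance of the innocuous counts is folded into the estimate; that is exactly the kind of design margin (how many items, how correlated) that I am mulling over.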

The importance of study design (why did a CCT program have no effects on schooling or HIV?)

A recent paper in Lancet Global Health found that generous conditional cash transfers to female secondary school students had no effect on their school attendance, dropout rates, HIV incidence, or HSV-2 (herpes simplex virus type 2) incidence. What happened?

Weekly Links, April 7: Unpaywall, good and fake news from Malawi, doing research in conflict zones, and more...

Berk Ozler's picture
  • Just this week, I provided a journalist with a bunch of citations, most of which she could not access. Perhaps no more? The LSE Impact Blog discusses Unpaywall: “The extension is called Unpaywall, and it’s powered by an open index of more than ten million legally-uploaded, open access resources. Reports from our pre-release are great: ‘Unpaywall found a full-text copy 53% of the time,’ reports librarian Lydia Thorne. Fisheries researcher Lachlan Fetterplace used Unpaywall to find ‘about 60% of the articles I tested. This one is a great tool and I suspect it will only get better.’ And indeed it has! We’re now getting full-text on 85% of 2016’s most-covered research papers.”

Should I stay or should I go? Marriage markets and household consumption

“We propose a model of the household with consumption, production and revealed preference conditions for stability on the marriage market. We define marital instability in terms of the consumption gains to remarrying another individual in the same marriage market, and to being single. We find that a 1 percentage point increase in the wife’s estimated consumption gains from remarriage is significantly associated with a 0.6 percentage point increase in divorce probability in the next three years.”

A pre-analysis plan is the only way to take your p-value at face value

Andrew Gelman has a post from last week that discusses the value of preregistering studies, arguing that it is akin to the value of random sampling and RCTs: it allows you to make inferences without relying on untestable assumptions. His argument, which is nicely laid out in this paper, is that we don’t need to assume nefarious practices by study authors, such as specification searching or selective reporting, to worry about whether the p-value reported in the paper we’re reading can be taken at face value.
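To see the problem in miniature, here is a small simulation of the forking-paths logic (my own illustration, not code from Gelman's paper): a researcher with no ill intent tests a null treatment effect against ten candidate outcomes and writes up the most significant one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k, sims = 200, 10, 2000   # sample size, candidate outcomes, replications

false_positives = 0
for _ in range(sims):
    treat = rng.random(n) < 0.5
    outcomes = rng.normal(size=(n, k))      # no true effect on any outcome
    pvals = [stats.ttest_ind(outcomes[treat, j], outcomes[~treat, j]).pvalue
             for j in range(k)]
    false_positives += min(pvals) < 0.05    # report the "best" specification

print(false_positives / sims)   # roughly 0.40, far above the nominal 0.05
```

Fixing the single outcome (or the multiple-testing correction) in a pre-analysis plan, before the data arrive, is what restores the nominal error rate.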

Fact checking universal basic income: can we transfer our way out of poverty?

The New York Times published an article last week titled “The Future of Not Working.” In it, Annie Lowrey discusses the universal basic income experiments run in Kenya by GiveDirectly. No surprise there: you can look forward to more pieces in other popular outlets very soon, as soon as their reporters return from the same villages visited by the Times.