
Berk Ozler's blog

U.S. Law and Order Edition: Indoor prostitution and police body-worn cameras

Today, I cover two papers from opposite ends of the long publication spectrum – a paper forthcoming in the Review of Economic Studies on the effect of decriminalizing indoor prostitution on rape and sexually transmitted infections (STIs), and a working paper that came out a few days ago on the effect of police body-worn cameras on use of force and civilian complaints. While these papers are from the U.S., each of them has something to teach us about methods and policies in development economics. I devote space to each paper in proportion to the time it has been around…

Teacher training and parenting education in preschool

Lack of adequate preparation for primary school through pre-primary education is one of the key risk factors for poor performance in primary school (Behrman et al., 2006). Thus, a popular approach to improving children's outcomes is to increase enrollment in preschool programs and/or to improve the quality of existing programs. Children in low-resource settings are less likely to attend school, and they are less likely to learn once they are in the school setting – partly because they are unprepared for school when they get there.

Dealing with attrition in field experiments


Here is a familiar scenario for those running field experiments: you're conducting a study with a treatment and a comparison arm, measuring your main outcomes with surveys and/or biomarker data collection, meaning that you need to contact the subjects (unlike, say, using administrative data tied to their national identity numbers) – preferably in person. You know that you will, inevitably, lose some subjects from both groups to follow-up: they may have moved, be temporarily away, refuse to answer, or have died. In some of these cases there is nothing more you can do, but in others you can try harder: you can wait for them to come back and revisit; you can try to track them to their new location; and so on. You can do this at different intensities (try really hard or not so much), within different boundaries (for everyone in the study district, region, or country, but not for those farther away), and for different samples (everyone, or a random sub-sample).
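For the random sub-sample case, here is a minimal sketch of the standard inverse-probability reweighting that follows an intensive tracking phase. Column names are hypothetical; the logic assumes the tracking sub-sample was drawn at random from first-pass non-respondents:

```python
# Sketch: reweighting after intensively tracking a random sub-sample of
# first-pass non-respondents. Hypothetical column names.
import pandas as pd

def attrition_weights(df: pd.DataFrame, tracked_share: float) -> pd.DataFrame:
    """df has one row per subject, with boolean columns:
         'found_first_pass'   - interviewed on the first pass
         'in_tracking_sample' - randomly selected for intensive tracking
         'found_tracking'     - interviewed during the tracking phase
       tracked_share: fraction of first-pass non-respondents randomly
         selected for intensive tracking (e.g. 0.25)."""
    df = df.copy()
    df["weight"] = 0.0
    # First-pass respondents represent only themselves.
    df.loc[df["found_first_pass"], "weight"] = 1.0
    # Each mover found during tracking stands in for 1/tracked_share
    # non-respondents, because the tracking sub-sample was random.
    found_late = (~df["found_first_pass"]
                  & df["in_tracking_sample"] & df["found_tracking"])
    df.loc[found_late, "weight"] = 1.0 / tracked_share
    return df[df["weight"] > 0]
```

A weighted treatment-control comparison on the returned rows then restores the found movers to their population share.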

Question: suppose you decide that you have the budget to do everything you can to find those not interviewed during the first pass through the study areas (it doesn't matter whether the budget covers a randomly chosen sub-sample or everyone), i.e. an intense tracking exercise to reduce the rate of attrition. In addition to everything else you can do to track subjects in both groups, you have a tool that is available only for the treatment arm: say, your treatment was group-based therapy for teen mums, and you think that the mentors for these groups may have key contact information for treatment-group subjects who moved. There were no placebo groups in control, i.e. no counterpart mentors. Do you use this source to track subjects, even though it is only available for the treatment group?
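To see what is at stake in that choice, here is a toy simulation (numbers invented, not from any study) of how a tracking source available only in the treatment arm can tilt the found sample and bias the naive comparison:

```python
# Sketch: differential tracking success by arm biases the estimated ITT.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, n)
outcome = 1.0 * treat + rng.normal(size=n)   # true ITT effect = 1.0
moved = rng.random(n) < 0.2                  # 20% of subjects move away
outcome[moved] -= 0.5                        # movers differ systematically

# Everyone who stayed is found; mentors recover half of the
# treatment-arm movers, but none of the control-arm movers.
found = ~moved | ((treat == 1) & (rng.random(n) < 0.5))

# The control mean omits all movers while the treatment mean includes
# some, so the comparison is no longer apples-to-apples.
naive = (outcome[found & (treat == 1)].mean()
         - outcome[found & (treat == 0)].mean())
print(f"ITT in the found sample: {naive:.2f} (truth: 1.00)")
```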

Sometimes (increasingly often), estimating only the ITT is not enough in an RCT


"In summary, the similarities between follow-up studies with and without baseline randomization are becoming increasingly apparent as more randomized trials study the effects of sustained interventions over long periods in real world settings. What started as a randomized trial may effectively become an observational study that requires analyses that complement, but go beyond, intention-to-treat analyses. A key obstacle in the adoption of these complementary methods is a widespread reluctance to accept that overcoming the limitations of intention-to-treat analyses necessitates untestable assumptions. Embracing these more sophisticated analyses will require a new framework for both the design and conduct of randomized trials."

False positives in sensitive survey questions?


This is a follow-up to my earlier blog on list experiments for sensitive questions, which, thanks to our readers, generated many responses via the comments section and emails: more reading for me – yay! More recently, my colleague Julian Jamison, who is also interested in the topic, sent me three recent papers that I had not been aware of. This short post discusses those papers and serves as a coda to the earlier post…

Randomized response techniques (RRT) are used to elicit more valid data than direct questioning (DQ) on sensitive questions, such as corruption, sexual behavior, etc. Using a randomization device, such as dice, they introduce noise into the respondent's answer, concealing her answer to the sensitive question while still allowing the researcher to estimate the overall prevalence of the behavior in question. These techniques are attractive in principle, but, as we have found while trying to implement them in recent field work, the implementation details and the cognitive burden on respondents are worrying: in practice, it is not clear that they provide enough of an advantage to warrant use over DQ.
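As a concrete illustration, here is a minimal sketch of the forced-response variant of RRT; the design probabilities are illustrative, not from our surveys:

```python
# Sketch: forced-response RRT with a six-sided die.
# Roll 1 -> respondent must say "yes"; roll 6 -> must say "no";
# rolls 2-5 -> answer truthfully. No individual answer is revealed,
# but aggregate prevalence is identified.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
true_prevalence = 0.15
truth = rng.random(n) < true_prevalence

die = rng.integers(1, 7, n)
answer = np.where(die == 1, True, np.where(die == 6, False, truth))

# P(yes) = 1/6 + (4/6) * prevalence, so invert:
p_yes = answer.mean()
estimate = (p_yes - 1 / 6) / (4 / 6)
print(f"Estimated prevalence: {estimate:.3f} (truth: {true_prevalence})")
```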

The Puzzle with LARCs


Suppose that you’re at your doctor’s office, discussing an important health issue that may become a concern in the near future. There are multiple drugs available in the market that you can use to prevent unwanted outcomes. Some of them are so effective that there is practically no chance you will have a negative event if you start taking them. Effectiveness of the other options range from 94% to much lower, with the most commonly used drug failing about 10% of the time for the typical user. Somehow, you go home with the drug that has a one in 10 failure rate: worse, you’re not alone; most people end up in the same boat…

List Experiments for Sensitive Questions – a Methods Bleg


About a year ago, I wrote a blog post on issues surrounding data collection and measurement. In it, I talked about "list experiments" for sensitive questions, which I was not sold on at the time. However, now that I have a number of studies at different stages of data collection, many of them on sensitive topics in adolescent female target populations, I am paying closer attention to them. In reading and thinking about the topic and how to implement these methods in our surveys, I came up with a number of questions about their optimal implementation. There is probably also more to be learned about how to improve the methods themselves, opening up the possibility of experimenting with them when we can. Below are the things I am thinking about and, as we still have some time before our data collection tools are finalized, you, our readers, have a chance to help shape them with your comments and feedback.
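For readers new to the method, here is a minimal sketch of the basic item-count estimator underlying a list experiment (all numbers simulated):

```python
# Sketch: the control group sees J innocuous items; the treatment group
# sees the same J items plus the sensitive one. The difference in mean
# reported counts estimates the prevalence of the sensitive behavior.
import numpy as np

rng = np.random.default_rng(3)
n = 4_000
J, true_prevalence = 4, 0.20
assign = rng.integers(0, 2, n)            # 1 = long list (with sensitive item)
innocuous = rng.binomial(J, 0.5, n)       # "yes" count on the baseline items
sensitive = rng.random(n) < true_prevalence
count = innocuous + assign * sensitive    # reported item count

estimate = count[assign == 1].mean() - count[assign == 0].mean()
print(f"Estimated prevalence: {estimate:.3f} (truth: {true_prevalence})")
```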

The importance of study design (why did a CCT program have no effects on schooling or HIV?)


A recent paper in Lancet Global Health found that generous conditional cash transfers to female secondary school students had no effect on their school attendance, dropout rates, HIV incidence, or HSV-2 (herpes simplex virus – type 2) incidence. What happened?
