
Berk Ozler's blog

Your go-to regression specification is biased: here’s the simple way to fix it


Today, I am writing about something many of you already know. You’ve probably been hearing about it for 5-10 years, but you still ignore it. Well, now that the evidence against it has mounted and the fix is simple enough, I am here to urge you to tweak the regression specifications in your program evaluations.

I just signed my first referee report


I once received a referee report for a journal submission that said, “In fact, in my view its contribution to science is negative…” The report continued with comments about how the paper lacked “proper and sound scientific inquiry” and was “…unsuitable for publication pretty much anywhere, I think.” Just in case the four-page assault was not sufficient, the report ended by encouraging the authors to “…move onto the next project.” It was hard to avoid the feeling that the referee was suggesting a career change for us rather than simply giving up on this paper… The paper was subsequently published in the Journal of Health Economics, but the bad taste of receiving that report lingered long afterwards…

12 of our favorite development papers of the year


Development Impact will now be on break over the next couple of weeks for the holidays, resuming in early January after the AEA annual meetings. Inspired by some of the interesting lists of favorite papers of the year (e.g. Noah Smith, Matt Notowidigdo) we thought we’d each offer three of our favorite development economics papers for the year...

The Economics and Law of Sexual Harassment in the Workplace


This week, I leave you with this short 2003 paper in the Journal of Economic Perspectives by Kaushik Basu. It follows somewhat from my last post, is related to the day's news, and is relevant for thinking about principles for intervention in labor markets for a host of issues that our colleagues deal with in developing and developed economies... Here is the abstract - but you can read the paper in 30 minutes...

U.S. Law and Order Edition: Indoor prostitution and police body-worn cameras

Today, I cover two papers from two ends of the long publication spectrum – a paper forthcoming in the Review of Economic Studies on the effect of decriminalizing indoor prostitution on rape and sexually transmitted infections (STIs), and a working paper that came out a few days ago on the effect of police body-worn cameras on use of force and civilian complaints. While both papers are from the U.S., each has something to teach us about methods and policies in development economics. I devote space to each paper in proportion to the time it has been around…

Teacher training and parenting education in preschool

Lack of adequate preparation for primary school through pre-primary education is one of the key risk factors for poor performance in primary school (Behrman et al., 2006).* Thus, a popular approach to improving children's outcomes is to increase enrollment in preschool programs and/or to improve the quality of existing programs. Children in low-resource settings are less likely to attend school, and they are less likely to learn when they are there – partly because they are unprepared for school when they get there.

Dealing with attrition in field experiments


Here is a familiar scenario for those running field experiments: You’re conducting a study with a treatment and a comparison arm and measuring your main outcomes with surveys and/or biomarker data collection, meaning that you need to contact the subjects (unlike, say, using administrative data tied to their national identity numbers) – preferably in person. You know that you will, inevitably, lose some subjects from both groups to follow-up: they will have moved, be temporarily away, refuse to answer, have died, etc. In some of these cases there is nothing more you can do, but in others you can try harder: you can wait for them to come back and revisit; you can try to track them to their new location; etc. You can do this at different intensities (try really hard or not so much), with different boundaries (track everyone within the study district, region, or country, but not those farther away), and with different samples (everyone or a random sub-sample).

Question: suppose that you decide that you have the budget to do everything you can to find those not interviewed during the first pass through the study areas (it doesn’t matter whether the budget covers a randomly chosen sub-sample or everyone), i.e. an intense tracking exercise to reduce the rate of attrition. In addition to everything else you can do to track subjects from both groups, you have a tool that is available only for the treatment arm (say, your treatment was group-based therapy for teen mums, and you think that the mentors for these groups may have key contact information for treated subjects who moved; there were no placebo groups in control, i.e. no counterpart mentors). Do you use this source to track subjects – even though it is only available for the treatment group?
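For readers who want to see the random sub-sample version of intense tracking in action, here is a minimal simulated sketch (all numbers are made up for illustration, not taken from any study): track a random half of the first-pass attriters, and weight the subjects found that way so they stand in for all attriters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)           # random assignment
y = 1.0 * treat + rng.normal(size=n)    # true ITT effect of 1.0

# Selective first-pass attrition: worse-off subjects and control
# subjects are (by assumption) harder to find on the first pass.
p_missed = 0.15 + 0.10 * (y < 0) + 0.05 * (treat == 0)
found_first = rng.random(n) > p_missed

# Intense tracking: pursue a random half of the attriters and
# (optimistically, for simplicity) find all of them.
attrited = ~found_first
tracked = attrited & (rng.random(n) < 0.5)

# Tracked attriters represent all attriters, so they get weight
# 1 / 0.5 = 2; first-pass respondents get weight 1; the rest, 0.
w = np.where(found_first, 1.0, np.where(tracked, 2.0, 0.0))

# Weighted difference in means = attrition-corrected ITT estimate.
t1, t0 = (w > 0) & (treat == 1), (w > 0) & (treat == 0)
corrected = np.average(y[t1], weights=w[t1]) - np.average(y[t0], weights=w[t0])
print(corrected)  # close to the true effect of 1.0
```

The key design choice is that the intensively tracked group is a *random* sub-sample of attriters, which is what justifies the inverse-sampling weight of 2.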

Sometimes (increasingly often), estimating only the ITT is not enough in an RCT


"In summary, the similarities between follow-up studies with and without baseline randomization are becoming increasingly apparent as more randomized trials study the effects of sustained interventions over long periods in real world settings. What started as a randomized trial may effectively become an observational study that requires analyses that complement, but go beyond, intention-to-treat analyses. A key obstacle in the adoption of these complementary methods is a widespread reluctance to accept that overcoming the limitations of intention-to-treat analyses necessitates untestable assumptions. Embracing these more sophisticated analyses will require a new framework for both the design and conduct of randomized trials."
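To make one of those complementary methods concrete – this is just one common illustration, not necessarily the approach the quoted authors have in mind – here is a minimal simulated sketch of the Wald / instrumental-variables estimator, which rescales the ITT by the compliance rate to estimate the effect among compliers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.integers(0, 2, n)           # random assignment (the instrument)
complier = rng.random(n) < 0.6      # 60% take up treatment if assigned
d = z * complier                    # one-sided noncompliance: takeup
y = 2.0 * d + rng.normal(size=n)    # effect of 2.0 for those actually treated

itt = y[z == 1].mean() - y[z == 0].mean()          # diluted toward 2.0 * 0.6 = 1.2
first_stage = d[z == 1].mean() - d[z == 0].mean()  # compliance rate, ~0.6
late = itt / first_stage                           # recovers ~2.0 for compliers
```

Note the quote's warning still applies: with two-sided noncompliance or sustained interventions, this ratio identifies an effect only under untestable assumptions (e.g. no defiers, exclusion restriction).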

False positives in sensitive survey questions?


This is a follow-up to my earlier blog post on list experiments for sensitive questions, which, thanks to our readers, generated many responses via the comments section and emails: more reading for me – yay! More recently, my colleague Julian Jamison, who is also interested in the topic, sent me three recent papers that I had not been aware of. This short post discusses those papers and serves as a coda to the earlier post…

Randomized response techniques (RRT) are used to elicit more valid data than direct questioning (DQ) when it comes to sensitive topics, such as corruption, sexual behavior, etc. Using some randomization device, such as dice, these techniques introduce noise into the respondent’s answer, concealing her response to the sensitive question while still allowing the researcher to estimate the overall prevalence of the behavior in question. They are attractive in principle but, as we have been trying to implement them in field work recently, one worries about implementation details and the cognitive burden on respondents: in practice, it’s not clear that they provide an advantage large enough to warrant using them over DQ.
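To fix ideas, here is a minimal simulation of one common RRT design – the forced-response method with a die (all numbers are illustrative). Because the researcher knows the probabilities of the forced answers, the observed yes-rate can be inverted to recover prevalence without ever knowing any individual's true answer.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
true_pi = 0.20                  # true prevalence (unknown in the field)

# Each respondent rolls a die in private:
#   1 -> say "yes" regardless; 2 -> say "no" regardless;
#   3-6 -> answer the sensitive question truthfully.
die = rng.integers(1, 7, n)
truth = rng.random(n) < true_pi
answer = np.where(die == 1, True, np.where(die == 2, False, truth))

# Observed yes-rate = P(forced yes) + P(truthful) * prevalence,
# so invert that identity to estimate prevalence.
p_forced_yes, p_truthful = 1 / 6, 4 / 6
pi_hat = (answer.mean() - p_forced_yes) / p_truthful
print(pi_hat)  # close to 0.20
```

The concealment comes at a price that speaks to the concerns above: the noise inflates the variance of the estimate relative to DQ, and the method only works if respondents understand and follow the die-roll protocol.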

The Puzzle with LARCs


Suppose that you’re at your doctor’s office, discussing an important health issue that may become a concern in the near future. There are multiple drugs available in the market that you can use to prevent unwanted outcomes. Some of them are so effective that there is practically no chance you will have a negative event if you start taking them. Effectiveness of the other options ranges from 94% to much lower, with the most commonly used drug failing about 10% of the time for the typical user. Somehow, you go home with the drug that has a one-in-10 failure rate. Worse, you’re not alone; most people end up in the same boat…
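The gap is even starker once you let it compound. A back-of-the-envelope calculation, assuming (simplistically) an independent failure risk each year – the near-zero rate for the most effective option is illustrative, not a figure from the post:

```python
def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one failure over `years` years,
    assuming independent risk each year."""
    return 1 - (1 - annual_rate) ** years

pill_5yr = cumulative_failure(0.10, 5)    # ~10% typical-use annual failure -> ~0.41
larc_5yr = cumulative_failure(0.001, 5)   # illustrative near-zero rate -> ~0.005
```

Under these assumptions, the typical user of the one-in-10 drug faces roughly a 40% chance of at least one failure over five years, versus well under 1% with the most effective options.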