
Six Questions with Chris Udry

This is the first in a potential new series of short interview posts with development economists. Chris Udry was one of the pioneers of doing detailed fieldwork in development as a grad student, and has continued to be one of the most respected leaders in the profession. While at Yale he taught David and advised both David and Markus, and he is famous for the amount of time he puts into his grad students. Most recently he has moved from Yale to Northwestern. We thought this might be a good time for him to reflect on his approach to teaching and advising, and to share his thoughts on some of the emerging issues and trends in development economics.
  1. Let’s start with your approach to teaching development economics at the graduate level. The class you taught David in 1999 was heavy on the agricultural household model and on understanding micro development through different types of market failures. Most classes involved in-depth discussion of one or at most two papers, with a student assigned most weeks to lead this discussion. There was a lot of discussion of the empirical methods in different papers, but no replication tasks, and the only empirical work was as part of a term paper. How has your approach to teaching development changed (or not) since then?

Try as I might, I have made little progress on changing my basic approach to teaching. The papers and topics have changed, but the essence of my graduate teaching remains the in-depth discussion of a paper or two each class. I’ve tried to expand the use of problem sets, and for a number of years assigned replications. The former was hindered by my own inadequate energy (it’s hard making up decent questions!). The replication exercises required too much time and effort in data cleaning by students relative to their learning gain: students were spending too much time cleaning, merging and recreating variables, and too little time thinking about the ideas in the paper. I’ll reassess assigning replication this year, because there may now be enough well-documented replication datasets and programs available. With these as a starting point, it would be possible to get quickly into substantive issues in the context of a replication.

Weekly links September 15: the definitive what we know on Progresa, ethics of cash, a new approach to teaching economics, and more…

  • In the latest JEL, Parker and Todd survey the literature on Progresa/Oportunidades: some bits of interest to me included:
    • CCTs have now been used in 60+ countries;
    • over 100 papers have been published using the Progresa/Oportunidades data, with at least 787 hypotheses tested – multiple testing corrections don’t change the conclusions that the program had health and education effects, but do cast doubt on papers claiming impacts on gender issues and demographic outcomes (a toy illustration of such corrections is sketched after this list);
    • Footnote 16 notes that at the individual level, there are significant differences in 32% of the 187 characteristics on which baseline balance is tested, with the authors arguing that this is because the large sample size leads to a tendency to reject the null at conventional levels – a point that seems inconsistent with using the same significance levels for measuring treatment effects;
    • Two decades later, we still don’t know whether Progresa led to more learning, just more years in school;
    • One of the few negative impacts is an increase in deforestation in communities that received the CCT.
  • Dave Evans asks whether it matters which co-author submits a paper, and summarizes responses from several editors; he also gives a short summary of a panel on how to effectively communicate results to policymakers.
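
To make the multiple-testing point above concrete, here is a minimal sketch of how standard corrections rescale the evidence bar when 787 hypotheses are in play. The p-values below are invented for illustration, and this is my own toy example, not the procedure Parker and Todd actually apply:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

m = 787        # hypotheses tested on the Progresa/Oportunidades data
alpha = 0.05

# Bonferroni shrinks the per-test threshold from 0.05 to ~6.4e-05
print(f"Bonferroni per-test threshold: {alpha / m:.1e}")

# Hypothetical p-values: three strong results plus noise under the null
rng = np.random.default_rng(0)
pvals = np.concatenate([[1e-6, 1e-4, 0.003], rng.uniform(size=m - 3)])

for method in ["bonferroni", "fdr_bh"]:
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method=method)
    print(f"{method}: rejects {reject.sum()} of {m} hypotheses")
```

The point of the exercise: results that clear conventional thresholds comfortably survive correction, while borderline ones (like the 0.003 under Bonferroni here) do not – consistent with the survey’s finding that the headline health and education effects hold up while some secondary results do not.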

Weekly links September 8: career advice, measuring empowerment, is anyone reading, lumpy cash, and more…


Is it possible to re-interview participants in a survey conducted by someone else?


I recently received an email from a researcher who was interested in trying to re-interview participants in one of my experiments, to test several theories about whether that intervention had impacts on political participation and other political outcomes. I get these requests infrequently, but this is by no means the first. Another example in the last year was someone who had done in-depth qualitative interviews with participants in a different experiment of mine, and then wanted to be able to link their responses on my surveys to their responses on his. I imagine I am not alone in getting such requests, and I don’t think there is a one-size-fits-all answer to when this can be possible, so I thought I would set out some thoughts about the issues here, and see if others can share their thoughts and experiences.

Confidentiality and Informed Consent: typically when participants are invited to respond to a survey or participate in a study, they are told i) that the purpose of the survey is X, and that it will perhaps involve a baseline survey and several follow-ups; and ii) that all responses they provide will be kept confidential and used for research purposes only. These factors make it hard to then hand over identifying information about respondents to another researcher.
However, I think this can be addressed via the following system:

Monthly links for August: What did you miss while we were on summer break?


Weekly links July 28: overpaid teachers? Should we use p=0.005? beyond mean impacts, facilitating investment in Ethiopia, and more…

  • Well-known blog skeptic Jishnu Das continues to blog at Future Development, arguing that higher wages will not lead to better quality or more effective teachers in many developing countries – summarizing evidence from several countries that i) doubling teacher wages had no impact on performance; ii) temporary teachers paid less than permanent teachers do just as well; and iii) observed teacher characteristics explain little of the differences in teacher effectiveness.
  • Are we all now doomed never to find significance? In a paper in Nature Human Behaviour, a multi-disciplinary group of 72 authors (including economists Colin Camerer, Ernst Fehr, Guido Imbens, David Laibson, John List and Jon Zinman) argues for redefining statistical significance for the discovery of new effects, moving the cutoff from 0.05 to 0.005. They suggest that results with p-values between 0.005 and 0.05 be described as “suggestive”. They claim that for a wide range of statistical tests, this would require an increase in sample size of around 70%, but would of course reduce the incidence of false positives. Playing around with power calculations, it seems that studies powered at 80% for an alpha of 0.05 have about 50% power for an alpha of 0.005, and the new cutoff implies using a 2.81 t-stat threshold instead of 1.96 (a quick back-of-the-envelope check is sketched below). Then of course if you want to further adjust for multiple hypothesis testing…
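
Here is that back-of-the-envelope check, as a minimal sketch. It assumes a two-sided z-test throughout, which is my simplification rather than anything in the paper:

```python
from scipy.stats import norm

# Effect size (in standard-error units) that a study powered at 80%
# for a two-sided test at alpha = 0.05 is designed to detect
delta = norm.ppf(1 - 0.05 / 2) + norm.ppf(0.80)        # ~1.96 + 0.84 = 2.80

# Critical value and power of that same study at alpha = 0.005
crit = norm.ppf(1 - 0.005 / 2)                         # ~2.81
power_005 = norm.cdf(delta - crit)                     # ~0.50

# Sample size inflation needed to restore 80% power at alpha = 0.005:
# required n scales with (z_crit + z_power)^2
inflation = ((crit + norm.ppf(0.80)) / delta) ** 2     # ~1.70

print(f"critical t-stat at alpha = 0.005: {crit:.2f}")
print(f"power of an 80%-powered study at alpha = 0.005: {power_005:.2f}")
print(f"sample size inflation to keep 80% power: {inflation:.2f}")
```

The three outputs line up with the numbers above: a 2.81 cutoff, roughly 50% power, and roughly a 70% larger sample.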

A new answer to why developing country firms are so small, and how cellphones solve this problem

Much of my research over the past decade or so has tried to help answer the question of why there are so many small firms in developing countries that don’t ever grow to the point of adding many workers. We’ve tried giving firms grants, loans, business training, formalization assistance, and wage subsidies, and found that, while these can increase sales and profits, none of them get many firms to grow.

Weekly links July 21: a 1930s RCT revisited, brain development in poor infants, Indonesian status cards, and more…


What does a game-theoretic model with belief-dependent preferences teach us about how to randomize?


The June 2017 issue of the Economic Journal has a paper entitled “Assignment procedure biases in randomized policy experiments” (ungated version). The abstract summarizes the claim of the paper:
“We analyse theoretically encouragement and resentful demoralisation in RCTs and show that these might be rooted in the same behavioural trait – people’s propensity to act reciprocally. When people are motivated by reciprocity, the choice of assignment procedure influences the RCTs’ findings. We show that even credible and explicit randomisation procedures do not guarantee an unbiased prediction of the impact of policy interventions; however, they minimise any bias relative to other less transparent assignment procedures.”

Of particular interest to our readers might be the conclusion: “Finally, we have shown that the assignment procedure bias is minimised by public randomisation. If possible, public lotteries should be used to allocate subjects into the two groups.”

Given this recommendation, I thought it worth discussing how they get to this conclusion, and whether I agree that public randomization will minimize such bias.
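
As a concrete aside, a public and verifiable lottery of the sort they recommend can be implemented with a pre-announced seed, so that anyone can re-run the draw and confirm the assignment. A minimal sketch, as my own illustration rather than the paper’s procedure:

```python
import random

def public_lottery(subject_ids, seed, treat_share=0.5):
    """Assign subjects to treatment/control using a publicly
    pre-announced seed, so the draw is reproducible by anyone."""
    rng = random.Random(seed)
    ordered = sorted(subject_ids)   # fix the ordering before shuffling
    rng.shuffle(ordered)
    n_treat = int(len(ordered) * treat_share)
    treated = set(ordered[:n_treat])
    return {sid: ("treatment" if sid in treated else "control")
            for sid in subject_ids}

# Example: a seed announced in advance (say, at a public meeting)
# makes the assignment verifiable after the fact
assignment = public_lottery([f"id_{i:03d}" for i in range(10)], seed=20170601)
print(assignment)
```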

Weekly links July 7: Making Jakarta Traffic Worse, Patient Kids and Hungry Judges, Competing for Brides by Pushing up Home Prices, and More…

  • In this week’s Science, Rema Hanna, Gabriel Kreindler, and Ben Olken look at what happened when Jakarta abruptly ended HOV rules – showing how traffic got worse for everyone. It is a nice example of using Google traffic data – MIT News has a summary and discussion of how the research took place: “The key thing we did is to start collecting traffic data immediately,” Hanna explains. “Within 48 hours of the policy announcement, we were regularly having our computers check Google Maps every 10 minutes to check current traffic speeds on several roads in Jakarta. ... By starting so quickly we were able to capture real-time traffic conditions while the HOV policy was still in effect. We then compared the changes in traffic before and after the policy change.” All told, the impact of changing the HOV policy was highly significant: after the HOV policy was abandoned, the average speed of Jakarta’s rush hour traffic declined from about 17 to 12 miles per hour in the mornings, and from about 13 to 7 miles per hour in the evenings. (A sketch of what such Google Maps polling might look like follows this list.)
  • From NPR’s Goats and Soda: 4-year-old kids of Cameroonian subsistence farmers take the marshmallow test, as do German kids – who do you think did better?
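
For readers curious what that kind of rapid data collection might involve, here is a minimal sketch of polling the Google Maps Distance Matrix API for current travel times on a fixed route every 10 minutes. The endpoint is real, but the route, key, and logging choices are my own illustrative assumptions, not the authors’ actual pipeline:

```python
import csv
import time
from datetime import datetime

import requests

# Illustrative route and key -- placeholders, not the study's actual setup
API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"
ORIGIN = "-6.175,106.827"        # a point in central Jakarta (example)
DESTINATION = "-6.225,106.800"   # another point (example)
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

with open("travel_times.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        resp = requests.get(URL, params={
            "origins": ORIGIN,
            "destinations": DESTINATION,
            "departure_time": "now",   # asks for duration_in_traffic
            "key": API_KEY,
        })
        element = resp.json()["rows"][0]["elements"][0]
        writer.writerow([
            datetime.utcnow().isoformat(),
            element["distance"]["value"],             # meters
            element["duration_in_traffic"]["value"],  # seconds, with traffic
        ])
        f.flush()
        time.sleep(600)  # poll every 10 minutes, as in the study
```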
