This is the fourth in our series of posts by students on the job market this year.
Research from numerous corners of psychology suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. For instance, it is often argued that people tend to hold rather favorable views of their abilities, both in absolute and relative terms. In spite of a recent and growing literature on the extent to which poor information can negatively affect educational choices (e.g. Hastings and Weinstein, 2008; Jensen, 2010; Dinkelman and Martinez, 2014), there is little systematic evidence establishing how inaccurate self-assessments distort schooling decisions.
This is the third in our series of posts by students on the job market this year.
Economists tend to believe that travel and trade costs reduce welfare. Trade papers like Irwin (2005), Redding & Sturm (2008), Storeygard (2014), and Etkes & Zimring (2014) draw on evidence from the United States, West Germany, sub-Saharan Africa, and the Gaza Strip to support this idea. One might reasonably expect, therefore, that the welfare of Palestinian commuters declined during the Second Palestinian Uprising (2000-2007), when the Israeli army deployed hundreds of roadblocks and checkpoints along the West Bank’s internal road network in order to defend Israeli civilian settlements. Although these obstacles were intended to deter and intercept militants, they had the unintended consequence of delaying Palestinian civilian travel between Palestinian towns, and from Palestinian towns to Israel (B’Tselem 2007; World Bank 2007). Two World Bank working papers (Cali & Miaari 2014; van der Weide et al. 2014) take advantage of this ‘natural experiment’ to study the effects of travel costs on commuters’ welfare, finding that economic outcomes of Palestinians declined in the face of obstacle deployment. My job market paper, however, finds a very different result: while obstacles reduced the welfare of laborers in some towns, laborers from other towns actually benefited from obstacles. The salient outcome of obstacle deployment was not welfare reduction, but rather welfare inequality.
The latest Journal of Economic Perspectives features several papers with a development angle:
- Jim Tybout on why the missing middle may exist after all (at least in some poor countries).
This is the second in our series of posts by students on the job market this year.
Relaxing supply-side constraints is not always sufficient to ensure delivery of public services to poor and remote communities. It may be necessary to stimulate demand by engaging local agents who can link the relevant parties. We thus see the use of intermediaries in a variety of sectors in development: for example, agricultural extension agents (Anderson 2004), loan officers for microfinance (Siwale 2011), and referral incentive programs – like that used by the British colonial army in Ghana (Fafchamps 2013). My job market paper studies the use of intermediaries in the maternal health sector in the Western Province of Kenya. I use an RCT to evaluate the efficacy of financial incentives for Traditional Birth Attendants (TBAs). The program pays TBAs to encourage pregnant women to attend antenatal care (ANC) visits at a local health facility. In this way, TBAs link pregnant women with health facilities – the TBAs’ rivals. This potential competition, which is absent from most intermediary relationships, is a noteworthy feature of this program, as it creates a nontrivial incentive problem for the TBA.
This is the first in our series of posts by students on the job market this year.
Impact evaluations are often used to justify policy, yet there is reason to suspect that the results of a particular intervention will vary across different contexts. The extent to which results vary has been a very contentious question (e.g. Deaton 2010; Bold et al. 2013; Pritchett and Sandefur 2014), and in my job market paper I address it using a large, unique data set of impact evaluation results.
I gathered these data through AidGrade, a non-profit research organization I founded in 2012 that collects data from academic studies in the process of conducting meta-analyses. Data from meta-analyses are the ideal data with which to answer the generalizability question, as they are designed to synthesize the literature on a topic, involving a lengthy search and screening process. The data set currently comprises 20 types of interventions, such as conditional cash transfers (CCTs) and deworming programs, gathered in the same way, double-coded and reconciled by a third coder. There are presently about 600 papers in the database, including both randomized controlled trials and studies using quasi-experimental methods, as well as both published and working papers. Last year, I wrote a blog post for Development Impact based on these data, discussing what isn't reported in impact evaluations.
- On the CGD blog, Lant Pritchett offers his 4-part smell test for whether your impact evaluation is asking a question that matters.
- On the Monkey Cage blog, Macartan Humphreys on how to make field experiments (in politics) more ethical – a very useful discussion, although his suggestion that one solution is for researchers to avoid doing the intervention themselves seems to me debatable: I think it deals with the paperwork concerns and deflects blame, but it creates a dichotomy between researchers who are subject to ethical constraints and others who are not.
- Jason Kerwin on a few highlights from the NEUDC conference.
I wanted to alert our readers to a new competition for ideas of how to best foster Small and Medium Enterprise (SME) growth. Typically with impact evaluation we end up evaluating a program that others have designed, or working with the occasional bank or NGO that is willing to try a new idea, but usually with firms that are very small in size. What is missing is a space where people with innovative ideas can get them into the hands of governments designing SME programs. I am working with the new Trade and Competitiveness Global Practice at the World Bank to try to do something new here, to give researchers and operational staff with ideas the chance to get them to a stage where they can become part of World Bank projects, and thereby have the potential to be implemented at much larger scale on lots of SMEs.
My summary of recent attempts to quantify the Hawthorne effect a few weeks back led to some useful exchanges with colleagues and commenters who pointed me to further work I hadn’t yet read. It turns out that, historically, there has been a great deal of inconsistent use of the term “Hawthorne effect”. The term has referred not only to (a) behavioral responses to a subject’s knowledge of being observed – the definition we tend to use in impact evaluation – but also to (b) behavioral responses to simple participation in a study, or even (c) a subject’s wish to alter behavior in order to please the experimenter. Of course all these definitions are loosely related, but it is important to be conceptually clear in our use of the term, since there are several distinct inferential challenges to impact evaluation arising from the messy nature of behavioral responses to research. The Hawthorne effect is only one of these possible challenges. Let me lay out a classification of different behavioral responses that, if and when they occur, may threaten the validity of any evaluation (with a strong emphasis on may).
- From the Stata blog: how to put the Stata user manuals on your iPad.
- Chris Blattman discusses the controversy surrounding a field experiment being done by political scientists in the Montana election – much of the controversy seems very odd to a development economist – especially the concern that political scientists might actually be doing research that could affect politics. Dan Drezner notes the irony: “political scientists appear to be damned if they do and damned if they don’t conduct experiments. In the absence of experimental methods, the standard criticism of political science is that it’s not really a science because of [INSERT YOUR PREJUDICE OF CHOICE AGAINST THE SOCIAL SCIENCES HERE]. The presence of experimental methods, however, threatens to send critics into new and altogether more manic forms of ‘POLITICAL SCIENTISTS ARE PLAYING GOD!!’ panic.”