
“Your Ex Knows Best” - The Value of Reference Letters: Guest Post by Martin Abel

This is the first in our series of posts by Ph.D. students on the job market this year.
One of the key challenges of markets is to assess the quality of goods. A look at online dating websites – a market where information asymmetries loom particularly large – shows the different ways in which people try to communicate that they are of “high quality”. A common strategy is to start your introduction with “My friends describe me as…” (to be followed by some glowing testimony: “…smart, athletic, high-achieving – yet humble”). Why might this strategy not be effective? It raises questions about whether these friends are truthful and whether they have all the relevant information about your quality as a partner. The really interesting question you never see answered is: “How would your ex-partner describe you?”

My job market paper “The Value of Reference Letters”, co-authored with Rulof Burger (SU) and Patrizio Piraino (UCT), is about the challenges hiring firms face in identifying high-quality applicants. While the literature on job referrals has largely focused on the role of friends and family members (Topa 2011; Beaman and Magruder 2012), we investigate whether information from ex-employers can facilitate the matching process. Specifically, we test the effect of a standardized reference letter that asks previous employers to rate workers on a range of hard skills (e.g. numeracy, literacy) and soft skills (e.g. reliability, teamwork).

Weekly links November 18: doing exploratory analysis well, not so hasty in evaluating, active labor market policies and more…

By David McKenzie
  • Andrew Gelman on how to think more seriously about the design of exploratory studies
  • Overcoming premature evaluation, discussed on the From Poverty to Power blog: “There is a growing interest in safe-fail experimentation, failing fast and rapid real time feedback loops…When it comes to complex settings there is a lot of merit in ‘crawling the design space’ and testing options, but I think there are also a number of concerns with this that should be getting more air time…it can simply take time for a program to generate positive tangible and measurable outcomes, and it may be that on some measures a program that may ultimately be successful dips below the ‘it’s working’ curve on its way to that success…more importantly it ignores some key aspects of the complex adaptive systems in which programs are embedded…if we are serious about going beyond saying ‘context matters’ then exhortations to ‘fail fast’ need to be more thoroughly debated.”

Lessons from a crowdsourcing failure

By Maria Jones

We are working on an evaluation of a large rural roads rehabilitation program in Rwanda that relies on high-frequency market information. We knew from the get-go that collecting this data would be a challenge: the markets are scattered across the country, and by design most are in remote rural areas with bad connectivity (hence the road rehab). The cost of sending enumerators to all markets in our study on a monthly basis seemed prohibitive.
Crowdsourcing seemed like an ideal solution. We met a technology firm at a conference in Berkeley, and we liked their pitch: use high-frequency, contributor-based, mobile data capture technology to flexibly measure changes in market access and structure. A simple app, a network of contributors spanning the country, and all the price data we would need on our sample of markets.
One year after contract signing and a lot of troubleshooting, fewer than half of the markets were being visited at the specified intervals (fortnightly), and even in these markets we had data on fewer than half of the products on our list. (Note: we knew all along this wasn't going well; we just kept at it.)

So what went wrong, and what did we learn?

The long run effects of job training

By Markus Goldstein
I am always on the lookout for impact evaluations that give us the long-term effects of interventions. I recently came across a paper by Pablo Ibarraran, Jochen Kluve, Laura Ripani and David Rosas Shady looking at the effects of a youth training program in the Dominican Republic. While we have some evidence on the long-term effects of these kinds of programs from developed countries, this is quite possibly the first from a developing-country context.

Blogging your job market paper? Some more tips

By David McKenzie
There is just over a week left until our deadline of Tuesday November 22 for our “blog your job market paper” series.  We have started receiving submissions, and so I thought I’d share a few more tips (in addition to those already posted) for those of you who are still planning to submit something.
  • Don’t write a big block of text with no breaks: whether it is subheadings, bullet points, numbered lists, or something else, use something to break the text up and make the post easier to read. Remember, readers might be reading on a mobile phone or skimming quickly to decide whether the post is worth their time, so two pages of solid text with nothing to break it up will not hold their attention.
  • Make sure to give magnitudes, not just significance: don’t just say “we found the program increased education for women”; tell us by how much and, where appropriate, give a benchmark that helps us judge whether the effect is big or small (see the short sketch after this list).
  • Hyperlink any references, and spell the authors’ names correctly.
  • Get quickly to what you did, and make clear what your methods are: general motivation for why your question is important is useful, but you should be able to make the case for why we should care in a paragraph or less – then we want to hear what you did and how you did it. Then give the key details: if you ran an experiment, make clear the sample sizes, unit of randomization, etc.; if you use difference-in-differences, explain why the parallel trends assumption seems reasonable and what checks you did; if you use an IV, discuss the exclusion restriction and why it seems plausible; and so on.
  • Look at previous years for examples: e.g. here is Sam Asher’s (whom we hired); here is Mounir Karadja’s explanation of using an IV; and here is Paolo Abarcar’s clear explanation of an experiment he did.
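To illustrate the tip on magnitudes, here is a minimal sketch in Python of turning a raw treatment effect into benchmarks a reader can interpret: percent of the control-group mean and standard-deviation units. The numbers are made-up placeholders, not taken from any of the papers mentioned above.

```python
# Hypothetical illustration: expressing a treatment effect as a share of the
# control mean and in standard-deviation units. All values are placeholders.
control_mean = 6.2       # e.g. mean years of schooling in the control group
control_sd = 2.8         # control-group standard deviation
treatment_effect = 0.45  # estimated program impact, in the same units

pct_of_control = 100 * treatment_effect / control_mean
effect_size_sd = treatment_effect / control_sd

print(f"Impact: {treatment_effect:.2f} years "
      f"({pct_of_control:.1f}% of the control mean, {effect_size_sd:.2f} SD)")
```

Reporting the effect both ways gives readers a quick, scale-free sense of whether the impact is large.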

Weekly links November 11: new research round-up, small sample experiments, refugee research, and more…

By David McKenzie

Cash transfers and health: It matters when you measure, and it matters how many health care workers are around to provide services

By David Evans

This post was co-authored with Katrina Kosec of IFPRI.

A whirlwind, surely incomplete tour of cash transfer impacts on health
Your run-of-the-mill conditional cash transfer (CCT) program has significant impacts on health-seeking behavior. Specifically, there are conditions (or co-responsibilities, if you prefer) that children attend school and/or that they get vaccinated or have some wellness visits. While the school enrollment effects are well established, the effects on both health-seeking behavior and health outcomes have been much more mixed. CCTs have led to better child nutritional status and improved child cognitive development in Nicaragua, better nutritional outcomes for a subset of children in Colombia, and no impacts on child health in studies from Brazil and Honduras. CCTs conditioned only on school enrollment did not lower HIV infections among adolescent girls in South Africa, and in Indonesia CCTs increased health visits but this did not translate into measurably improved health. Unconditional cash transfer programs have also had mixed results on health: better mental health and food consumption in Kenya, better anthropometric outcomes for girls (but not boys) in South Africa, no average impacts on child outcomes in Ecuador (although there were some for the poorest quarter), and no average impacts on maternal health care utilization in Zambia (although there were effects for women with better access to such services).

Lessons from some of my evaluation failures: Part 2 of ?

By David McKenzie

I recently shared five failures from some of my impact evaluations. Since that post only scratched the surface of the many ways my attempts at impact evaluation have failed, I thought I’d share a second batch now.

Case 4: Working with a private bank in Uganda to offer business training to its clients, written up as a note here.

If you want your study included in a systematic review, this is what you should report

By David Evans


This post is co-authored with Birte Snilstveit of 3ie.
 
Impact evaluation evidence continues to accumulate, and policy makers need to understand the range of evidence, not just individual studies. Across all sectors of international development, systematic reviews and meta-analysis (the statistical analysis used in many systematic reviews) are increasingly used to synthesize the evidence on the effects of programmes. These reviews aim to identify all available impact evaluations on a particular topic, critically appraise studies, extract detailed data on interventions, contexts, and results, and then synthesize these data to identify generalizable and context-specific findings about the effects of interventions. (We’ve both worked on this, see here and here.)
 
But as anyone who has ever attempted to do a systematic review will know, getting key information from included studies can often be like looking for a needle in a haystack. Sometimes this is because the information is simply not provided; other times it is because of unclear reporting. As a result, researchers spend a long time trying to get the necessary data, often contacting authors to request more details, and the authors themselves frequently have trouble tracking down some additional statistic from a study they wrote years ago. In some cases, study results simply cannot be included in reviews because of a lack of information.
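To make concrete why clearly reported effect sizes, standard errors, and sample sizes matter, here is a minimal Python sketch of the inverse-variance pooling behind a simple fixed-effect meta-analysis. The numbers are made-up placeholders; in a real review they would be extracted from each included study's reported results, and a reviewer simply cannot run this calculation if those statistics are missing or unclear.

```python
import numpy as np

# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors are placeholder values for illustration.
effects = np.array([0.12, 0.30, 0.05])  # standardized effect size per study
ses = np.array([0.08, 0.15, 0.06])      # standard error per study

weights = 1.0 / ses ** 2                # precision (inverse-variance) weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```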
