
Sudoku quilts and job matches: An experiment on networks and job referrals


One of the frustrations facing job seekers worldwide, but especially in many developing countries, is how much finding a job depends on who you know rather than what you know. For example, in work I’ve done with small enterprises in Sri Lanka, fewer than 2 percent of employers openly advertised the last position they hired for – the most common ways of finding a worker were to ask friends, neighbors, or family members for suggestions. Clearly networks matter for finding jobs. But networks do a whole lot of things apart from helping members find jobs, so it is difficult to know exactly what role they play in job referrals.

An experiment by Lori Beaman and Jeremy Magruder aims to test experimentally the extent to which networks can act as a screening mechanism for firms – it’s a paper that has been on my “to read” list for a while, and the long flight to New Zealand was a perfect opportunity to finally read it. They combine a lab and a field experiment, which operated as follows:

The first step was to recruit 562 males (the study doesn’t say why women weren’t included) from a peri-urban area around Kolkata, India to take part in a laboratory exercise, in which they were asked to carry out a cognitive task that was essentially “Sudoku quilts” – designing quilts by arranging colored swatches according to logical rules (e.g. a 4x4 design in which each of 4 colors was allowed to appear only once in each row and column). The researchers recorded how long this took and whether it was done correctly.
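
To make the quilt rule concrete, here is a minimal sketch in Python (my own illustrative helper, not anything from the paper) of the constraint participants had to satisfy: each of the 4 colors must appear exactly once in every row and every column of the 4x4 design.

```python
# Hypothetical checker (not from the paper) for the 4x4 "Sudoku quilt" rule:
# each of the 4 colors must appear exactly once in every row and every column.

def is_valid_quilt(quilt, colors=("red", "blue", "green", "yellow")):
    """Return True if every row and every column contains each color exactly once."""
    target = sorted(colors)
    rows_ok = all(sorted(row) == target for row in quilt)
    cols_ok = all(sorted(col) == target for col in zip(*quilt))
    return len(quilt) == len(colors) and rows_ok and cols_ok

# A valid design: each row is a cyclic shift of the one above it.
quilt = [
    ["red",    "blue",   "green",  "yellow"],
    ["blue",   "green",  "yellow", "red"],
    ["green",  "yellow", "red",    "blue"],
    ["yellow", "red",    "blue",   "green"],
]
print(is_valid_quilt(quilt))  # True

# Swapping two swatches in one row breaks the column constraint.
quilt[0][0], quilt[0][1] = quilt[0][1], quilt[0][0]
print(is_valid_quilt(quilt))  # False
```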

In the second step, these individuals were randomized into 5 treatment groups, with each person invited to return on a different day with a male friend or family member who they thought would be good at the task. The treatments varied in terms of:

- Whether they were offered a fixed payment for referring someone or performance pay – a payment that would depend on how well the person recruited did on the task

- Whether the stakes were high or low (that is, how much money they would get for the fixed portion and for the performance portion).

What do they find?

They find some nicely intuitive results:

1) When given a high-stakes performance payment (i.e. a payment that would vary between 0 and 50 rupees), individuals were more likely to refer co-workers and less likely to refer relatives than in the fixed-payment treatments. The effects were sizeable – cutting in half the number of relatives recruited.

2) More able people were also more likely to refer people who would perform well on the task when given performance pay, whereas low-ability people weren’t able to predict how well the person they referred would perform, and didn’t refer more able people.

3) When offered performance pay, high-ability people were able to identify people who would perform well on the task in a way that couldn’t easily be mimicked by observing Raven’s test scores, digit span, and a few basic observable characteristics.

Issues

The paper is also a useful read for considering a couple of methodological issues that apply more broadly:

1) Two-stage offers and deception: the authors are justly concerned that, although the performance pay goes to the recruiter and not the recruit, side deals might be made that would also give the new recruit an incentive to try extra hard (a concern, since differential performance would then reflect not just differential screening, but also differential effort by the person recruited). So once members of the original sample turn up with their recruits, they are told something like “we know we said we’d pay you 0 to 50 rupees depending on how well your recruit performs, but good news, we will now just pay you 50 regardless of how they perform”.

This is an increasingly popular strategy (e.g. used by Karlan and Zinman in credit screening, and by Cohen and Dupas in their malaria prevention work). On one hand this deception seems ethically harmless – people are ex post better off as a result, and it is a useful way of unpacking economic behavior. However, I have several potential concerns about this form of deception – I’m not sure how serious any of them are, but since this is becoming somewhat common, I thought it worth a discussion:

i) From an internal validity point of view, one worries that word would spread quickly about what was happening – since not everyone was treated at once, people might find out about this.

ii) A second internal validity issue is that there could still be residual dynamic incentives to exert effort. For example, if my friend tells me that he has recruited me as someone who will do well on this task, and that he will be paid according to how well I do, then I might think that my own or my friend’s eligibility for future experiments depends on how well I do – even if I am then told there is no performance pay this time around.

iii) It is not completely clear that it does no harm – the stakes are pretty low here, but imagine if they were quite a lot higher. Then I might have to incur some mental and social cost in choosing a friend over my brother for the task, which could only be justified by the fact that my friend would be much better at it. If I then find out ex post that I would have gotten the same payment anyway, I might now have regrets about not choosing my brother.

Again, I’m not sure how serious any of these are, but I wonder what others think of this type of practice.

2) Dealing with impacts on extensive and intensive margins: In this experiment the treatment can have effects at the extensive margin (do they turn up with another person – 72% did) and at the intensive margin (how did this person perform). Randomization ensures balance across treatment groups when looking at the extensive margin, but if the treatment affects whether people bring someone along, this introduces selection concerns into analysis of the impact of the treatments among the sample that does show up. This isn’t only an issue here – another good example is in studies of business programs, which might affect whether or not businesses get started or survive, as well as performance conditional on being in business. The authors use two approaches for dealing with this: (i) a Heckman selection approach, using rainfall on the days when people were meant to attend to predict who shows up; and (ii) an ITT approach in which they look at outcomes such as the number of correct quilts, with zeros filled in for those who don’t turn up (the sketch below illustrates the selection issue). Both seem reasonable, although I think rainfall is over-used as an instrument – it is not clear here, for example, that I should expect rainfall not to affect my performance if I show up (I might perform worse on rainy days because I’m cold, because I’m worried about my crops, or because I’m thinking about how to get home, etc.).
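
To see why conditioning on attendance can mislead, here is a small simulated example (entirely made-up numbers, not the authors’ data or code) in which the treatment raises show-up rates, disproportionately drawing in lower-ability referrals, without changing performance itself. Comparing only those who show up then picks up selection, while the ITT with zeros compares the full randomized groups.

```python
# Illustrative simulation (made-up data, not from the paper) of selection at the
# extensive margin and the ITT-with-zeros approach to handling it.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treat = rng.integers(0, 2, n)            # randomized treatment assignment
ability = rng.normal(0, 1, n)            # unobserved ability of the referred person

# Treatment raises show-up rates, especially for lower-ability referrals;
# performance itself depends only on ability, not on treatment.
show_up = rng.random(n) < 0.60 + 0.15 * treat - 0.05 * ability
quilts_correct = np.where(show_up, np.clip(2 + ability, 0, None), 0.0)

# Comparison conditional on showing up mixes the (zero) true performance effect
# with selection on who shows up.
cond_diff = (quilts_correct[show_up & (treat == 1)].mean()
             - quilts_correct[show_up & (treat == 0)].mean())

# ITT with zeros for no-shows compares the full randomized groups.
itt_diff = quilts_correct[treat == 1].mean() - quilts_correct[treat == 0].mean()

print(f"difference conditional on attendance: {cond_diff:.3f}")  # negative: selection
print(f"ITT difference (zeros for no-shows):  {itt_diff:.3f}")   # positive: more show up
```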

As with all lab experiments, there is the question of how well performance in a lab setting matches what it would be in a more natural environment. Nevertheless, I think this is a very interesting first look at how networks work. There are all sorts of things employers would potentially like to screen workers for – trustworthiness, sales ability, loyalty, tenacity, etc. – so it would be interesting to see in follow-up work how well networks do at screening on these alternative characteristics, and whether one can use approaches that substitute for the network in doing this – something I’m thinking about in the context of ongoing work.


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
