Yelp, I’m pregnant! Crowdsourced ratings improve artificial insemination services in Pakistan: Guest Post by Arman Rezaee

This is the tenth in our series of posts by students on the job market this year.

What do crowdsourcing, livestock artificial insemination, and mobile technology have to do with each other? Would you be surprised if I told you that the answer might be a widely scalable system to improve service delivery for the poor?

Let’s start with crowdsourcing. Most of us don’t realize how many aspects of our lives as consumers have been streamlined and made cheaper by crowdsourcing. We get our restaurant reviews on Yelp and our product reviews on Amazon. These sites have surpassed more traditional (and more costly) information providers that rely on expert reviews, such as the New York Times. Similarly, we find rooms for rent on Airbnb and cabs on Uber. These apps have succeeded by crowdsourcing service provision in regulated industries (lowering costs in the process), with rating systems to ensure service provider quality.
Which brings us to artificial insemination (AI). AI is important for the livelihood of people across the developing world, but the market for AI suffers from poor service provision. Livestock agriculture accounts for twelve percent of GDP in Pakistan and is a key growth sector for the rural poor (Pakistan Economic Survey 2013-14). AI is crucial to renewing livestock. Most households keep only female cows, for the dual advantage of producing milk and calves, both of which require that the cows be pregnant. But shirking by inseminators reduces AI success rates, costing poor farmers income.
Our project leverages mobile technology to link crowdsourcing and artificial insemination. We developed and implemented a mobile phone-based clearinghouse to overcome information asymmetries and improve public service delivery to farmers in Punjab, Pakistan. Like Yelp or Amazon, our clearinghouse collects and disseminates ratings—here, on the success of government veterinarians in inseminating livestock, an objective measure of veterinarian effort. It gathers and disseminates locally relevant information from a large base of farmers automatically, in real time, using a call center.
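To make the mechanics concrete, here is a minimal sketch of such a ratings clearinghouse in Python. This is an illustration only: the actual system described in the paper operated through a call center, and the class and method names below are hypothetical.

```python
from collections import defaultdict

class Clearinghouse:
    """Illustrative sketch: collect farmers' AI outcome reports per
    veterinarian and disseminate success-rate rankings. Hypothetical
    design, not the paper's implementation."""

    def __init__(self):
        # vet_id -> list of 1/0 insemination outcomes phoned in by farmers
        self.reports = defaultdict(list)

    def report(self, vet_id, success):
        # A farmer reports whether the insemination led to pregnancy,
        # an objective measure of veterinarian effort.
        self.reports[vet_id].append(1 if success else 0)

    def success_rate(self, vet_id):
        outcomes = self.reports[vet_id]
        return sum(outcomes) / len(outcomes) if outcomes else None

    def ranked(self, vet_ids):
        # Disseminate local vets ordered by observed AI success rate.
        rated = [(v, self.success_rate(v)) for v in vet_ids if self.reports[v]]
        return sorted(rated, key=lambda pair: pair[1], reverse=True)
```

The essential design point is that the ratings are generated automatically from the crowd of farmers themselves, so the marginal cost of an additional rating is close to zero.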
My job market paper (co-authored with Ali Hasanain and Yasir Khan) evaluates this clearinghouse using a randomized controlled trial. Using data from a representative in-person survey, we find that farmers treated with information on local government veterinarians’ AI success rates have a 26 percent higher AI success rate than controls. This average combines a treatment effect of 83 percent for farmers who return for government AI after treatment with a treatment effect of 4 percent for those who instead switch to private providers.
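As a back-of-envelope check, under a simple linear weighting the overall 26 percent effect and the two subgroup effects imply the share of treated farmers who returned for government AI. The weighting assumption is mine, for illustration; it is not the paper's estimation procedure.

```python
# Hypothetical decomposition: overall effect as a weighted average of the
# subgroup effects (returners vs. switchers), solving for the implied share.
overall, returners, switchers = 26.0, 83.0, 4.0

share_returning = (overall - switchers) / (returners - switchers)
recomposed = share_returning * returners + (1 - share_returning) * switchers

print(round(share_returning, 3))  # ≈ 0.278
print(round(recomposed, 1))       # 26.0, by construction
```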
We find more precise results using data generated by the clearinghouse. In this data, we only observe farmers who return for government AI after treatment and not those who switch to private providers. Returning farmers must then also choose to answer the phone and to report AI success to the clearinghouse. Using this data, we find a 27 percent treatment effect on AI success for farmers who return for government AI after treatment. In addition, treatment farmers are 33 percent more likely to return to a government veterinarian for AI rather than to seek a private provider.
These results complement a growing literature documenting similar gains to information provision—on food subsidies (Banerjee et al 2015), on school quality (Andrabi et al 2014), on elected official corruption (Ferraz and Finan 2011), and even on restaurant quality (Jin and Leslie 2003). In addition, they speak to a literature showing increased market efficiency with the advent of cellular networks to ease asymmetric information (Jensen 2007, Aker 2010).
What sets our intervention apart from previous information provision interventions is that it relies on crowdsourcing technology that is cost-effective, self-sustaining, and scalable. Conservative estimates suggest a 27 percent higher AI success rate translates into nearly half a month's additional median income per AI provided, a 300 percent return on the cost of the intervention. Two previous papers evaluate information clearinghouses meant to help poor farmers; in both cases, the authors find no treatment effects (Fafchamps and Minten 2012, Mitra et al 2014).
Our approach stands in contrast to government monitoring schemes that provide information to agents' superiors, relying on the “long route” of accountability in which citizens must influence policymakers to improve service provision. We attack the problem more directly, taking the “short route” of accountability by increasing citizens' direct influence on government agents (World Development Report 2004).
Farmer switching or veterinarian effort?
Multiple mechanisms could explain this treatment effect on AI success rates: treated farmers could select better veterinarians than do controls, or veterinarians could shirk less in transactions with farmers who know their ratings. Several of our results suggest the latter. First and foremost, treatment farmers are no more likely than control farmers to switch veterinarians after treatment. Thus, the effect cannot be driven by farmers simply switching to the "best vet" in terms of AI success (or price). Second, treated farmers pay lower prices after treatment (government veterinarians are allowed to charge prices in Punjab). While farmers may be able to improve AI success rates through their behavior alone, a change in prices requires a change in veterinarian behavior.
Consistent with this shirking interpretation of our treatment effect, multiple critical aspects of veterinarian effort are in fact unobserved by farmers. For example, veterinarians must keep semen straws properly frozen in liquid nitrogen canisters from the time when they are delivered to AI centers until right before insemination. In addition, veterinarians must then precisely insert these straws during insemination.
Our solution is novel not only in that it leverages crowdsourcing to help the poorest, but also in that it does so in a tough setting. In rural Punjab, the market for artificial insemination is thin, literacy rates are low, and cellular networks are limited—yet we were able to employ an information clearinghouse with success.
These results are hopeful. The fact that our clearinghouse improved outcomes purely through providing information confirms the existence of asymmetric information in this setting. And the fact that veterinarians respond with increased effort confirms that this asymmetric information is about unobserved effort (moral hazard). While these confirmations are neither novel nor heartening in and of themselves, they allow us to fit the livestock sector in Punjab into a context that is much more general. Moral hazard has been documented in numerous sectors, public and private, across the developing world. We might expect our clearinghouse to help citizens in any sectors where shirking undermines service delivery.
On scalability, we have already begun conversations about expanding the clearinghouse to all of Punjab. This would require no additional fixed costs and less than proportional marginal costs. Across contexts, we are already experimenting with using an information clearinghouse to relieve asymmetric information between citizens and pollution regulators in Punjab. We hope to learn how crowdsourcing can work in a regulatory rather than a market environment, and for public rather than private goods.
We hope this paper and other new studies will improve our understanding of how technology can be leveraged to improve the feasibility and effect of already tried-and-true interventions, such as monitoring to reduce asymmetric information. As cellular networks improve and as technology to collect, aggregate, and disseminate information advances, our results suggest we may see improved outcomes for citizens across the rural developing world.
Arman Rezaee is a PhD student at the University of California, San Diego. His webpage is here.