We are working on an evaluation of a large rural roads rehabilitation program in Rwanda that relies on high-frequency market information. We knew from the get-go that collecting this data would be a challenge: the markets are scattered across the country, and by design most are in remote rural areas with bad connectivity (hence the road rehab). The cost of sending enumerators to all markets in our study on a monthly basis seemed prohibitive.
Crowdsourcing seemed like an ideal solution. We met a technology firm at a conference in Berkeley, and we liked their pitch: use high-frequency, contributor-based, mobile data capture technology to flexibly measure changes in market access and structure. A simple app, a network of contributors spanning the country, and all the price data we would need on our sample of markets.
One year after contract signing and a lot of troubleshooting, fewer than half of the markets were being visited at the specified intervals (fortnightly), and even in those markets we had data on fewer than half of the products on our list. (Note: we knew all along this wasn't going well; we just kept at it.)
So what went wrong, and what did we learn?
The potential for crowdsourcing is limited in rural Africa by technology constraints and low levels of social media connectivity
- Recruiting a large network of contributors is essential to crowdsourcing success. At face value, it seemed simple: there are lots of under-employed people in rural Rwanda, so getting paid to visit a local market and report on the availability and prices of goods through a simple app must be appealing, no? But the firm struggled to build and maintain a network of contributors, and never reached sufficient scale in terms of number of contributors or geographic coverage. Part of this was the approach: they had recruited successfully through Facebook and similar social media in South Asia, but those networks were far too thin in Rwanda, especially in the rural areas we needed to cover. The firm ended up recruiting through a local NGO, but success was still limited.
- Lesson: Find your 'crowd'. Recruiting through a partner with a strong local presence in the areas of interest is the best bet here. Track the growth of the contributor network closely.
The reliability of crowdsourced data is often questioned because of the lack of an underlying sampling frame. We imposed a lot of structure to avoid that: a fixed set of markets, a fixed list of products, and a set frequency (observations at least once a fortnight). But we were never able to get close to that level of regularity (a sketch of the kind of coverage check this calls for follows this list).
- Mainly, this was a technical problem: the app was not designed to assign contributors to specific markets or specific products to ensure coverage. This hadn't come up as an issue for the firm before, since more ad hoc data had always been sufficient, but it was a deal-breaker on our side.
- Lesson: Crowdsourcing may not be the right tool when rigorous sampling and data structure are required.
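To make this concrete, here is a minimal sketch of the kind of coverage check we would have wanted to run on incoming submissions. It assumes a long-format table of submissions with hypothetical column names (market, product, date), and computes, for each market-product cell, the share of fortnights with at least one observation.

```python
# Minimal coverage check on crowdsourced submissions (column names are assumptions).
import pandas as pd

def coverage_report(submissions: pd.DataFrame, markets: list, products: list,
                    start: str, end: str) -> pd.DataFrame:
    """Share of fortnights with at least one observation, per market-product cell."""
    df = submissions.copy()
    df["date"] = pd.to_datetime(df["date"])
    # Bucket each observation into a fortnight counted from the study start date.
    df["fortnight"] = (df["date"] - pd.Timestamp(start)).dt.days // 14
    n_fortnights = (pd.Timestamp(end) - pd.Timestamp(start)).days // 14 + 1

    observed = (df.groupby(["market", "product"])["fortnight"]
                  .nunique()
                  .rename("fortnights_covered")
                  .reset_index())

    # Build the full target grid so uncovered cells show up with zero coverage.
    grid = pd.MultiIndex.from_product(
        [markets, products], names=["market", "product"]
    ).to_frame(index=False)
    report = grid.merge(observed, on=["market", "product"], how="left")
    report["fortnights_covered"] = report["fortnights_covered"].fillna(0).astype(int)
    report["coverage_rate"] = report["fortnights_covered"] / n_fortnights
    return report
```

Run as submissions come in, a report like this makes it immediately visible how far the data are from the fixed markets-by-products grid you specified.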
Don't fall for a sexy pitch!
- We took the promises of our Silicon Valley partner at face value, but the available version of their technology delivered less than we hoped. In practice, it looked much like traditional enumeration: at most one contributor per market, collecting all the information. This took away the advantage of multiple observations and triangulation we had assumed. Moreover, since contributors received very little training, the advantage of this model over traditional enumeration became much less clear.
- Each contributor took images, but because they were geo-coded, they were used only as a location check (sketched after this list). We had assumed more sophisticated image processing that would allow assessment of quality (e.g. of tomatoes) or the size of bundles (consistency in units was a huge problem).
- Lesson: Have a healthy degree of skepticism; differentiate between the tech vision and what is currently possible. Ask to see and use the actual app/interface early in the process.
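For what it's worth, the location check itself is simple to implement. Here is a sketch, assuming each submission carries the photo's GPS tag and that we hold known coordinates for each market; the function names and the 500-metre tolerance are illustrative, not the firm's actual implementation.

```python
# Sketch of a geo-location check on submitted photos: flag submissions whose GPS
# tag falls farther than a tolerance from the assigned market. The 500 m default
# is an illustrative assumption.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def location_ok(photo_coords, market_coords, tolerance_m=500):
    """True if the photo was taken within tolerance_m of the assigned market."""
    return haversine_m(*photo_coords, *market_coords) <= tolerance_m
```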
Simple tasks only, please
- We started out with an instrument that looked very much like a standard questionnaire: skip codes, relevance conditions, constraints. We wanted to collect a few data points from traders, beyond just the availability and price of goods. Through piloting, however, we realized that to work with contributors who had only the most basic training, we had to simplify dramatically. In addition, for the app to work on limited data connections and old mobile operating systems, it had to be very simple: a purely linear progression of questions with no built-in checks.
- Lesson: Quantify trade-offs carefully. What are the cost savings compared to traditional enumeration? Will they offset losses in precision or quality? (A back-of-envelope way to frame this is sketched below.)
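One way to start quantifying that trade-off is to compare the cost per usable observation under each mode. The sketch below uses purely illustrative placeholder numbers (payment per visit, enumerator cost per visit, completeness rates), not figures from our project.

```python
# Back-of-envelope comparison of cost per usable market-fortnight observation.
# All numbers are illustrative placeholders, not figures from the project.

def cost_per_usable_obs(cost_per_visit, completeness_rate):
    """Cost divided by the share of visits that yield usable, complete data."""
    return cost_per_visit / completeness_rate

crowd = cost_per_usable_obs(cost_per_visit=5.0, completeness_rate=0.4)        # e.g. $5 payout, 40% usable
enumerator = cost_per_usable_obs(cost_per_visit=30.0, completeness_rate=0.95)  # e.g. $30 incl. travel, 95% usable

print(f"crowdsourcing: ${crowd:.2f} per usable observation")
print(f"traditional:   ${enumerator:.2f} per usable observation")
```

With placeholder numbers like these, low completeness quickly erodes the apparent cost advantage of the crowd, which is exactly the trade-off worth quantifying before signing a contract.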