
Effective monitoring and evaluation practices for competitions and crowdsourcing: Lessons from India

Natalia Agapitova

What’s the key ingredient for successful innovations? I often hear people answer creativity, collaboration, an open mindset, or leadership. For me, it is the ability to learn and adapt.

But learning is meaningful only if it’s based on reliable data, and adaptation leads to the expected results only if the data is timely and feeds into the decision-making process.

For example, take GNRC Medical (formerly known as Guwahati Neurological Research Centre), a hospital in North Guwahati, India, that aims to provide quality healthcare at an affordable cost to underprivileged populations. GNRC runs an inclusive multi-specialty facility, provides ambulance services, and offers customized healthcare packages to the poor, promoting preventive healthcare and early intervention. Despite its unique service offering, GNRC faced major challenges, including low awareness among local communities of medical conditions and available treatments.

GNRC Medical Reception. Photo © Natalia Agapitova
Through the 2014 Development Marketplace (DM) India competition, we financed GNRC’s outreach to rural populations. One project component was the Swastya Yatra awareness program, developed to educate participants about nutrition, hygiene, sanitation, and preventive health.

Surveys of participants indicated that the desired behavior changes were unlikely to be achieved without a longer, more intensive program. This finding surfaced through a rigorous Monitoring and Evaluation (M&E) system, and the component was restructured so that project resources could be redirected to activities with more promising outcomes.

This is just one of many examples from the DM grantees. Since its start in 1998, the DM has awarded over $60 million in grants to more than 1,200 projects in over 33 countries to accelerate the replication and scaling of social innovations addressing the needs of the poor.

The numerous design changes marking the program’s evolution have been accompanied by growing attention to M&E at both the grantee (innovation entrepreneur) and DM program levels. We moved away from a “fund and forget” model towards a “learn and adapt” model.

The lessons below could be useful for other programs that provide competitive funding for social innovation to grassroots SMEs and entrepreneurs (grantees).

1) A common reporting framework and standard M&E guidance are needed to systematically demonstrate results at the program level
  • Core indicators should be specified for all grantees to include in their project monitoring plans (see table). 
  • A standard monitoring template, with instructions and examples, can help grantees report consistent information that can be aggregated at the program level.   

2) M&E practices should be embedded at the beginning of an innovation cycle rather than added later.
  • Grantees need to establish accurate baselines against which they can measure changes. 
  • Grantees need support in building their M&E capacity; this can help ensure the effective planning, achievement, documentation, and reporting of results.
For example, the India DM 2014 grantees were required to have both an implementation plan and a monitoring plan based on a results chain. We built the capacity of entrepreneurs through a three-day M&E workshop, follow-up mentoring, and feedback on regular M&E reporting. 

3) Customized sector-specific indicators help grantees track their progress along a results chain.
  • The evaluation of the progress within a specific sector provides precise information, which can be used to adjust implementation plans as needed. 
  • The detailed documentation of outcomes within a sector allows grantees to highlight their positive impact on the lives of people who live in extreme poverty.
For example, Vigyan Ashram (DM India 2013) brought lighting to villages without electricity and created new livelihoods for unemployed youth. By bringing solar lamps to remote areas in two states in India, the project provided lighting to 9,000 beneficiaries in 1,500 households, saved about 7,917 litres of kerosene, reduced carbon dioxide emissions by an estimated 19.7 tons, and gave the village youths who sell the lamps a sustainable livelihood option.

4) Monitoring the satisfaction of beneficiaries allows for corrective actions that strengthen program results.
  • A continuous flow of end-user feedback during implementation helps adjust project design.
  • User monitoring can lead to an increased user base through improved targeting of innovations.
For example, Babajob (DM India 2011) created a job portal for informal workers that linked them to prospective employers through web, mobile phone, and other technologies. The social enterprise actively sought feedback from both employers and jobseekers to adapt its services, and its model has become an effective tool for addressing issues like unemployment, migration, and exploitation by employers.

5) A systematic assessment of organizational capacity and needs can help grantees understand and overcome challenges to scaling or replicating their social innovations. 
  • Reflecting on and rating the strength of an organization’s key characteristics informs both grantees and investors about the kind of support needed during the planning stage.
  • The use of a standard assessment tool at the beginning of the program allows each grantee to set a baseline and then assess changes in organizational capacity over time. 
 
 
