Recent comments

  • Reply to: Is My NGO Having a Positive Impact?   16 hours 46 min ago

    Good article, and useful further links in the comments. Thanks to all who have contributed. But do you really think this is impact? By whose definition? The results you include in your examples are outcomes, not impacts. They are medium- and long-term outcomes, to be sure, but not impacts.

    Outcomes are behaviour changes by other people, e.g. going to school instead of not going to school. The impact is the societal-level benefit of a generation of better-educated children. Or the outcome (behaviour change) is an increase in consumption activity, which then leads to society-level impacts of increased health, wellbeing, etc. (or does not lead to positive impact if the increased consumption is all spent on alcohol, drugs, or prostitution).

    Impacts are very difficult to measure in less than a few years, and there is a widespread illusion that measuring outcomes can be called measuring impacts. There are a lot of good reasons why this is not the case. Measuring outcomes is really important and far too rarely done, and even less commonly based on actual measurements and observations. Calling it what it isn't doesn't help, and is a bit disappointing in an otherwise good and useful guide. Let's encourage better measurement of medium- and long-term outcomes.

  • Reply to: Is My NGO Having a Positive Impact?   3 days 6 hours ago

    Correct link for Impact Matters: http://www.impactm.org/

  • Reply to: Responses to the policymaker complaint that “randomized experiments take too much time”   4 days 3 hours ago

    The justification for the use of experimental-design impact studies, which are particularly expensive and take up increasing amounts of many governments' aid budgets, obviously itself requires an impact study. There is a great need to find out whether the investment is worthwhile in terms of improvement in economic policy, especially from the point of view of: a) replicability of results across projects, economies, and geographical areas; b) the inclusion of the complete picture of both costs and benefits of aid projects, including the transaction costs to recipient countries of accessing project aid; and c), to the extent possible, a project-by-project assessment of its contribution to the more general, indirect effects on incentives and motivation within recipient communities of accommodating a foreign-aided project rather than relying on internal learning by doing. Such a study needs to be very hard-headed in order to come up with results that can provide clear guidance on the conditions in which the benefits outweigh the costs.

  • Reply to: Is My NGO Having a Positive Impact?   4 days 7 hours ago

    “If the [NGO] sector wants to properly serve local populations, it needs to improve how it collects evidence.” Donors are also increasingly demanding evidence of impact from NGOs: no longer just the large funders, but small individual donors as well.

    The 'impact study' here seems to be narrowly defined as the WBG flavor of counterfactuals, which works fine if you have a simple intervention, though more complex interventions perhaps demand a broader and more pragmatic view of what an impact study is.

    Possible methods for examining the factual (the extent to which actual results match what was expected) include:

    Comparative case studies: Did the intervention produce results only in cases when the other necessary elements were in place?

    Dose-response: Were there better outcomes for participants who received more of the intervention?
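
    As a rough illustration of the idea (all data hypothetical; Python/numpy is just one convenient choice), one might check whether outcomes rise with the amount of intervention received:

        # Dose-response sketch on made-up data: does the outcome improve
        # with the number of sessions attended?
        import numpy as np

        rng = np.random.default_rng(0)
        dose = rng.integers(1, 11, size=200)              # hypothetical sessions attended
        outcome = 2.0 * dose + rng.normal(0.0, 5.0, 200)  # hypothetical outcome scores

        # A clear positive association is consistent with (not proof of) an effect.
        r = np.corrcoef(dose, outcome)[0, 1]
        print(f"dose-outcome correlation: {r:.2f}")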

    Beneficiary/expert attribution: Did participants or key informants believe the intervention had made a difference, and could they provide a plausible explanation of why this was the case?

    Predictions: Did those participants or sites predicted to achieve the best impacts (because of the quality of implementation and/or a favorable context) do so? How can anomalies be explained?

    Temporality: Did the impacts occur at a time consistent with the theory of change – not before the intervention was implemented?

    Possible methods for examining the counterfactual (what would have happened in the absence of the intervention) include:

    Difference-in-differences: The before-and-after change for the group receiving the intervention (where participants have not been randomly assigned) is compared with the before-and-after change for the group that did not receive it.
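
    A minimal sketch of the arithmetic (the group means are hypothetical; the key assumption is parallel trends, i.e. both groups would have changed alike without the intervention):

        # Difference-in-differences on hypothetical group means.
        treated_before, treated_after = 40.0, 55.0
        control_before, control_after = 42.0, 47.0

        # Treated change minus control change: (55-40) - (47-42) = 10.
        did = (treated_after - treated_before) - (control_after - control_before)
        print(f"estimated effect: {did:.1f}")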

    Logically constructed counterfactual: In some cases it is credible to use the baseline as an estimate of the counterfactual. For example, where a water pump has been installed, it might be reasonable to measure the impact by comparing time spent getting water from a distant pump before and after the intervention, as there is no credible reason that the time taken would have decreased without the intervention. Process tracing can support this analysis at each step of the theory of change.
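
    For the water-pump example, the calculation is simply before minus after (numbers hypothetical):

        # Baseline-as-counterfactual sketch: daily time spent collecting water.
        hours_before = 3.0   # hypothetical pre-pump average
        hours_after = 0.5    # hypothetical post-pump average
        print(f"time saved: {hours_before - hours_after:.1f} hours/day")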

    Matched comparisons: Participants (individuals, organizations or communities) are each matched with a nonparticipant on variables that are thought to be relevant. It can be difficult to adequately match on all relevant criteria. (Techniques for improving constructed matched comparison group impact/outcome evaluation designs)
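
    One rough way to build such matches (hypothetical data; nearest-neighbour matching on standardized covariates is only one of several techniques):

        import numpy as np

        rng = np.random.default_rng(1)
        # Columns: age, monthly income -- the variables thought to be relevant.
        participants = rng.normal([35.0, 300.0], [8.0, 60.0], size=(50, 2))
        nonparticipants = rng.normal([38.0, 320.0], [8.0, 60.0], size=(200, 2))
        outcome_p = rng.normal(10.0, 2.0, 50)   # hypothetical outcomes
        outcome_n = rng.normal(8.0, 2.0, 200)

        # Pair each participant with the nearest nonparticipant (standardized distance).
        scale = nonparticipants.std(axis=0)
        dists = np.linalg.norm(
            (participants[:, None, :] - nonparticipants[None, :, :]) / scale, axis=2)
        nearest = dists.argmin(axis=1)

        effect = (outcome_p - outcome_n[nearest]).mean()
        print(f"matched-comparison effect estimate: {effect:.2f}")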

    Multiple baselines or rolling baselines: The implementation of an intervention is staggered across time and intervention populations. Analysis looks for a repeated pattern in each community of a change in the measured outcome after the intervention is implemented, along with an absence of substantial fluctuations in the data at other time points.
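
    A toy check of that pattern (entirely hypothetical series, with start dates staggered across three sites):

        import numpy as np

        starts = {"site_A": 4, "site_B": 8, "site_C": 12}   # hypothetical start months
        rng = np.random.default_rng(5)
        # Toy outcome series: a level shift only after each site's own start month.
        series = {s: np.r_[np.full(t, 50.0), np.full(16 - t, 60.0)] + rng.normal(0, 1, 16)
                  for s, t in starts.items()}

        # The same post-start jump recurring at each staggered start supports attribution.
        for site, y in series.items():
            t0 = starts[site]
            print(f"{site}: post-start mean minus pre-start mean = {y[t0:].mean() - y[:t0].mean():.1f}")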

    Propensity scores: This technique statistically creates comparable groups based on an analysis of the factors that influenced people’s propensity to participate in the program. It is particularly useful when participation is voluntary (for example, watching a television show with health promotion messages).
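
    A bare-bones version (hypothetical data; logistic regression via scikit-learn is one common way to model the participation probability):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 3))      # hypothetical covariates driving participation
        participated = (X[:, 0] + rng.normal(0, 1, 500) > 0).astype(int)
        outcome = 2.0 * participated + X[:, 0] + rng.normal(0, 1, 500)

        # Propensity score: modelled probability of participating given covariates.
        score = LogisticRegression().fit(X, participated).predict_proba(X)[:, 1]

        # Match each participant to the nonparticipant with the closest score.
        p_idx = np.flatnonzero(participated == 1)
        n_idx = np.flatnonzero(participated == 0)
        nearest = n_idx[np.abs(score[p_idx][:, None] - score[n_idx][None, :]).argmin(axis=1)]

        effect = (outcome[p_idx] - outcome[nearest]).mean()
        print(f"propensity-matched effect estimate: {effect:.2f}")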

    Randomized controlled trial (RCT): Potential participants (or communities, or households) are randomly assigned to receive the intervention or be in a control group (either no intervention or the usual intervention) and the average results of the different groups are compared.
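
    With randomization, the analysis itself can be as simple as a comparison of means (hypothetical data; scipy's two-sample t-test shown for the comparison):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 400
        treat = rng.permutation(np.repeat([0, 1], n // 2))  # random assignment
        outcome = 5.0 * treat + rng.normal(50, 10, n)       # hypothetical outcomes

        diff = outcome[treat == 1].mean() - outcome[treat == 0].mean()
        t_stat, p_val = stats.ttest_ind(outcome[treat == 1], outcome[treat == 0])
        print(f"difference in means: {diff:.1f} (p = {p_val:.3f})")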

    Regression discontinuity: Where an intervention is only available to participants above or below a particular cutoff point (for example, income), this approach compares outcomes of individuals just below the cutoff point with those just above the cutoff point.
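
    A crude sharp-discontinuity sketch (hypothetical data; a simple comparison within a narrow band around the cutoff, rather than the usual local regression):

        import numpy as np

        rng = np.random.default_rng(4)
        income = rng.uniform(0, 100, 2000)   # hypothetical eligibility variable
        cutoff, band = 50.0, 5.0
        eligible = income < cutoff           # program available only below the cutoff
        outcome = 3.0 * eligible + 0.1 * income + rng.normal(0, 1, 2000)

        # Compare those just below the cutoff (treated) with those just above.
        below = outcome[(income >= cutoff - band) & (income < cutoff)]
        above = outcome[(income >= cutoff) & (income < cutoff + band)]
        print(f"discontinuity estimate: {below.mean() - above.mean():.2f}")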

    Statistically created counterfactual: A statistical model, such as a regression analysis, is used to develop an estimate of what would have happened in the absence of an intervention. This can be used when the intervention is already at scale – for example, an impact evaluation of the privatization of national water supply services.
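
    A minimal sketch (hypothetical series; a linear trend fitted to pre-intervention years stands in for the statistical model):

        import numpy as np

        years = np.arange(2000, 2016)
        # Hypothetical observed series with an extra lift from 2010 onward.
        observed = 100 + 2.0 * (years - 2000) + np.where(years >= 2010, 8.0, 0.0)

        # Fit the pre-2010 trend and project it forward as the counterfactual.
        pre = years < 2010
        slope, intercept = np.polyfit(years[pre], observed[pre], 1)
        counterfactual = intercept + slope * years

        impact = (observed - counterfactual)[years >= 2010].mean()
        print(f"estimated average post-2010 impact: {impact:.1f}")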

    Possible methods for identifying and ruling out alternative possible explanations include:

    General elimination methodology: Possible alternative explanations are identified and then investigated to see whether they can be ruled out.

    Searching for disconfirming evidence/following up exceptions: Evidence that would contradict the expected causal story is actively sought, and exceptions to the general pattern are followed up and explained.

    Multiple lines and levels of evidence (MLLE): A wide range of evidence from different sources is reviewed by a panel of credible experts spanning a range of relevant disciplines. The panel identifies consistency with the theory of change while also identifying and explaining exceptions. MLLE reviews the evidence for a causal relationship between an intervention and observed impacts in terms of its strength, consistency, specificity, temporality, coherence with other accepted evidence, plausibility, and analogy with similar interventions.

    Contribution analysis: A systematic approach that involves developing a theory of change, mapping existing data against it, identifying challenges to the theory (including gaps in evidence and contested causal links), and iteratively collecting additional evidence to address these.

    Collaborative outcomes reporting: This newer approach combines contribution analysis and MLLE. It maps existing data against the theory of change and fills important gaps in the evidence through targeted additional data collection. A combination of expert review and community consultation is then used to check the credibility of the evidence about what impacts have occurred and the extent to which they can realistically be attributed to the intervention.

  • Reply to: Responses to the policymaker complaint that “randomized experiments take too much time”   5 days 5 hours ago

    The South Island is sometimes referred to as the "mainland" since it has a larger land mass. The cheese comes from there.