How to avoid “We saw the evidence and made a decision…and that decision was: since the evidence didn’t confirm our priors, to try to downplay the evidence”
Before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, obvious one: many people are involved in evaluations. We use the word ‘involved’ rather broadly. Our central focus for this post is people who may block the honest presentation of evaluation results.
In any given evaluation of a program or policy, several groups of organizations and people have a stake. Most obviously, there are researchers and implementers. There are also participants. And, for much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both may work through sub-contractors and consultants, bringing yet others on board.
Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of the results. The focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken’s terms, researchers are not nefarious and power-hungry (statistically speaking) (2015). To achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis or commitment-to-analysis plans) and study registration.
To make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those that can block the honest sharing of results. This is in line with, for example, EGAP’s third research principle on rights to review and publish results. We return to some ideas of how to encourage this at the end of the blog.
Now, back to the opening statement, a subversion of the goal of evidence-informed decision-making. There are many interesting ways that stakeholders may try to dodge an honest sharing of results once they know what the results are. One is to claim that the audience — whether officials in office or the general public — will not be able to make sense of the results, so anything confusing, or, really, unexpected, needs to be pruned from the public report. Instead, all the not-as-hoped results can be relegated to internal rather than public learning.
Decision-makers may indeed need brief synopses (written or otherwise) rather than being presented with a long report. Different combinations and permutations of the evidence may be presented to different stakeholders using different modes of communication, in line with what is salient to them.
However, this is not a suitable excuse for failing to make the full set of findings public. Moreover, an assessment of what stakeholders can or cannot interpret that fails to account for how they say they want to receive evidence misses a key point of participation and partnership. It may also reveal our (mis-)estimation of policymakers’ intelligence and of the complex policy challenges decision-makers encounter as part of their daily work.
We’ve talked elsewhere about committing to a decision process informed by evidence. In this post, we are after something even simpler: for key stakeholders to commit ex ante to making the results of a commissioned study public, irrespective of their respective priors regarding the intervention being studied. Of course, the piece of research should be deemed technically sound. Assuming that it is, the goal is to encourage the honest sharing of results regardless of the direction of the results.
In theory, everyone party to a good ex ante evaluation (and ex post, though there may be slightly less stakeholder engagement; or the degree of engagement could vary depending on the emerging results from the study) is aware that the results for the effect of an intervention on an outcome of interest can be as hoped, opposite, null, or otherwise mixed and confusing. In practice, everyone has a prior, which may involve not just an educated hypothesis but an emotional commitment to a particular outcome.
So what can help reduce the impulse and potential to cover up unexpected results?

1. Better explanation of research processes and norms. In some cases, key actors within commissioning agencies may be initially enthusiastic about the idea of evaluation without fully understanding what it — and a measurement and results focus more generally — really entails. Here, one often makes the mistake of focusing on agency capacity rather than the capacity of individuals within these agencies. By capacity, we refer not only to technical know-how of evaluation methods but also to familiarity with research processes and norms. Disparities in capacity can lead to serious contradictions within the same agency in the way research findings are treated. Too often, though, efforts at “capacity-building” and other modes of educating individuals within agencies about evaluation focus on evaluation designs and analysis. This comes at the expense of explaining the research process, the variety of possible evaluation outcomes, and norms around transparent reporting of results.
Patrick Dunleavy recently outlined a process of storyboarding research from the get-go to improve working in teams and to help visualize the end-product. Such a process may be useful for a broader array of stakeholders than the research team, so that the whole process (the whole magic of “analysis and writing up”) can be made transparent. This represents a potentially softer, friendlier, and more feasible alternative to drafting the entire report in advance, as Humphreys et al. attempted in their paper on fishing. It may also allow more of the process to be visible, rather than just the final reporting structure.
2. Invest time in bringing all stakeholders to understand and agree with the research objectives and processes. Many research studies (especially evaluations) have a committee of advisers to steer the process. These are critical stakeholders in addition to those who commission and carry out the research. Ideally, all of those involved — including this committee of advisers — would reach a common understanding of the research objectives and the methods to be followed. This would also include identifying policy messages from the study and engagement strategies. However, common ground is sometimes elusive, as these wider groups do not always arrive early on at a fruitful working arrangement or basic understanding of the research process. Setting clearly understood objectives and a shared understanding of research processes may be time-consuming but is invaluable when seen in the context of decision-making and transparency over research findings that may not match everyone’s priors.
3. Formal commitment to results reporting across stakeholders. Right now, commitments to analyses and results reporting exist between researchers and the public or, really, other researchers. But researchers are not the only ones determining the content of results reporting — and thus reporting requires additional sets of (public? formal? registered?) commitments. These could, like pre-analysis plans or commitment-to-analysis plans, take the form of committing to a core set of analyses and reporting on those results. They could also take the form of MOUs that are less technical than ex ante analysis plans but still represent a commitment to reporting a certain set of results regardless of the direction of those results. In any case, the goal is to move the commitment from being between researchers (and perhaps mostly intelligible to researchers) to also involving study commissioners, other stakeholders with the power to block the publication of findings, and the public (such as the taxpayers funding the program).
4. Early engagement with decision-makers. If decision-makers are a primary audience for the evaluation, and if communicating to decision-makers is seen as a barrier to a complete, nuanced presentation of evaluation findings, then engaging with decision-makers early on may help. We recognize the time constraints of decision-makers and the importance of clarity in messaging. But clarity of presentation and complexity of results need not be zero-sum.
One way to reduce this tension and to better communicate complex or complicated findings to decision-makers is to engage them in the evaluation from the very beginning, so that the potential for nuanced findings can be gradually introduced. If researchers face a passive policy audience at the end of an evaluation, whose only role has been to turn up and listen to research findings in a workshop, the space for taking in complexity, nuance, and caveats in messaging will be limited. But assuming that evaluation findings must assert only simple findings and straightforward recommendations is hugely problematic, since we are talking about evaluations in social systems. As such, getting early buy-in and opening channels to gradually introduce results are important.
With these steps in place, our based-on-a-true-story colleagues would have stood a better chance of avoiding the scenario we referred to at the beginning of this post. An early commitment to the research process and an agreement on the way forward would have helped prime key stakeholders for the possibility that research findings might be a mixed bag — which necessitates a nuanced dissemination strategy, not the burying of unfavorable results.
Follow PublicSphereWB on Twitter!
This blog originally appeared on the personal blogs of Suvojit Chattodhyay and Heather Lanthorn.
Photograph by Achmad Ibrahim for the Center for International Forestry Research (CIFOR).