
What does it mean to do policy-relevant research and evaluation?


What does it mean to do policy-relevant research and evaluation? How does it differ from policy-adjacent research and evaluation? Heather Lanthorn explores these questions and offers some food for thought on intention and decision making.

This post is really a conversation with myself, which I started here, but I would be happy if everyone conversed on it a bit more: what does it mean to do research that is ‘policy relevant’? From my vantage point in impact evaluation and applied political-economy and stakeholder analyses, ‘policy relevant’ is a glossy label that researchers or organizations can apply to their own work at their own discretion. This is confusing, slightly unsettling, and probably takes some of the gloss off the label.

The main thrust of the discussion is this: we (researchers, donors, folks who have generally bought-into the goal of evidence- and evaluation-informed decision-making) should be clear (and more humble) about what is meant by ‘policy relevant’ research and evaluation. I don’t have an answer to this, but I try to lay out some of the key facets, below.
 
Overall, we need more thought and clarity – as well as humility – around what it means to be doing policy-relevant work. As a start, we may try to distinguish work that is ‘policy adjacent’ (done on a policy) from work that is ‘decision-relevant’ or ‘policymaker-relevant’ (done with the explicit, ex ante purpose of informing a policy or practice decision, and therefore with an intent to be actionable).
 
I believe the distinction I am trying to draw echoes what Tom Pepinsky wrestled with when he blogged that it is the “murky and quirky” questions and research (a delightful turn of phrase that Tom borrowed from Don Emmerson) “that actually influence how they [policymakers / stakeholders] make decisions” in each of their own idiosyncratic settings. These questions may be narrow, operational, and linked to a middle-range or program theory (of change), as compared with grander, paradigmatic questions.
 
Throughout, my claim is not that one type of work is more important or that one type will always inform better decision-making. I am, however, asking that, as “policy-relevant” becomes an increasingly popular buzzword, we pause and think about what it means.

A first take: what might it mean to do policymaker-relevant research?

In my first take on this, I had two considerations in mind for what made research policy(maker)-relevant. The first was conducting the research on a Real! Live! Policy, on the assumption that evaluating an actual policy would automatically be relevant to that policy.
 
The second was to distinguish policy-adjacent work from policymaker-relevant work by considering the intentionality of the researchers and decision-makers. That intentionality matters is a point that has been made by Rachel Strohm and Tom Pepinsky, who wrote to me (helpfully telling me to sharpen my own thinking) that a key consideration is distinguishing “research that is done explicitly to help make decisions versus research that may be used to help make decisions.”
 
What is policymaker-relevant in a particular case will depend very much on the position of the decision-makers with whom the researcher or evaluator is engaging. High-level and street-level actors will have different questions about the policy and programmatic decisions they may need to make. Therefore, it is not the precise question that bends research towards being policymaker-relevant. Rather, it has far more to do with intention: pinning down whose decision you are trying to inform and how they may be able to use the evidence generated. This is something I have discussed earlier, with Suvojit. We need more examples of, and learning about, how to do this well.
 
A second take: is a functioning policy a requirement to be policymaker-relevant?

A second pass at this idea made me question whether it is necessary to be evaluating an ‘actual’ policy or program, with the explicit goal of contributing immediate inputs to policy decisions, in order to be policymaker-relevant. The main criterion may not be working on an actual policy; instead, it might be working with a decision-maker to investigate potential policy mechanisms that could be implemented in a given context as part of a learning agenda. As part of this, having an identified set of stakeholders intended to be the immediate users of the evidence seems important to being decision-relevant.
 
My shift in thinking comes from reading some smart folks who write about short-cuts for seeing whether a policy idea is even worth pursuing. Closing the door on particular policy ideas is as important a decision as figuring out the right path forward. One way to close the door on certain ideas is to test them in ways that would never be implementable – thus, they are not Real! Live! policies, but they can provide real-time, policymaker-relevant information nevertheless.
 
One variation of this approach is to try a policy idea in an artificially ‘easy’ setting or at an artificially ‘high’ dose. These are tests of efficacy more than effectiveness. For example, one might test a program or policy idea in a ‘Sinatra case’ setting – that is, ‘if the idea can’t make it there, it can’t make it anywhere’ (Gerring, attributed to Yates). Door closed, decision option removed.
 
One might also want to deliver an intervention in what H.L. Mencken called a ‘horse-doctor’s dose’ (as noted here). The idea is that if an incredibly strong version of the program or policy doesn’t do it, then it certainly won’t do it at the more likely level of administration. A similar view is expressed in Running Randomized Evaluations, which notes that ‘proof-of-concept’ evaluations can show that even “a gold-plated, best-case-scenario version of the program is not effective.” Door closed, decision option removed.
 
Another variation of this approach is to mimic what the policy would ‘look like’ if put into practice, without actually fussing with all the machinery needed to implement the policy. Again, this is more akin to a test of efficacy than effectiveness (admitting that the line between the two is not always sharp). Ludwig, Kling, and Mullainathan note that, “in a world of limited resources, mechanism experiments concentrate resources on estimating the parameters that are most decision relevant,” serving as a ‘first screen’ of whether a policy is even worth trying. Said another way, this is about testing ideas and theories rather than programs. Again, if this mimicking of a policy mechanism does not yield significant results, the real thing may well not be worth implementing. Door closed, decision option removed.
 
With these considerations in mind, I have had to revise my thinking about what may be classified as policymaker-relevant evaluation and research.
 
Moving the conversation forward

In the world of evaluation, we know that there is often a gulf between the research that is produced and the information policymakers need in order to make decisions. We also know that just because an evaluation is conducted on a given policy, it does not necessarily help stakeholders answer the questions they have about that policy. Because we are increasingly aware of this gap, it seems increasingly desirable to have one’s evaluation work labelled as ‘policy-relevant.’ As such, researchers often seem to self-apply this label willy-nilly. We should be stricter with ourselves about how proximate or distal to a decision our research is – that is, about just how murky and quirky the research we do really is.
 
Unfortunately, my first (and second) foray into trying to sort policy-adjacent research from policymaker-relevant research was hardly as clear-cut as I had hoped. Nevertheless, we have some ideas of how to move the conversation forward.
 
One, it may not be necessary to study an actual policy or program in order to provide information critical to making policy or programmatic decisions. However, two, it does seem critical not only to ‘do stakeholder engagement’ (another loose term) but to actually work with stakeholders to identify the questions they need answered in order to make a prioritization, programmatic, or policy decision. Again, intentionality seems an important consideration in trying to identify policy relevance. There should be clear and tangible decision-makers who intend to use the generated evidence to work towards a pre-stated decision goal – including a decision to shut the door on a particular policy or program option.




Photograph of researchers uploading the data by Center for International Forestry Research (CIFOR) via Flickr

Comments

Submitted by Kate on

Hi Heather

Thanks for an interesting post, I think you've raised some important distinctions. I've been thinking and writing a bit about the relevance of research recently. A couple of aspects I've noticed, just to throw into the thinking:
I think the point that "What is policymaker-relevant in a particular case will depend very much on the position of the decision makers with whom the researcher or evaluator is engaging" is really important. As others have said, relevance is very much in the eye of the beholder, and what is relevant to one group/individual might not be to others. There are different priorities within and between organisations. That means relevance is a very political concept (as researchers have been saying for decades) - and we perhaps need to be clear about who we want to be relevant to.

Discussions I've had also point to the importance of the research process as well as the question for determining relevance - a question that is seen as relevant doesn't always lead to research findings that are seen as relevant. For example, if the methods or analysis are weak or the findings aren't communicated effectively, the research might not be (seen as) helpful for decision-making.

Both aspects might well be covered in more detail in some of the other blogs and papers you mention. I always struggle with wanting to respond to interesting blog posts in short lunchbreaks when I don't have time to read all the links - sorry!

Thanks again - and for your other interesting posts.

Submitted by Heather on

Kate, thanks for letting me be your lunch break buddy! Even partially formed responses and ideas are always welcome.

As for your points, I agree with them and that the conversation needs to be refined further.

Relevance probably depends partially on intent about the most proximate user, which is about process and politics. And the process of getting the question right for the most immediate intended user needs far more emphasis.

Lots of research and evaluation will, of course, be useful to many people in many ways, but the relevance label strikes me as being very much about engagement on what questions need answering rather than being determined just by the topic.

Thanks again for taking the time to respond.
