In this season of making resolutions (and hopefully sticking to a few of them), we invite you to join us for a year-long skills-transfer discussion/blog series on technology-aided gut (TAG) checks.
TAG is a term we have coined to describe the use of simple web programming tools and techniques to do basic gut checks on data, big and small. TAG does not replace data science; rather, it complements it. TAG empowers you, the development professionals who rely on the story the data tells to accomplish your tasks. It does so by giving you a good-enough idea about the data before you delve into sophisticated data science methods (here is a good look at the last 50 years of data science from Stanford’s Dr. Donoho). In many cases it actually allows you to add your own insights to the story the data tells. As the series progresses we will talk a lot about TAG checks. For the eager-minded, here’s an example of TAG usage in US politics.
In this series, we will use a just-in-time learning strategy to help you learn to do TAG checks on your data. Just-in-time learning, as the name implies, is all about providing only the right amount of information at the right time: the minimum, essential information needed to help a learner progress to the next step. If the learner has a specific learning objective, just-in-time learning can be extremely efficient and highly effective. A good example of just-in-time information is the voice command a GPS gives you right before a turn. Contrast this with using maps before the days of GPS: you were given far more information than you needed, in a format not conducive to processing while driving.
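To make the idea concrete, here is a minimal sketch of what a TAG-style gut check might look like in Python, using only the standard library. The data, column names, and thresholds are all hypothetical, invented for illustration; the point is simply that a few lines of code can flag missing values and a mean/median gap before any sophisticated analysis begins.

```python
# A hypothetical TAG-style gut check on a small, made-up survey dataset.
import csv
import io
import statistics

# Illustrative data only: household incomes with one gap and one outlier.
raw = """district,income
North,320
North,350
South,310
South,
East,9000
East,340
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Gut check 1: how many records are missing a value?
missing = sum(1 for r in rows if not r["income"])
print(f"missing income values: {missing}")

incomes = [float(r["income"]) for r in rows if r["income"]]

# Gut check 2: does the mean differ wildly from the median?
# A large gap hints at outliers or data-entry errors worth a second look.
mean, median = statistics.mean(incomes), statistics.median(incomes)
print(f"mean={mean:.0f}, median={median:.0f}, max={max(incomes):.0f}")
if mean > 2 * median:
    print("mean is more than twice the median - inspect the extremes")
```

Nothing here replaces a proper analysis; it just gives you that "good enough idea" of the data's shape, and a reason to question the 9000 before it skews a fancier model.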
How to avoid “We saw the evidence and made a decision…and that decision was: since the evidence didn’t confirm our priors, to try to downplay the evidence”
Before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, obvious one: many people are involved in evaluations. We use the word ‘involved’ rather broadly. Our central focus for this post is people who may block the honest presentation of evaluation results.
In any given evaluation of a program or policy, several groups of organizations and people have a stake. Most obviously, there are researchers and implementers. There are also participants. And, for much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both may work through sub-contractors and consultants, bringing yet others on board.
Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of the results. The focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken’s terms, researchers are not nefarious and power-hungry (statistically speaking) (2015). To achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis or commitment to analysis plans) and study registration.
To make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those that can block the honest sharing of results. This is in line with, for example, EGAP’s third research principle on rights to review and publish results. We return to some ideas of how to encourage this at the end of the blog.
What is the 'results agenda' and how does it relate to transformational change within development? The recent publication of a report from The Independent Commission for Aid Impact (ICAI), which scrutinizes UK aid spending, has brought these questions to life once again. Here are some takeaways on the report and the need for systems thinking, accountability, and flexibility from Suvojit Chattopadhyay.
Craig Valters’ Devex post, based on yet another newsworthy ICAI report, seems to have somewhat revived the debate over the ‘results agenda'. The criticism is sharper, castigating DFID for the “unintended effect of focusing attention on quantity of results over their quality”, but it also clearly implies that the ‘results agenda’ is not well understood or widely shared within donors like DFID. Focusing on ‘results’ cannot mean a divorce from long-term outcomes. What ICAI describes sounds more like an outputs agenda that is transactional (what your money can buy) rather than transformative (the good change).
The consequence of this bean-counting is that complex problems risk being ignored: donors and the partners they fund will tend to focus on projects rather than systems. Also, genuine accountability along the aid chain takes a hit due to a general breakdown of trust between the different actors. So what can we do about this?
No thoughtful technocrat would copy a program in every detail for a given context in her or his country. That's because they know (among other things) that economics is not a natural science but a social (or even dismal) one, and so replication in the fashion of chemistry isn't an option. For economics, external validity in the strict scientific sense is a mirage.
Engaging individuals to share their knowledge and learning on development challenges and solutions with the wider community is a core value of the WBG’s Open Learning Campus. In this context the story is often a powerful learning tool. This idea is not a new one; in fact, stories have been a universal form of knowledge transfer for over 100,000 years as a way of connecting people and creating a common perspective on social, economic, political and cultural issues that they care about.
However, the above statements apply only to effective storytelling, which requires sustained engagement with the community and adequate influence over its learning and knowledge-accretion process. Research has shown that information alone, even critically valuable information, is markedly ineffective in changing core attitudes, beliefs, and behaviors without the context, relevance, and engagement that effective story structure provides.
It is easy to see that data is crucial to the agency’s operations. Sitting down with EDL’s employees and managers—all wearing the agency’s signature blue-shirt uniform with pride—it also becomes apparent that the science of numbers and the art of managing people have gone hand in hand at this agency. This combination has enabled EDL to make organizational learning a central pillar of the agency’s success.
Institutions Taking Root, a recent report of which I’m a co-author, looked at nine successful institutions in fragile and conflict-affected states that share a core set of internal operational strategies.
Swimming is to cats what rational thinking is to humans: they can do it, but usually begrudgingly.
While people like to think of themselves as independent thinkers who employ rational thought to make decisions (and this can sometimes be true), many of our choices are influenced by social instincts. What goes through our minds is derived, in large part, from what goes through the minds of those around us.
According to the book I’ll Have What She’s Having by Alex Bentley, Mark Earls, and Michael J. O’Brien, humans are fundamentally prosocial creatures that collaborate and copy the behaviors and choices of others when making decisions.
Each month, People, Spaces, Deliberation shares the blog post that garnered the most attention.
For July 2014, the featured blog post is "World Bank’s Four Year Access to Information Policy Update."
It’s been four years since the World Bank enacted its Access to Information Policy, and to mark the occasion this blog post covers the facts, figures, and developments that have made this Policy a success. Read the blog post to learn more!
Monitoring, Evaluation and Learning (MEL) used to send me into a coma, but I have to admit, I’m starting to get sucked in. After all, who doesn’t want to know more about the impact of what we do all day?
So I picked up the latest issue of Oxfam’s Gender and Development Journal (GAD), on MEL in gender rights work, with a shameful degree of interest.
Two pieces stood out. The first, a reflection on Oxfam’s attempts to measure women’s empowerment, had some headline findings that ‘women participants in the project were more likely to have the opportunity and feel able to influence affairs in their community. In contrast, none of the reviews found clear evidence of women’s increased involvement in key aspects of household decision-making.’ So changing what goes on within the household is the toughest nut to crack? Sounds about right.
But (with apologies to Oxfam colleagues), I was even more interested in an article by Jane Carter and 9 (yes, nine) co-authors, looking at three Swiss-funded women’s empowerment projects (in Nepal, Bangladesh, and Kosovo). They explored the tensions between the kinds of MEL preferred by donors (broadly, generating lots of numbers) and alternative ways to measure what has been going on.