The big five multilateral development banks (MDBs), namely the World Bank Group, the African Development Bank, the Asian Development Bank, the European Bank for Reconstruction and Development, and the Inter-American Development Bank, collectively provided close to $100 billion in concessional and non-concessional lending in 2012 or FY13. Of late, their size, traditionally an advantage, has become something of a disadvantage. The MDBs are facing intense challenges in at least three major ways. One - criticism from academics, developing nations, and others that foreign aid is detrimental to a country’s growth. Two - technology has diluted the monopolistic advantages they once held (knowledge, networks, access to funding) and is enabling new models of development. As echoed by World Bank President Jim Kim, there is a "need for alignment" for development institutions in "a rapidly changing world." Three - more and more countries are shifting from demanding traditional loans to demanding knowledge and knowledge products, and development institutions are only now starting to respond to this challenge.
Tanya Gupta's blog
In our last two blogs, we spoke about why measurement is key for development professionals and what we should measure, and about some take-aways from the medical profession on the measurement of competencies and performance. In this blog, we discuss specific ways to apply those lessons to the development sector.
As we discussed, in the medical world, lessons learned in competency and performance measurement relate to:
The focus on competencies, performance, and the space in between
Competence being specific to situations and existing on a continuum
Assessment as a program of activity that uses multi-source qualitative and quantitative information
The importance of the reproducibility of assessments
Encouraging the use of a portfolio
But how can the above be specifically applied to development? Development practitioners can certainly take a page from the medical profession, as the stakes for getting measurement right are no less than bettering the lives of those who live on less than a dollar a day.
The medical profession, by necessity, has hard requirements (inflexible and critical requirements) for measuring competencies and performance. In fact, such measurement is mission critical. While the development profession does not have “hard” requirements, we can learn from their rigorous approach. Here are a few principles and rules that we could borrow:
According to a training industry report, no less than $55.4 billion was spent on training in 2013, including payroll and external products and services, in the US alone. The US and other countries spend a significant amount of money on employee development with the implicit assumption that training is correlated with improved on-the-job performance. However, what exactly should we measure to ensure that this money is well spent? What is it that we need to measure to determine that employees are performing as expected and thus benefitting from these training expenditures?
Two responses that we often get to this “what should be measured” question are “performance” and “competencies”. The Government Accountability Office (GAO) of the United States defines performance measurement as the “ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals.” Performance measures, therefore, help define what success at the workplace means (“accomplishments”), and attempt to quantify performance by tracking the achievement of goals. Competencies are generally viewed as “a cluster of related knowledge, skills, and attitudes” (Parry 1996), and are thought to be measurable, correlated with performance, and improvable through training. While closely connected, the two are not the same thing: competencies are acquired skills, while performance is the use of those competencies at work. Measurement of both is critical.
Our Top Ten Blog Posts by readership in 2013
This post was originally published on January 24, 2013
It will soon be nearly four years since then-San Francisco mayor Gavin Newsom visited Twitter headquarters. He told Biz Stone (one of the Twitter founders) about how someone from the city had sent him a Twitter message about a pothole. A discussion about "how we can get Twitter to be involved in advancing, streamlining, and supporting the governance of cities" led to the creation of @SF311 on Twitter, which would allow live reporting by citizens of service needs, feedback, and other communication. Perhaps the most innovative aspect at that time was that citizens would be able to communicate directly and transparently with the government. San Francisco was the first US city to roll out a major service such as this on Twitter.
Twitter offers several advantages over phone calls or written requests made by citizens, some of which I have mentioned before:
In just about a week, on Thursday, November 28, people all over the United States will kick off the "holiday season" with the celebration of Thanksgiving Day. While the day's significance is both historical and profound, in modern times it consists of a lot of shopping and a big meal with family and friends gathered around the dinner table. Pre-Thanksgiving is a time to be on the lookout for creative new recipes. Sure, we can get recipes from magazines, websites, and friends, and while they may be special, they will not be unique. Wouldn’t it be nice to have an app that would create a special, unique recipe just for you, a delightful recipe that has never been executed before? Well, the idea is not as futuristic as it sounds; it may be here sooner than you think. IBM and big data have a lot to do with this particular innovation.
Can computers be creative? IBM thinks they can. IBM scientist Lav R. Varshney and other members of an IBM team have used data sets and proprietary algorithms in the daunting field of the culinary arts to develop a computational creativity system. The data sets they used include recipes, molecular-level food data, and data about the compounds, ingredients, and dishes that people like and dislike. They then developed an algorithm that produces thousands or even millions of new ideas from those recipes. The candidate recipes are then evaluated to select the best ones, those that combine ingredients in a way that has never been attempted before. Humans can interact with the system by choosing a key ingredient and the kind of cuisine.
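The generate-then-evaluate loop described above can be sketched in a few lines of Python. Everything here is invented for illustration: the pairing scores, the recipe "database", and the scoring formula are toy stand-ins for IBM's proprietary data and algorithms, which are not public.

```python
import itertools

# Hypothetical "people like this combination" data (keys are sorted pairs)
PAIRING_SCORE = {
    ("basil", "tomato"): 0.9,
    ("chili", "chocolate"): 0.7,
    ("chili", "tomato"): 0.6,
    ("basil", "chocolate"): 0.2,
}
KNOWN_RECIPES = [{"basil", "tomato"}]   # stand-in for a recipe database

def pairing(a, b):
    # Unknown pairs get a neutral score
    return PAIRING_SCORE.get(tuple(sorted((a, b))), 0.5)

def score(combo):
    """Average pairwise pleasantness, plus a bonus for novelty."""
    pairs = list(itertools.combinations(sorted(combo), 2))
    quality = sum(pairing(a, b) for a, b in pairs) / len(pairs)
    novelty = 0.0 if set(combo) in KNOWN_RECIPES else 0.2
    return quality + novelty

def generate(key_ingredient, pantry, n=3):
    """Generate combinations around a user-chosen key ingredient,
    then rank them best-first, mimicking the generate/evaluate split."""
    candidates = [{key_ingredient, *extra}
                  for extra in itertools.combinations(pantry, 2)]
    return sorted(candidates, key=score, reverse=True)[:n]

best = generate("chili", ["basil", "tomato", "chocolate"])
print(sorted(best[0]))
```

The key design point this toy preserves is the separation between a cheap generator that proposes many combinations and an evaluator that ranks them on both predicted pleasantness and novelty.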
We have all been in meetings where we felt nothing was getting done. In the corporate world, the cost of inefficient meetings has been recognized. According to a recent CBS news report, professionals lose four work days each month in meetings, and of the 11 million meetings that occur in the U.S. every day, half the meeting time is actually wasted. There have been a lot of efforts to make meetings more productive, including efficient meeting templates, ground rules for meetings (pdf), etc. However, a scientific, data-driven approach to understanding “soft” phenomena such as meetings has until now been rare.
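To get a feel for the scale of those CBS figures, here is a back-of-the-envelope calculation. The average meeting length (one hour) and attendee count (five) are assumptions added for illustration; only the meeting count and the "half wasted" fraction come from the report quoted above.

```python
# Figures quoted from the CBS report
meetings_per_day = 11_000_000
wasted_fraction = 0.5          # "half the meeting time is actually wasted"

# Assumptions, not from the report
avg_length_hours = 1
attendees = 5

wasted_person_hours = (meetings_per_day * avg_length_hours
                       * attendees * wasted_fraction)
print(f"{wasted_person_hours:,.0f} person-hours wasted per day")
# prints "27,500,000 person-hours wasted per day"
```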
A paper, “Learning about Meetings” (pdf), by Been Kim and Cynthia Rudin at MIT is one of the first such efforts to employ a data-driven approach to the science of meetings (in this case, meetings that are held to arrive at a decision, not to brainstorm) to learn more about how meetings are conducted. Meetings are difficult to assess, as there are social signals and interpersonal dynamics that are difficult to capture. Kim and Rudin, using AMI data, show that it is possible to automatically detect when during a meeting a key decision is taking place; that there are common patterns in the way social dialogue acts are interspersed throughout a meeting; that at the time key decisions are made, the amount of time left in the meeting can be predicted from the amount of time that has passed; and, finally, that it is often possible to predict whether a proposal made during a meeting will be accepted or rejected based entirely on the language used by the speaker.
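The last finding, predicting acceptance from the speaker's language alone, can be illustrated with a deliberately crude sketch. The word lists and the scoring rule below are invented for illustration; the actual study learns its model from the AMI meeting corpus with far richer features than simple word counts.

```python
# Hypothetical word lists; the real model is learned from data,
# not hand-written like this.
ACCEPT_WORDS = {"agree", "good", "yes", "definitely", "great"}
REJECT_WORDS = {"but", "however", "concern", "expensive", "risky"}

def predict_acceptance(utterance):
    """Crude lexical score: more accept-words than reject-words
    means the proposal is predicted to be accepted."""
    words = utterance.lower().split()
    score = (sum(w in ACCEPT_WORDS for w in words)
             - sum(w in REJECT_WORDS for w in words))
    return "accept" if score > 0 else "reject"

print(predict_acceptance("yes that sounds great"))      # prints "accept"
print(predict_acceptance("however that seems risky"))   # prints "reject"
```

Even this toy version shows why the result is plausible: the language surrounding a proposal carries a measurable signal about its fate before the decision is announced.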
Some particularly interesting take-aways are:
Google’s every action is studied under a microscope. However, one major “mistake” that Google made may have gotten lost: Google’s policy of freeing up 20% of engineers’ time, no management approval needed, was cancelled. Yes, this is the same policy that was responsible for Gmail. The policy had been held up as a best practice at Google and in the tech community, and was advertised as a Googler perk. Although the 20% rule had been used at 3M and HP before, Google made it its own, and it resulted in industry-changing products.
You may ask - why was the 20% rule such a good idea, and why is removing it a mistake? The reason Google’s 20% time is a great idea is that it worked, and worked well. One needs a certain amount of freedom to be creative. A study on mechanisms of grant funding (long term vs. short term) found that freedom encourages creativity when the freedom is believed to be long term. “If you want people to branch out in new directions, then it’s important to provide for their long-term horizons, to give them time to experiment and potentially fail. The researcher has to believe that short-term failure will not be punished,” says Pierre Azoulay, an associate professor at the MIT Sloan School of Management and an author of the MIT study on the subject. Freedom of thought inspires creativity, and the development community, more than anyone else, needs to break away from traditional thinking.
Robots have been a part of our mythology for thousands of years, the emphasis alternating between their positive transformative power over human society and their role as agents of great destruction. Our image of robots has been shaped to a large extent by Hollywood and literature. Celluloid robots in Star Wars, 2001: A Space Odyssey, Robocop, Star Trek, and many of Isaac Asimov’s novels have become a part of the human story. Off-celluloid, robots have been helping our society in concrete ways (for example, police work such as bomb disposal, infrastructure projects, etc.). However, when Watson won Jeopardy!, it brought artificial intelligence and robotics a new kind of attention, and people started to wonder if robots could replace humans. When we think of robots, we think of self-driving cars, household robots, or even warrior robots. However, in our view, the influence of robots and Artificial Intelligence (AI) is more subtle, and their presence more ubiquitous, than one would think. One such impacted sector is agriculture (in the US), which is on the cusp of a massive transformation as it moves from mechanization to automation. When rolled out and commercialized, which may happen soon, this massive scale of automation will have a significant impact on US farming, and on immigration for sure. But does this also impact the development landscape? If so, how?
Agricultural robotic systems have been implemented in fruit and vegetable harvesting, greenhouses, and nurseries. Harvest Automation, for example, has developed the HV-100, a 90-pound robot for commercial nurseries that can pick up and rearrange potted plants. There are quite a few Silicon Valley startups contributing to this revolution in the region known as “America’s Salad Bowl”, around Salinas Valley. California, where Salinas Valley is located, produced $1.6 billion worth of lettuce in 2010 and more than 70% of all lettuce grown in America. Lettuce Bot, a new robot developed by Stanford engineers Jorge Heraud and Lee Redden, both from farming families (in Peru and Nebraska, respectively), can “produce more lettuce plants than doing it any other way” (Yahoo Finance). Lettuce Bot’s innovation is that, while attached to a tractor, it takes pictures of passing plants and compares them to a database. When a weed, or a lettuce head that is too close to another one, is identified, a concentrated dose of fertilizer is sprayed. A close shot of fertilizer kills the errant weed or lettuce head while feeding the farther-off crops at the same time.
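The spray-or-keep decision Lettuce Bot makes as it passes each plant can be sketched as a simple rule. The plant records, the spacing threshold, and the rule itself are invented for illustration; the real system works from images matched against a plant database, not pre-labeled records like these.

```python
MIN_SPACING_CM = 25   # assumed minimum healthy spacing between lettuce heads

def should_spray(plant, prev_lettuce_pos):
    """Spray weeds, and lettuce heads growing too close to the
    previous surviving head (thinning)."""
    if plant["kind"] == "weed":
        return True
    if (prev_lettuce_pos is not None
            and plant["pos_cm"] - prev_lettuce_pos < MIN_SPACING_CM):
        return True
    return False

# A toy row of plants as the tractor would encounter them
row = [
    {"kind": "lettuce", "pos_cm": 0},
    {"kind": "weed",    "pos_cm": 12},
    {"kind": "lettuce", "pos_cm": 18},   # too close to the head at 0 cm
    {"kind": "lettuce", "pos_cm": 40},
]

decisions = []
prev = None
for plant in row:
    spray = should_spray(plant, prev)
    if plant["kind"] == "lettuce" and not spray:
        prev = plant["pos_cm"]   # only surviving heads count for spacing
    decisions.append("spray" if spray else "keep")

print(decisions)   # prints "['keep', 'spray', 'spray', 'keep']"
```

Note that the same spray action handles both weeding and thinning, which is the economy of the design: one actuator, one concentrated dose, two agronomic jobs.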
- Surprising lack of consistent, reliable data on development effectiveness: Across the various sectoral interventions, we have no uniformly reliable data on the effectiveness of every dollar spent. For example, of every dollar spent on infrastructure programs in sub-Saharan Africa, how many cents are effective? Based on the same assumptions, do we have a comparable number for South East Asia? In other words, why don’t we have more data on possible development investments and the associated costs, benefits/returns, and risks?
- Failure to look at development effectiveness evidence at the planning stage: Very few development programs look at the effectiveness evidence before selecting a particular intervention. Say a sectoral intervention A in a particular region has a history of positive outcomes (due to attributable factors such as well-performing implementation agencies), as opposed to another intervention B, where the chances of improved outcomes are foggy. Given (roughly) the same needs, why shouldn’t we route funds to A instead of B at the planning stage? Why should we give equal preference to both based purely on need?