Has ‘multistakeholderism… become a mantra, void of its progressive potential and outcomes’? Stefania Milan and Arne Hintz analyze internet governance’s hyper-focus on multistakeholderism and how civil society should adopt a clear IG agenda.
“All I’m saying is, if #multistakeholder were a drinking game, I’d be in the hospital with alcohol poisoning right about now,” tweeted civil society delegate @pondswimmer during the opening ceremony of the recent Internet Governance Forum (IGF) in Istanbul, where references to the multistakeholder principle were as omnipresent (and, seemingly, mandatory) as thanking the local organizers. Since the World Summit on the Information Society (WSIS) in 2003 and 2005, the idea of bringing together governments, the business sector, and civil society for debate and policy development has been celebrated and promoted. Probably nowhere has multistakeholder governance been implemented as thoroughly as in internet governance, where civil society actors and experts occupy key positions in the Internet Corporation for Assigned Names and Numbers (ICANN) and where all stakeholders discuss relevant policy issues at the IGF on (supposedly) equal footing. It is now unimaginable to discuss the governance of the internet without some form of multistakeholder participation. References to multistakeholder processes have been pervasive in speeches and documents, from the official 2003 WSIS press release titled “Summit Breaks New Ground with Multi-Stakeholder Approach” which praised the method rather than highlighting the substantial issues of the summit, to the NETmundial outcome document calling for “democratic, multistakeholder processes, ensuring the meaningful and accountable participation of all stakeholders, including governments, the private sector, civil society, the technical community, the academic community and users.”
From my seat as an Education economist at the World Bank, I go through a number of strategies from countries and sectors in Africa outlining how best to achieve economic growth and development. I am repeatedly struck by a key question: Who will do it? Who will add value to African exports? Who will build? Who will invent? Who will cure? The answer is, of course, that graduates from African universities and training institutions should do it. But the problem is one of numbers and quality—there are simply not enough graduates in science, technology, engineering and math (STEM), and programs are of uneven quality.
“There is almost nothing that government can or should do alone,” said one of the panelists at a recent global webinar on the future of digital government.
This was just one of the many signals of the disruptive and creative impact that digital platforms, dynamic connections and cross-sector co-design and participation are having on the role and practice of governments. While some are resisting, the outcomes that many predicted in the early days of e-government are now possible through “silo-busting,” merged back-office infrastructures and focused collaborative relationships with civil society, businesses, citizens and communities. To some degree, this reflects Professor Carlota Perez’s creative construction phase of a revolution (also described in this paper) and reinforces two critical success factors: execution and deployment capabilities.
Eight senior government leaders from the World Bank-sponsored High-Level Experts, Leaders and Practitioners (HELP) network, together with participants from 35 countries, led a discussion on the challenges and opportunities associated with digital government.
These are some of the views and reports relevant to our readers that caught our attention this week.
The State of Broadband 2014: Broadband for all
Broadband Commission for Digital Development (ITU and UNESCO)
The Broadband Commission for Digital Development aims to promote the adoption of effective broadband policies and practices for achieving development goals, so everyone can benefit from the advantages offered by broadband. Through this Report, the Broadband Commission seeks to raise awareness and enhance understanding of the importance of broadband networks, services, and applications to guide international broadband policy discussions and support the expansion of broadband where it is most needed. This year, the Report includes a special focus on the importance of integrating ICT skills into education to ensure that the next generation is able to compete in the digital economy.
Facebook Lays Out Its Roadmap for Creating Internet-Connected Drones
If companies like Facebook and Google have their way, everyone in the world will have access to the internet within the next few decades. But while these tech giants seem to have all the money, expertise, and resolve they need to accomplish that goal—vowing to offer internet connections via things like high-altitude balloons and flying drones—Yael Maguire makes one thing clear: it’s going to be a bumpy ride. “We’re going to have to push the edge of solar technology, battery technology, composite technology,” Maguire, the engineering director of Facebook’s new Connectivity Lab, said on Monday during a talk at the Social Good Summit in New York City, referring to the lab’s work on drones. “There are a whole bunch of challenges.”
History shows that investments in agriculture can be a catalytic force in the fight against hunger, poverty and malnutrition and a well-performing farm economy can be an instrument for achieving sustained structural economic transformation. Agricultural growth was the precursor to industrial growth in Europe and, more recently through the Green Revolution, in large parts of Asia and Latin America. The Green Revolution bypassed Africa.
When I was elected President of the Republic of Ghana in 2000, agriculture was a mainstay of the nation’s economy, accounting for 35% of its GDP, 55% of employment and 75% of export revenues. But it was a lagging, orphan sector, suffering from decades of neglect and lack of investment. Ghana’s agriculture had sadly changed little from the kind practiced generations ago. Farmers were still eking out a living, tilling the land by hand, much like their ancestors.
Cities are becoming the new ecosystems for innovation. Recent studies on venture capital (VC) investment in the United States reveal that innovation is moving from suburbs to downtown areas. Today, San Francisco attracts more VC investment than Silicon Valley, and New York – a city where the innovation startup scene was merely anecdotal 10 years ago – has become the third-largest technology startup ecosystem in the United States, with more than US$2.4 billion in VC investment in 2011.
This trend is not unique to the United States. Start-ups are surging in other major cities around the world, including London, Berlin, Madrid, Moscow, Istanbul, Tel Aviv, Cape Town, Mumbai, Buenos Aires and Rio de Janeiro, to name a few.
New technology trends have reduced the cost of technology innovation. Cloud computing, open software and hardware, social networks and global payment platforms have made it easier to create a startup with fewer physical resources and personnel. If in the 1990s an entrepreneur needed US$2 million and months of work to develop a minimum viable prototype, today she would need less than US$50,000 and six weeks of work (in some cases, these costs can be as low as US$3,000). This trend is allowing entrepreneurs to take advantage of cities’ agglomeration effects: entrepreneurs “want to live where the action is,” where other young people, peers and fellow entrepreneurs are. They look for conventional startup support, such as mentor networks or role models, but also for nightlife, meet-ups, social activities and other potential for “collisions” – a combination best provided by cities.
A news item from Computer Weekly casts doubt. A recent report notes that, in the United Kingdom (UK), poor data quality is hindering the government’s Open Data program. The report goes on to explain that – in an effort to make the public sector more transparent and accountable – UK public bodies have been publishing spending records every month since November 2010. The authors of the report, who conducted an analysis of 50 spending-related data releases by the Cabinet Office since May 2010, found that the data was of such poor quality that using it would require advanced computer skills.
Far from being a one-off problem, research suggests that this issue is ubiquitous and endemic. Some estimates indicate that as much as 80 percent of the time and cost of an analytics project is attributable to the need to clean up “dirty data” (Dasu and Johnson, 2003).
In addition to data quality issues, data provenance can be difficult to determine. Knowing where data originates and by what means it has been disclosed is key to being able to trust data. If end users do not trust data, they are unlikely to believe they can rely upon the information for accountability purposes. Establishing data provenance does not “spring full blown from the head of Zeus.” It entails a good deal of effort undertaking such activities as enriching data with metadata – data about data – such as the date of creation, the creator of the data, and who has had access to the data over time, and ensuring that both data and metadata remain unalterable.
There is much (potentially) to be excited about here. Few would argue against having greater access to more learning opportunities, especially when those opportunities are offered for 'free', where there is latent unmet demand, and where the opportunities themselves are well constructed and offer real value for learners. As with MOOCs at the level of higher education, however, we perhaps shouldn't be too surprised if these new opportunities at the high school level are first seized upon *not* by some of the groups with the greatest learning needs -- for example, students in overcrowded, poorly resourced secondary schools in developing countries, or even students who would like a secondary education, but for a variety of reasons aren't able to receive one -- but rather by those best placed to take advantage of them. This has largely been the case for initial adopters of MOOCs. (One of the first studies of this aspect of the 'MOOC Phenomenon', which looked at MOOCs from the University of Pennsylvania, found that students tended to be "young, well educated, and employed, with a majority from developed countries.")
As a practical matter, some of the first types of beneficiaries may, for example (and I am just speculating here), be homeschooling families in North America (while not necessarily comparatively 'rich' by local standards, such families need to be affluent enough to be able to afford to have one parent stay at home with the kids, and generally have pretty good Internet connectivity); international schools around the world (which can offer a broader range of courses to students interested in an 'American' education); and the families of 'foreign' students looking to apply to college in the United States (the edX course “COL101x: The Road to Selective College Admissions” looks, at least to my eyes, tailor-made for certain segments of the population of learners in places like China, Korea, Hong Kong, etc.). In other words, at least in the near term, a Matthew Effect in Educational Technology may be apparent, where those who are best placed to benefit from the introduction of a new technology tool or innovation are the ones who indeed benefit from it the most.
Longer term, though, it is possible to view this news about movement of a major MOOC platform into the area of secondary education as one further indication that we are getting further along from the 'front end of the e-learning wave' (of which MOOCs are but one part) to something that will eventually have a greater mass impact beyond what is happening now in the 'rich' countries of North America and the OECD.
Learning with new technologies has of course been around for many decades but, broadly speaking, has not (yet) had the 'transformational' impact that has long been promised. "Gradually, then suddenly" is how one of Ernest Hemingway's characters famously describes how he went bankrupt. Might this be how the large scale adoption of educational technologies will eventually happen as well in much of the world?
If so, one credible potential tipping point may be a 'black swan' event that could push all of this stuff into the mainstream, especially in places where it has to date been largely peripheral: some sort of major health-related scare. (For those unfamiliar with the term, which was popularized by Nassim Nicholas Taleb, a 'black swan' is a rare event that people don't anticipate but which has profound consequences.) One of the first ever posts on the EduTech blog, Education & Technology in an Age of Pandemics, looked at some of what had been learned about how teachers and learners use new technologies to adapt when schools were closed in response to outbreaks involving the H1N1 influenza virus: the 'swine flu' that afflicted many in Mexico about six years ago; and an earlier outbreak of 'bird flu' in China. I have recently been fielding many calls as a result of the current outbreak of the Ebola virus in West Africa asking essentially, 'Can we do anything with technology to help our students while our schools are closed?', and so I thought it might be useful to revisit, and update, that earlier post, in case doing so might be a useful contribution to a number of related discussions that are occurring.
Each year on 8 September, groups around the world gather together to celebrate "International Literacy Day", which is meant to highlight the importance of reading, and of being able to read. In the words of UNESCO, the UN organization which sponsors International Literacy Day, "Literacy is one of the key elements needed to promote sustainable development, as it empowers people so that they can make the right decisions in the areas of economic growth, social development and environmental integration." As contentious as education issues around the world can be at times, there is little debate about the fundamental importance of literacy to most human endeavors.
New technologies can play important roles in helping to enable efforts and activities to teach people to learn how to read -- and to provide people with access to reading materials. As part of its communications outreach on International Literacy Day this year, for example, UNESCO highlighted recent experiences in Senegal targeting illiterate girls and women, where it has found that "mobile phones, computers, internet and TV make literacy courses much more attractive for illiterate women."
The potential for mobile phones and other mobile devices like e-readers to aid in literacy efforts has been a recurrent theme explored on the EduTech blog. In so-called 'developing countries', books may be scarce and/or expensive in many communities -- and reading materials that *are* locally available may not be of great interest or relevance to many potential readers. The fact that increasing numbers of people in such communities are carrying small portable electronic devices with them at all times capable of displaying text, and which indeed can hold hundreds, even thousands of digital 'books', has not gone unnoticed by organizations seeking to increase literacy and promote reading.
Two recent publications -- Reading in the Mobile Era and Mobiles for Reading: A Landscape Review -- attempt to take stock of and learn from many of the leading efforts around the world in this regard.