Published on Data Blog

Competition and the rise of the machines: Should the AI industry be regulated?

A multinational conglomerate uses artificial intelligence (AI) algorithms to gather intelligence about the news you read, your social media activity, and your shopping preferences. It chooses the ads you passively consume in your newsfeed and across your social media accounts, your internet searches, and even the music you hear, creating an ever more customized version of reality built specifically for you. Your days are subtly shaped by marketers, behavioral scientists, and mathematicians armed with cloud supercomputers. All of this is done in the name of maximizing profit, by influencing what you think, what you buy, and whom you will elect…

Sound familiar? Apocalyptic prognoses of the impact of AI on the future of human civilization have long been en vogue, but they seem to be increasingly frequent topics of popular discussion. Elon Musk, Bill Gates, Stephen Hawking, Vint Cerf, and Ray Kurzweil, together with a host of other commentators and, of course, all the Matrix and Terminator films, have expressed a spectrum of concerns about the world-ending implications of AI. These run the gamut from the convincingly possible (widespread unemployment[1]) to the increasingly plausible (varying degrees of mind control) to the outright cinematic (rampaging robots). François Chollet, creator of the Keras deep learning library, sees the potential for “mass population control via message targeting and propaganda bot armies.” Calls for study, restraint, and/or regulation typically follow these remonstrations.

There is a rationale for government intervention. The AI industry could benefit from safeguards against the potential for monopolistic behavior by any of the primary actors in the space, particularly as several of them increasingly capitalize on their ability to predict and influence human behavior. But as more corporate competitors enter this space, that risk may diminish: increased corporate competition is generally accepted as an effective way to promote consumer well-being.

Imagine a bank that develops an AI algorithm to predict a prospective client’s likelihood of defaulting on a loan. The algorithm is trained on a database containing extensive data collected from social media, health insurance, and phone and credit card records, as well as by tracking each client’s movements, analyzing his or her purchasing patterns, and more. The bank can then offer interest rates that are customized for each client, minimizing lending risk and maximizing both profits and efficiency. This hypothetical bank would likely gain monopolistic power, and quickly, too.
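To make the mechanics concrete, here is a minimal sketch of such risk-based pricing. Everything in it is an assumption for illustration: the features, the synthetic data, and the pricing rule (a base rate plus a premium scaled by predicted default probability) are hypothetical, not a description of any real bank’s system.

```python
# Minimal sketch of risk-based loan pricing. All data, features,
# and pricing parameters below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical client features (e.g., income, spending volatility,
# mobility score) and synthetic default outcomes for training.
X = rng.normal(size=(1000, 3))
defaulted = (X @ np.array([-1.0, 0.8, 0.5]) + rng.normal(size=1000)) > 1.0

model = LogisticRegression().fit(X, defaulted)

def quoted_rate(client_features, base_rate=0.03, max_premium=0.20):
    """Quote a rate: base rate plus a premium scaled by predicted risk."""
    p_default = model.predict_proba(client_features.reshape(1, -1))[0, 1]
    return base_rate + max_premium * p_default

print(f"Quoted rate for a new client: {quoted_rate(rng.normal(size=3)):.2%}")
```

The better the bank’s data and model, the more finely it can price each loan, which is exactly what gives it an edge over competitors that lack comparable data.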

Now, a second bank develops its own AI algorithm. It predicts, equally well, the risk of clients defaulting on a loan using a similar range of each client’s characteristics. Using the algorithm, this second bank increases its market share. To attract clients, the first bank lowers its rates by refining its AI algorithm and bringing in more data. The second bank responds in kind. This “AI arms race” soon eats into both banks’ profit margins, driving rates down nearly to cost recovery, and as a result neither bank will have much power over its potential clients (we ignore, for simplicity, the Cournot duopoly outcome).
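The undercutting dynamic can be illustrated with a toy simulation. All the numbers below (starting rate, marginal cost, undercut step) are assumptions chosen purely for illustration; the point is only that repeated undercutting drives both banks’ quoted rates toward cost.

```python
# Toy simulation of the rate-undercutting "arms race" described above.
# All numbers are illustrative assumptions, expressed in basis points.
COST_BPS = 200    # marginal cost of lending (assumed)
START_BPS = 1000  # both banks' initial rate offer (assumed)
STEP_BPS = 10     # minimum undercut per move (assumed)

rates = {"bank_a": START_BPS, "bank_b": START_BPS}
rounds = 0
# Each bank undercuts the best current offer while a cut stays profitable.
while min(rates.values()) - STEP_BPS > COST_BPS:
    rounds += 1
    for bank in rates:
        rates[bank] = min(rates.values()) - STEP_BPS

print(f"After {rounds} rounds: {rates} (cost = {COST_BPS} bps)")
```

Both quoted rates end up within one undercut step of cost: the Bertrand-style outcome the paragraph describes, in which neither bank retains pricing power.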

One can envision similar scenarios playing out across contexts, which adds fuel to the fire of popular concern about the impact of AI: firms push their products; political parties compete for voters; interest groups promote their causes; and so on. In the long run, competition would reduce the ability of any single group to consistently sway outcomes in its favor or to control people’s behavior.

However, meaningful corporate competition in the AI space is not happening. In fact, we may be well on our way to monopolies on the scale of Standard Oil. Google now handles about 81% of all online searches and captures 85% of search ad revenue; Google also owns YouTube and Chrome, the latter accounting for 60% of the browser market. Facebook controls 77% of mobile social traffic for more than 2 billion users around the world. And in different corners of the globe, antitrust crusaders are making the case that certain of the AI tech giants may be engaged in anticompetitive behavior, for which traditional antitrust law is a suitably blunt instrument. European antitrust regulators most recently fined Google €4.34 billion in July 2018 (following a 2017 fine of €2.4 billion for anticompetitive behavior concerning its online shopping search service) and ordered the company to stop using its Android mobile operating system to block rivals. It remains to be seen whether the company’s efforts to comply with the ruling, which it has appealed, will be sufficient. More efforts like this are afoot, but movements like the one led by the Open Markets Institute are still in their infancy.

An important question is whether traditional antitrust regulations and related legislation are at all sufficient to tackle the broader AI landscape. To some, Google and Facebook are “innocent monopolies” or “two-sided platforms” (Tirole 2014), which are difficult to break up using standard antitrust mechanisms because they offer free services (e.g., Google Maps, Microsoft Skype) and actually lower prices for consumers (Economist, 2016). US antitrust laws were originally designed to prevent monopolies from forcing higher prices on consumers; regulations generally focus on antitrust indicators like the size of a company (rather than its capabilities, the extent of its data assets, the nature of its data usage, and the quantity of data it possesses) and on human intent and action (rather than the functions of automated bots). Investigations into anticompetitive behavior are typically traditional affairs, centering on human interactions and manipulation to generate particular outcomes. But bots don’t leave voicemails or email trails that could bolster a court case. These new realities are already making it harder to detect anticompetitive actions.

Amid this challenging legal and regulatory environment, the tech giants are attracting ever more customers and quietly continuing to access and generate ever more data to perfect their AI algorithms. These giants will continue to dominate the AI landscape. Such market concentration, coupled with the limitations of antitrust legal and regulatory regimes, is a worrisome state of affairs even for the world’s largest economies. Smaller developing countries are more vulnerable still. The risk that AI algorithms might exert effective control over vulnerable populations in a small country is not merely great; in some places it is already materializing.

So where do we turn? Multilateral development organizations are uniquely situated to provide some leadership here. Organizations like the World Bank already guide clients around the world with policy advice on regulatory governance. Similar support to regulate international AI markets and promote competition would not be a significant stretch for the institution. The Bank, as an apolitical, public-interest multilateral organization known for its technical expertise, occupies a unique position to provide regulatory guidance and should take this role seriously as a further avenue for enabling public goods.


[1] Over the last 60 years, computing power increased by hundreds of billions of times. Yet that growth resulted in the elimination of only one occupational code from the list of 270 detailed occupations in the 1950 US Census (Bessen 2016): elevator operator.

Authors

Michael M. Lokshin

Lead Economist, Office of the Regional Chief Economist, Europe and Central Asia

Craig Hammer

Senior Program Manager, Development Data Group, World Bank
