Will widespread adoption of emerging digital technologies such as the Internet of Things and Artificial Intelligence improve people’s lives? The answer appears to be an easy “yes.” The positive potential of data seems self-evident. Yet the issue is being actively debated across international summits and events. The agenda of the Global Technology Governance Summit 2021, for instance, is dedicated to questions around whether and how “data can work for all”, emphasizing trust and, especially, the ethics of data use. Not without reason: at least 50 countries are independently grappling with how to define ethical data use without violating people’s privacy, personal data, and other sensitive interests.
Ethics goes online
What is ethics per se? Aristotle proposed that ethics is the study of human relations in their most perfect form. He called it the science of proper behavior. Aristotle claimed that ethics is the basis for creating an optimal model of fair human relations; it lies at the foundation of a society’s moral consciousness, supplying the shared principles necessary for mutual understanding and harmonious relations.
Ethical principles have evolved many times over since the days of the ancient Greek philosophers and have been repeatedly rethought (e.g., hedonism, utilitarianism, relativism, etc.).
Digital chaos without ethics
The lockdowns of 2020 clearly demonstrated that we have plunged irrevocably into the digital world. Just a few examples:
- The common exclusion of women as test subjects in much medical research results in a lack of relevant data on women’s health. Heart disease, for example, has traditionally been thought of as a predominantly male disease. This has led to widespread misdiagnosis or underdiagnosis of heart disease in women.
- A study of AI tools that authorities use to determine the likelihood that a criminal reoffends found that algorithms produced different results for black and white people under the same conditions. This discriminatory effect has resulted in sharp criticism and distrust of predictive policing.
- Amazon abandoned its AI hiring program because of its bias against women. The algorithm was trained on the resumes of candidates for job postings over the previous ten years. Because most of those applicants were men, it developed a preference for male candidates and penalized features associated with women.
These examples all contribute to distrust or rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that increase biases, profiling, and inequality? This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines.
Figure 1. Data ethics concept, 2010-2021
Figure 2. AI ethics concept, 2010-2021
Guiding data ethics
Countries are taking wildly differing approaches to address data ethics. Even the definition of data ethics varies. Look, for example, at three countries—Germany, Canada, and South Korea—with differing geography, history, institutional and political arrangements, and traditions and culture.
Germany established a Data Ethics Commission in 2018 to provide recommendations for the Federal Government’s Strategy on Artificial Intelligence. The Commission declared that its operating principles were based on the Constitution, European values, and its “cultural and intellectual history.” Ethics, according to the Commission, should not begin with establishing boundaries. Rather, when ethical issues are discussed early in the creation process, they may make a significant contribution to design, promoting appropriate and beneficial applications of AI systems.
In Canada, the advancement of AI technologies and their use in public services has spurred a discussion about data ethics. The Government of Canada’s recommendations focus on public service officials and processes. It provided guiding principles to ensure the ethical use of AI and developed a comprehensive Algorithmic Impact Assessment online tool to help government officials explore AI in a way that is “governed by clear values, ethics, and laws.”
The Korean Ministry of Science and ICT, in collaboration with the National Information Society Agency, released Ethics Guidelines for the Intelligent Information Society in 2018. These guidelines build on the Robots Ethics Charter and call for developing AI and robots that do not have “antisocial” characteristics. Broadly, Korean ethical policies have focused mainly on the adoption of robots into society, while emphasizing the need to balance protecting “human dignity” and “the common good.”
Do data ethics need a common approach?
The differences among these initiatives seem to be related to traditions, institutional arrangements, and many other cultural and historical factors. Germany places emphasis on developing autonomous vehicles and presents a rather comprehensive view of ethics; Canada concentrates on guiding government officials; Korea approaches the questions through the prism of robots. Still, none of them clearly defines what data ethics is. None of them is meant to have legal effect. Rather, they stipulate the principles of the information society. In our upcoming study, we intend to explore the reasons and rationale for the different approaches that countries take.
But the sooner we reach a consensus on key definitions, principles, and approaches, the sooner the debates can turn into real actions. Data ethics is equally important for governments, businesses, and individuals, and should be discussed openly. The process of such discussion will itself serve as an awareness-raising and knowledge-sharing mechanism.
Recall the Golden Rule of Morality: Do unto others as you would have them do unto you. We suggest keeping this in mind when we all go online.