The Ethics of Artificial Intelligence

Bostrom, Nick, and Eliezer Yudkowsky. “The ethics of artificial intelligence.” The Cambridge handbook of artificial intelligence 1 (2014): 316-334.

Reflections by: Soroosh Shahtalebi

Introduction

This article is a chapter of the Cambridge Handbook of Artificial Intelligence and investigates the ethical issues we can expect to arise as machines become thinking machines. It is organized in five sections: the first covers issues that AI may raise in the near future; the second discusses the challenge of ensuring that AI operates safely; the third asks how we can determine whether an AI has moral status; the fourth argues that basic differences between AIs and humans should be taken into account when they are assessed from an ethical point of view; and the fifth considers the situation in which AI surpasses human-level intelligence.

Ethics in Machine Learning and Other Domain‐Specific AI Algorithms

Imagine that a bank uses an AI system to assess and prioritize loan applications. After a while, it becomes apparent that the system is not fair: it is biased against certain types of applications, discriminating between applicants on the basis of unwanted cues. What do we learn from this example?
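The kind of unfairness described above can be made concrete with a simple audit. The sketch below is a hypothetical illustration, not something from the article: it checks a batch of loan decisions for a demographic-parity gap, i.e. whether approval rates differ sharply between applicant groups. The group labels, decisions, and the notion of a tolerance threshold are all assumptions of the example.

```python
# Hypothetical fairness audit for a loan-scoring system: compare approval
# rates across applicant groups. All data and thresholds are invented.

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False),  # 25% approved
]
print(f"parity gap: {parity_gap(decisions):.2f}")  # prints "parity gap: 0.50"
```

A gap this large would prompt exactly the question the article raises: is the system picking up on unwanted cues? In practice an audit would use far richer fairness metrics, but even this minimal check shows that the bank's problem is detectable from the outside.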

The first requirement of an AI system should be its “transparency to inspection”: in brief, we should be able to find out why the AI arrives at a given answer and what the mathematical drivers behind it are. Techniques such as decision trees and Bayesian networks are transparent to the programmer, unlike methods based on neural networks or evolutionary computation.
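The contrast can be illustrated with a toy example; the scorer below is an invention for this reflection, not the article's. A rule-based loan scorer is "transparent to inspection" in the sense that every decision carries a trail of the exact rules that fired, whereas a trained neural network offers only a vector of weights. The rule names and thresholds are hypothetical.

```python
# Hypothetical sketch of "transparency to inspection": a loan scorer built
# from explicit, human-readable rules, so every score can be traced back to
# the rules that produced it. Rules and thresholds are invented.

RULES = [
    ("income >= 40000",      lambda a: a["income"] >= 40000,      2),
    ("debt_ratio < 0.4",     lambda a: a["debt_ratio"] < 0.4,     1),
    ("missed_payments == 0", lambda a: a["missed_payments"] == 0, 1),
]

def score(applicant):
    """Return (total score, list of fired rules) -- the decision trail."""
    fired = [(name, w) for name, pred, w in RULES if pred(applicant)]
    return sum(w for _, w in fired), fired

applicant = {"income": 52000, "debt_ratio": 0.3, "missed_payments": 1}
total, trail = score(applicant)
print(total, [name for name, _ in trail])
# prints: 3 ['income >= 40000', 'debt_ratio < 0.4']
```

Unlike a neural network's millions of weights, the trail states exactly why the score came out as it did, which is what makes such systems auditable in the sense the article asks for.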

“It will become increasingly important to develop AI algorithms that are not just powerful and scalable, but also transparent to inspection—to name one of many socially important properties.”(p.2)

In addition to transparency, an AI system needs to be “predictable to those they govern”. The authors draw an analogy with the legal principle of stare decisis, which binds judges to follow past precedent whenever possible. It is argued that engineers building AI for such socially important roles likewise need to value predictability over novelty, so that the legal system can continue to provide a stable environment even in advanced technological fields.

“The job of the legal system is not necessarily to optimize society, but to provide a predictable environment within which citizens can optimize their own lives.” (p.2)

Another criterion discussed in the article is the robustness of AI algorithms against manipulation. Such robustness is a central and well-studied concern in the information security domain, and the same imperative carries over to machine learning.

The last social criterion discussed in the article is the question of who should take the blame when an AI system fails to deliver its expected task. In the bank example, should we blame the bank for the behaviour of its AI system? If not, who is responsible for the biased decisions the system makes?

“Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to not make innocent victims scream with helpless frustration: all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace human judgment of social functions; all criteria that may not appear in a journal of machine learning considering how an algorithm scales up to more computers. This list of criteria is by no means exhaustive, but it serves as a small sample of what an increasingly computerized society should be thinking about.” (p.3)

Machine with Moral Status

In this section, the discussion revolves around defining moral status and asking whether machines could satisfy such a definition. If they do, humans would need to respect certain rights of machines and follow corresponding guidelines. The discussion is grounded in Francis Kamm's definition of moral status; once a machine's moral status is established, it sets strong prohibitions against murdering it, stealing from it, and doing a variety of other things to it or its property without its consent.

“X has moral status = because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.” (p.6)

Contemporary AI does not satisfy this definition of moral status, and one might think that this relieves us of any responsibility towards it; however, the moral responsibilities we have to other beings, such as our fellow humans, can still impose moral constraints on our dealings with contemporary AI systems.

“Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer. Sapience: a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent.” (p.7)

To make the moral status of an AI system verifiable, two criteria, sentience and sapience, are proposed, the latter being the stricter condition. From these two criteria the authors derive the strong Principle of Substrate Non-Discrimination. Because AI systems are created rather than born, a complementary principle addressing how a being comes into existence is also introduced: the Principle of Ontogeny Non-Discrimination.

“Principle of Substrate Non-Discrimination: if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” (p.8)

“Principle of Ontogeny Non-Discrimination: if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” (p.8)

Lastly, it is noted that although these principles and taxonomies help us assess the moral status of an AI system, they leave many ethical questions and concerns unanswered, questions that arise with the novel features and capabilities AI systems introduce into our communities.

Copyright © 2020 IEAI, Inc. All rights reserved.