Ethical artificial intelligence

Artificial Intelligence (AI) is acquiring increasing importance in many applications that support decision-making in various areas, including healthcare, consumption, and risk classification of individuals. The growing impact of AI on people’s lives naturally raises questions about its ethical and moral components. Are AI decisions ethically acceptable? How can we ensure that AI remains ethical over time? Should we dominate AI and impose specific behavioural rules, possibly limiting its enormous potential, or should we allow AI to develop its own ethics, possibly ultimately subjugating us to intellectual slavery? Better still, is it possible for AI and human endeavour to work together in a stable symbiotic relationship beneficial to both? We begin our reflection with fundamental thoughts on education.

Public education has been one of the most important achievements of modern societies. Education, which is about supporting and facilitating learning, has proven beneficial to individuals and societies alike. Several studies show that education positively impacts health, economic well-being, and social integration.

Learning is about acquiring the instruments needed to form independent judgments and decide on proper actions. The main goal of education is to create a society where individuals can think and act independently and assume responsibility. For example, in democratic systems, every individual bears responsibility for their vote and for electing representatives to parliaments and governments. This, however, requires the ability to form judgments, and thus education.

Knowledge is a tool, but not the final goal of learning. Judgments and actions also relate to values. In his Dean’s note, Sandro Galea of Boston University School of Public Health wrote, “Values are what we choose to focus on in a world of limited time and resources.” Values are what motivate us in applying our knowledge when judging and deciding. Therefore, education concerns both knowledge and values.

Nowadays, with the advent of AI and machine learning (ML), we are confronted with a new and challenging problem: machines can also learn, from experience (in the form of data), and thus develop the capability to make independent judgments, decisions, and actions. Therefore, the question is whether humans should also educate machines and how.

In this article, we differentiate between ML (machine learning) and AI. By ML we mean the “study of computer algorithms that improve automatically through experience.” AI is much broader in scope and refers to the science of making computers behave like intelligent agents, with superintelligence (a form of intelligence, not yet clearly specified, superior to human intelligence) as its limit. Therefore, in our context, ML and AI have some commonality but are not equivalent. An intelligent machine learns well; a stupid machine less so. We recognise that an intelligent machine may learn nothing if not provided with the means to learn, e.g., experience in the form of data, while a stupid machine could learn well if given the opportunity of manifold repetition.

In this treatise, we explore the relationships among ML and AI, their morals, and ethics. As such, it is incumbent upon the authors to clarify these concepts and define their relationships. For our purposes, morals are codified systems of principles of right and wrong that guide an individual in actions and deeds. These are principles of the self, independent of their impingement upon others. By contrast, ethics consist of a set of rules of conduct or principles of right and wrong recognised within a given group of individuals or a given society. Ethics and morals can differ: an individual following her morals could act unethically in a given context, and, conversely, what society considers ethical could conflict with an individual’s own morality.

A typical example is abortion. A society might consider abortion ethical under certain conditions, e.g., when pregnancy represents a serious risk for the mother and the fetus. A medical doctor may nevertheless decline to perform an abortion because of their own moral principles, rooted, e.g., in their conscience or religious values. Another doctor may hold different morals and feel that the choice to have an abortion lies entirely with the woman, thus considering abortion acceptable under broader conditions than those of the established ethics.


This differentiation between ethics and morals matters for our discussion because the continuous confrontation between internally codified morals and externally set ethics is the basis for ethical learning. Therefore, as we discuss later, ethical learning involves defining morals, but also a way to measure how actions deviate from the prevailing ethics.

Back to machines. Until now, machines have simply executed our orders. Despite the extraordinary goals reached with machines (flying, landing on the moon, advancing diagnostics and therapies in medicine), machines did not judge or act independently of humans. Humans could entirely specify the framework in which machines judged, decided, and acted, and thus impose the relevant moral and ethical values.



By contrast, AI will very likely turn machines into independent agents with their own learning and decision-making capabilities, possibly able to think with higher capability than humankind (superintelligence). We should all be very concerned about this development and ask ourselves whether we would accept a scenario in which machines judge and act motivated by values different from human values and use those values, in combination with their knowledge, to impact or modify our societies. Philosopher and Oxford professor Nick Bostrom put it clearly: superintelligent AI agents could dominate us and even bring humanity to extinction.

Several studies show that AI systems (ML, deep learning, deep reinforcement learning) are not necessarily ethical. AI systems are trained to achieve goals, e.g., to maximise utility, but the way this happens does not necessarily follow ethical principles or human values. An example will serve to illustrate.

Suppose that a machine is trained to form learning groups at school. Based on the training data, the machine learns that children from low-income families are less likely to succeed at school and, as a consequence, pre-selects those children into specific learning groups to achieve the most efficient learning environment. One could argue that the training data generated a bias and that this bias must be corrected, e.g., using alternative training data. However, even if no bias has been incorporated into the decision process and the machine reached its goal of improving the learning environment, it is not clear that the way this happened is ethically acceptable. Indeed, the selection criterion (perhaps the most powerful predictor in the training data) is not based on children’s learning skills (probably what humans would care about) but on their social status, and this is ethically unacceptable in many societies. More generally, machines could also follow unintended instrumental actions that increase the likelihood of achieving a goal at a later stage, e.g., self-preservation or acquisition of necessary resources at the cost of humans.
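To make the concern concrete, here is a minimal, hypothetical sketch of how such a grouping model could come to rely on social status. The synthetic data, feature names, and mitigation step are our own illustrative assumptions, not a description of any real system.

```python
# Hypothetical sketch: a grouping model trained on historical "success" data
# that correlates with family income rather than learning skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
family_income = rng.normal(size=n)    # social (protected) attribute
learning_skill = rng.normal(size=n)   # what humans arguably care about
# Historical outcomes reflect social advantage more than skill: a built-in bias.
success = (0.8 * family_income + 0.2 * learning_skill
           + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(
    np.column_stack([family_income, learning_skill]), success)
print("learned weights (income, skill):", model.coef_)  # income dominates

# One partial mitigation: exclude the protected attribute before training.
# (Proxy variables can still leak social status, so this alone is not enough.)
fair_model = LogisticRegression().fit(learning_skill.reshape(-1, 1), success)
```

Even the mitigated model only addresses the obvious channel; the ethical question of which criteria should drive the grouping remains a human choice.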

Even if a machine is instructed not to choose unethically in given scenarios (in the form of clearly specified moral norms, e.g., “If this happens, then do not act that way.”), this is not sufficient to avoid unethical behaviour. In many modern applications, AI systems are too complex, and humans might be unable to predict how they will reach their goals. In this case, humans cannot predict and control the full set of possible scenarios a machine will face (lack of transparency). On the other hand, if humans control AI agents so tightly that they never become independent decision-makers, then we probably also limit the results they may achieve.

Therefore, the question of how to ensure that AI agents will act ethically is very challenging, and an answer likely lies somewhere between setting strict rules (regulation) and allowing machines to learn at their full, uninhibited potential.

We are now at the beginning of a great adventure and have a choice about how that adventure is to begin. Will we stand by as AI and its companion ML evolve by their own design, or will we, as evolved creatures, specify the parameters of this evolution so that the amazing results certain to come will enhance human existence rather than constrain it, or, in the abysmal and abhorrent possibility, destroy it?

The normative issue here is that humans should design machines to ensure ethical learning. Machines should learn that certain actions are not ethical and conflict with fundamental values set by humans. Ethical learning is a necessary condition for machines to be beneficial to humans, and for humans to guarantee safety and ensure that machines judge and act motivated by our values. In general, humans will not be able to control each step of machines’ learning processes, because many of those steps will be neither predictable nor transparent to humans. However, humans can impose transparency and predictability with respect to the moral and ethical systems, and impose ethical learning to ensure that machines learn consistently with the chosen moral and ethical systems.

New York University professor Dolly Chugh and organisational psychologist Mary C. Kern, in their work Ethical Learning: Releasing the Moral Unicorn, describe the conditions for ethical learning. First, ethical learning requires a central moral identity, i.e., a set of moral norms with respect to which actions are evaluated. Second, ethical learning requires psychological literacy, i.e., the ability to identify the gap between the central moral identity and the actual behaviours or actions. The absence of psychological literacy could lead humans to deny the gap (self-delusion) in order to limit the self-threat it generates. Finally, ethical learning requires a growth mindset, i.e., the belief that effort and perseverance will succeed in reducing the gap between the central moral identity and the actual behaviour.

When it comes to ethical decision-making (which is not equivalent to ethical learning), American psychologists James Rest and Darcia Narvaez, in their book Moral Development in the Professions: Psychology and Applied Ethics, identify four distinct psychological processes: moral sensitivity (moral awareness), moral judgment, moral motivation, and moral character.

Moral sensitivity relates to psychological literacy, i.e., to individuals’ ability to identify moral issues, which are gaps between the observed behaviour or actions and the moral identity. Moral judgment consists of formulating and assessing solutions to existing moral issues that have moral justification, i.e., are consistent with the given central morality. Moral motivation consists of individuals’ intention to choose solutions that are morally justified over solutions that are inconsistent with the given moral identity. Finally, moral character refers to individuals’ capability (strength, courage) to implement their intentions.

The question now is: how can we translate all these conditions for ethical learning and ethical decision-making into machines? This is a very challenging question.

First, what is the set of moral norms that could be used to define the central moral identity of a machine? The central moral identity is crucial because it guides ethical learning and decision-making, and thus the final outcome of how ethics influence AI agents. The set of moral norms should be general enough to allow for the existing heterogeneity of moral norms among humans, but at the same time specific enough to ensure ethical learning and decision-making.

To see the importance of this initial step, consider American author and biochemist Isaac Asimov's Three Laws of Robotics as possible moral norms for an AI central moral identity:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As discussed by Butler University professors James McGrath and Ankur Gupta in their paper Writing a Moral Code: Algorithms for Ethical Reasoning by Humans and Machines, not only the content of the three laws but also their order is crucial for their implications. For example, putting Law 2 first, followed by Laws 3 and 1, could generate a catastrophic world in which humans exploit machines to harm other humans. Therefore, moral identity should be implemented very carefully. However, even if the relevant set of moral principles is identified and carefully implemented, this could be insufficient to induce proper ethical behaviour over time. Indeed, rule-based ethics has many limitations; it can, for example, be too restrictive or insensitive to consequences.
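The point about ordering can be illustrated with a small sketch. The encoding below is our own simplification (not taken from the cited paper): each law is a function that either forbids, permits, or abstains, and the first law with an opinion decides.

```python
# Illustrative only: Asimov's three laws as an ordered list of rules.
def permitted(action, laws):
    """Check laws in priority order; the first law with an opinion decides."""
    for law in laws:
        verdict = law(action)
        if verdict is not None:
            return verdict
    return True  # no law objects

def law1(action):  # a robot may not injure a human being
    return False if action["harms_human"] else None

def law2(action):  # a robot must obey orders given by human beings
    return True if action["ordered_by_human"] else None

def law3(action):  # a robot must protect its own existence
    return True if action["protects_self"] else None

attack = {"harms_human": True, "ordered_by_human": True, "protects_self": False}
print(permitted(attack, [law1, law2, law3]))  # False: harming humans is vetoed first
print(permitted(attack, [law2, law3, law1]))  # True: obedience now overrides the harm veto
```

The rules themselves are unchanged in both runs; only their priority differs, and so does the verdict.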

For example, if a rule says that attempting to murder is unacceptable regardless of the outcome, an AI agent will not try to protect a human being if doing so might require attempting to murder another human being. The resulting inaction, however, might itself be considered unethical in certain societal contexts, much like a police officer who would not act against a criminal killing innocent people.

Therefore, our view is that ethical AI should be a mix of rule-based ethics and learning from actions and consequences. This puts ethical learning and decision-making at the core of ethical AI. That is to say, the central question in this context is how decision algorithms learn, i.e., how they are trained, including which datasets are used to train them and how rewards are set. As an example, the design of AI systems should prevent self-delusion, e.g., AI agents modifying the environment in order to increase their reward without reaching the intended goals. As previously discussed, self-delusion is also present in humans, who deny reality or ignore moral issues (exhibit no moral awareness).
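One simple way to picture this mix is a reward function that combines a hard rule-based veto with a learned, graded ethical signal. The function names and the penalty weight below are illustrative assumptions, intended only as a sketch of the idea.

```python
# Hedged sketch: combining rule-based ethics (hard veto) with learned signals.
def shaped_reward(state, action, task_reward, violates_rule, ethics_gap, penalty=10.0):
    """Task reward, adjusted by rule-based and learned ethical components.

    violates_rule: hard moral norm of the form "never do X" (rule-based ethics).
    ethics_gap:    learned estimate of the gap between the action and the
                   machine's moral identity (the measured deviation from ethics).
    """
    if violates_rule(state, action):
        return float("-inf")  # hard norms are never traded off against utility
    return task_reward(state, action) - penalty * ethics_gap(state, action)
```

Under these assumptions, the agent cannot raise its reward by breaking a hard norm, although it could still game the learned ethics_gap estimate, which is precisely why rigorous measurement matters.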

Back to our central question: How should ethical learning and decision-making be implemented in AI? We apply Rest’s framework for ethical decision-making.

Moral awareness in AI-driven applications requires a moral identity, i.e., a set of moral norms. Initially, these norms could be a set of rules designed to reflect the relevant ethical principles in the context of the given application. However, moral identity is not fixed; it will evolve with experience and learning. As an example, a rule could say that attempting to murder is unacceptable. However, if the AI application is a robot that must prevent crimes, it must be trained to refine this rule because, as mentioned before, the rule is too strict for the purpose of the given application. This, again, puts ethical learning at the core, and thus the way the model is trained also becomes crucial. Indeed, the AI model should be trained on a broad variety of data to limit potential biases, because data generated from humans’ actions is not necessarily free of ethical issues (e.g., police officers who killed following misjudgments). The data should cover several parameters of interest and dimensions. Ethical values should be incorporated into the training algorithms, so that the machine’s moral identity converges through learning towards the relevant and intended ethical setting.

Moral judgment should be assessed in AI-driven systems using rigorous measurement systems, with the goal of detecting deviations between the established moral identity, the behaviour, and its consequences. Moral motivation should be implemented by setting ethical rewards, i.e., solutions consistent with the given moral identity should be preferred by AI agents. Finally, moral action should be taken after evaluating various ML models against each other.
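As a sketch of what evaluating models against each other might look like in practice, the scoring below trades off task utility against measured ethical deviations. All names here (decide, payoff, is_violated) are hypothetical placeholders, not an established API.

```python
# Hypothetical evaluation harness: compare candidate models on both task
# performance and deviations from the stated moral norms.
def ethical_score(model, scenarios, norms, alpha=0.5):
    violations, utility = 0, 0.0
    for s in scenarios:
        action = model.decide(s)
        utility += s.payoff(action)
        violations += sum(norm.is_violated(s, action) for norm in norms)
    # Lower is better: weigh ethical deviations against achieved utility.
    return alpha * violations - (1 - alpha) * utility

# best_model = min(candidate_models, key=lambda m: ethical_score(m, scenarios, norms))
```

How alpha is chosen, and by whom, is itself an ethical decision that the surrounding text argues must remain with humans.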

Thus we have attempted to set forth threads of ideas to suggest, stimulate, and, yes, begin to create an image on the tapestry of the mind, so that others may join with us to fulfill our dream of a future with ML and AI in the service of mankind. This is your invitation to cut cloth with us and turn the dream into reality.

Enrico De Giorgi is professor of mathematics at the University of St. Gallen, Switzerland, and co-founder of AI Ethics Initiatives. Chitro Majumdar is the chief founder & chief strategist on model risk at RsRL and co-founder of AI Ethics Initiatives.
