Just Another Artifact:

Ethics and the Empirical Experience of AI

Joanna Bryson

Phil Kime

University of Edinburgh, UK

In March of 1995, Daniel Dennett chaired the EPIIC Workshop on Darwinism and Artificial Intelligence at Tufts University. Participants besides Dennett included scientists and journalists such as Murray Gell-Mann, David Haig, John Holland, Kevin Kelly, Pattie Maes, Bruce Mazlish, Marvin Minsky, Hans Moravec, Seymour Papert, Oliver Selfridge, Sherry Turkle and Karl Sims. On the second day of the event the group began discussing a future world in which artificially intelligent machines made all critical decisions, and how these machines would inevitably decide humanity was not worth supporting. One of the participants, Professor Rodney Brooks (now director of the MIT Artificial Intelligence Laboratory), was called "conservative" for venturing the opinion that this might not happen (Dennett, 1998).

This was not an isolated event. On October 5, 1994, Vernor Vinge (a professor at San Diego State University) gave a lecture at MIT entitled The Coming Technological Singularity: How to Survive in the Post-Human Era. His abstract was as follows:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended… Can the Singularity be avoided? If not to be avoided, can events be guided so that we may survive? What does survival even mean in a Post-Human Era?"

Not all computer scientists consider world conquest by machines probable, or even possible (See, for example, (Harvey, 1998), in response to (Warwick, 1997).) However, such fears have been a persistent part of our culture, not only in fiction but also in scientific writings. (See (de Garis, 1990) for a summary.) What can lead even computer scientists to believe that AI endangers our society? Computer programs, including those classified as "artificial intelligence", are purpose-built artifacts designed and controlled by human beings. While computers can accelerate and magnify our mistakes to do more damage than an unaided individual is capable of, the same could be said of levers and pulleys.

We believe exaggerated fears of, and hopes for, AI are symptomatic of a larger problem -- a general confusion about the nature of humanity and the role of ethics in society. To the category of "exaggerated fear" we assign the notions of ambitious or machine-loyal artificial intelligences that make selfish decisions about the relative importance of their own intelligence, growth or energy. The category of "exaggerated hopes" includes the expectation that machine intelligence will perpetuate either individual or planetary experience and culture past the normal life expectancy of the individual or the species. Our thesis is that these are false concerns, which can distract us from the real dangers of AI technologies. The real dangers are no different from those of other artifacts in our culture: from factories to advertising, weapons to political systems. The danger of these systems is the potential for misuse, either through carelessness or malevolence, by the people who control them.

The following section begins our argument concerning exaggerated expectations (both positive and negative) with a brief perspective on the function and implementation of ethical systems in society. We conclude that an ethical society is built from each individual's internal sense of obligation. This obligation in turn results from an individual's identification with the object of that obligation. We next use this result to attempt an understanding of the misplaced hopes and fears for AI, which, we argue, come from individuals' inappropriate identification with machine intelligence. From here we move to propose how an AI system might in fact be used to help us rationalize our ethical systems and properly address the real threats to our society and cultures, including those threats stemming from the misuse of advanced technology like AI. This proposal involves using empirical experience with real AI systems to help individuals' natural ethical systems more correctly assess their relation to machines.

To some, addressing issues of AI as a question of psychology and ethics may seem like an ill-advised reduction of a technical problem to an insubstantial domain. However, studies of morality, society and personality are significantly older than those of computer science and AI, and in many cases have been explored much more deeply. The disadvantage of these domains has more to do with the comparative difficulty of research and, where a notion of evidence is appropriate at all, the complexity of that evidence. We are aware that the nature of the present paper is primarily speculative -- we derive its content from a fundamental approach to the psychology and philosophy of ethics, and from a strong familiarity with the disciplines of artificial life and artificial intelligence and their research communities. The final section, however, does propose some testable hypotheses, and we are actively engaged in preparing the opportunities to pursue this research. We also hope this paper itself will stimulate discussion and further research.

 

Ethical Obligation

Understanding ethical issues requires an understanding of ethics itself, particularly its function. Most social and philosophical research in this area holds that the primary or even sole purpose of ethics is to maintain a functional degree of social homogeneity in the social organism (a classic theme in functionalist sociology), or to protect it (a central theme of classic works such as Hobbes' Leviathan). In other words, ethics has evolved as a mechanism of human social cohesion, without which a society disintegrates. Some of the details of an ethical system may consequently be seen as somewhat arbitrary. As with any evolved order, there is a purely contingent historical element with respect to which rules are subsequently subjected to evolutionary pressure. That is, the pressures of "speciesisation" (in this case cultural and/or family identity) may lead to baroque elements. However, some concepts are nearly universal, such as the prohibition of necessarily socially destructive behaviours like murder, as well as some key unifying forces, such as family and religion.

One important aspect of any ethical system is that it should be as self-regulating as possible (Hofstadter, 1985). Socially speaking, external control is far too resource intensive. For example, if we were prevented from crime merely by the threat of external punishment rather than by internalized codes of ethics, the police force would have to number a good percentage of the population, and would itself be very difficult to regulate. The observation that this is increasingly the case in areas of cultural diversity indicates that some mechanisms of imposed internal regulation (notably religion and fear of authority) have become less effective in modern, information- and experience-rich environments. However, many forms of individual ethical behaviour are generated without conscious weighing of negative consequences to the individual. Most people "naturally" conform to such social expectations as protecting and supporting their own family members, or taking care not to damage the property of others or the community.

The primary basis for such ethical reactions is often empathy. We care for people or objects that we would feel bad for if they were hurt or damaged. Thus ethics requires some degree of perceived analogy between ourselves and that with which we empathise. For example, our empathy and our sense of ethical obligation tend to be highly correlated, with ourselves and our families tending to be at the top, followed by our neighbours and other people with whom we acknowledge commonality. This can be attributed to the ease of identification in such cases. Children have well-documented, life-long identity confusion with their parents, possibly founded in a failure of categorisation in the infant's discrimination of self from other. (Parents, perhaps confusingly, often have the same goals as the child, such as keeping it warm and happy: discriminating self on the basis of reaction to intention therefore becomes difficult.) In turn, parents can view their children as perpetuations of their own bodies and even lives.

If such identification can be seen to scale throughout the community, as ethics and empathy do, we can claim a likely strong relationship between identification and an internalized ethical sense. If we come to understand our more complicated relationships with friends, state and religion in terms of transference or metaphor with familial relationships (as might be suggested by Lakoff & Johnson, 1980), then we might quite literally feel that actions that benefit these select others are in our own best interest. If self-interest is the root of our ethics, then it is easy to see how ethical systems can be self-regulating.

 

The Consequences of Over-Identification with Machine Intelligence

What could lead us to over-identify with machines? Quite simply, a misunderstanding that puts the capabilities of language, mathematics and "reason" as the key characteristics of human life. This attitude is a consequence of our tendency to separate ourselves from animals: it is itself a consequence of our ethical system. To form a human society, one needs to value the lives of the humans in the community over the lives of other animals. However, choosing this particular metric as an identity criterion has arguably led to an undervaluing of the emotional and aesthetic in our society. Consequences include an unhealthy neglect and denial of emotional experiences. Further, we have difficulty understanding the behaviour of our companions and ourselves as we attempt to impose rational/intentional models on what may well be instinctive, emotion-driven and physical responses. Such failures of understanding can lead to poor predictions and unnecessary conflicts.

If, as the last section suggests, identifying with a person or object is central to our sense of ethical obligation, then over-identifying with a machine displaying some aspect of artificial intelligence holds two dangers. First, we may believe the machine to be a participant in our society, which might seriously confuse our understanding of such machines, and of their potential dangers and capabilities. Second, we may over-value the machine when making our own ethical judgements and balancing our own obligations.

The statement that we over-identify with machine intelligence is, of course, itself a judgement. This evaluation can be made on two different levels, one technical and the other ethical. The technical is easier to demonstrate: the general population ascribes much higher levels of intelligent capability to machines than the machines generally possess. The ethical evaluation is more subjective. It is quite possible that some percentage of the people reading this paper would consider the aspects of culture potentially embodied in computer programs as equally or more valuable than some or all individual human beings. We do not address this issue directly here, but rather seek only to point out that this problem is not restricted to AI: it is a problem for many forms of artifact (including fine art and political systems). The technical argument is sufficient for the remainder of this section, which focuses on fears and hopes specific to AI that we attempt to show are unfounded.

Reductionism and the Loss of Free Will

One source of fear about AI is simply that, if it works, it proves that our intelligence (which, as we mentioned above, we often rashly consider to be definitive of human mentation) is the inevitable consequence of the interaction of simple, deterministic parts. It is the "inevitable" and "deterministic" parts of this description that trouble people; this would seem to prove that free will is an illusion, which would then (the argument goes) eliminate the concept of personal responsibility. Not only that, but as engineers, we might also create objects that were not responsible for anything they might do.

Responsibility in this sense, however, is definitional, in that it is a socially assigned role: we are responsible for what we do if society considers us to be. Similarly, free will is a powerful concept that helps us organise our own behaviour, regardless of arguments as to its reality. Given the state of AI today, the fear that AI might reveal underlying determinism in our own behaviour is premature. But could this, in theory, become a real problem? We think not. Any possible future account of the low-level deterministic elements constituting our behaviour would almost certainly be so complex that the decomposition in any particular case would be impossible. It is not something that could give rise to anything more than weak concern, akin to the concern students feel when they are told that everything is made of atoms: will their books fall through their desks? The atomicity of solids is never even vaguely apparent in a concrete case, and thus this angst can never gain a hold on us. Likewise, functionalism will not lead to perfect prediction and control of human behaviour. The incredible complexity of the human brain implies that such predictions will always be easier to make from statistical evaluation of observed behaviour than from a deterministic understanding of the workings of brains or intelligence. To put this another way, from the individual perspective free will continues to be definitive of our qualitative experience, and this is an important factor in prediction, regardless of whether it is a true description of the actual situation. More concisely, it is epistemology, rather than some abstract truth, which governs the prediction of human behaviour.

Identification and Obligation

There are two possible consequences of over-identifying with a machine. The first is that it lowers one's own opinion of oneself; the above-mentioned fear of functionalism is an example. The other possible consequence is the incorrect elevation of the worth of the machine. This can be the basis for both exaggerated fears of, and affinity for, the results of artificial intelligence research. For example, in identifying with our machines, we endow them with the rights and privileges of ethical status. We are somewhat embarrassed by this sense of obligation, and so try to rationalize it while at the same time distancing ourselves from it. Many people think it would be unethical to unplug a computer if it were conscious. However, consciousness is famously awkward to define: for most people it is wrapped up with the concept of a soul. What if we simply define consciousness as a sort of self-awareness, the maintenance of a record of one's own state? Then, if a machine has a record of its most recent actions, is it unethical to unplug it? In fact, logging actions and internal state is the primary way to make a computer program safe against abrupt termination, because it can then be reproduced exactly on the same machine, or on a compatible one, at a later time. So, as an example, this shows a beneficial by-product of conscious machines, given a certain definition of consciousness. Of course, we are not suggesting that this definition or anything else so simple-minded could possibly be adequate. However, it does show that certain of the concepts which we hold dear in ourselves, as against machines, can, if construed in pragmatically sensible ways, exhibit characteristics which also seem to require some sense of obligation.
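To make the logging idea concrete, the following is a minimal sketch in Python (our own illustration; the file name, class and fields are hypothetical, not drawn from any existing system) of an agent that records its internal state and recent actions, so that after being "unplugged" it can be reproduced exactly on the same or a compatible machine.

    import json

    CHECKPOINT_FILE = "agent_checkpoint.json"   # hypothetical file name

    class Agent:
        def __init__(self, state=None, history=None):
            self.state = state if state is not None else {"energy": 100}
            self.history = history if history is not None else []   # record of recent actions

        def act(self, action):
            self.history.append(action)    # log the action before applying it
            self.state["energy"] -= 1      # toy change to internal state

        def checkpoint(self):
            # Persist internal state and the action log; this is what allows the
            # agent to be reproduced exactly after an abrupt termination.
            with open(CHECKPOINT_FILE, "w") as f:
                json.dump({"state": self.state, "history": self.history}, f)

        @classmethod
        def restore(cls):
            with open(CHECKPOINT_FILE) as f:
                data = json.load(f)
            return cls(state=data["state"], history=data["history"])

    agent = Agent()
    agent.act("greet user")
    agent.checkpoint()
    revived = Agent.restore()    # an exact reproduction of the "unplugged" agent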

Failing to recognize fundamental differences between human and machine intelligence leads to other mistakes. For example, because we consider human intelligence to be the only kind of intelligence, and humans are infamous for desiring power, we assume any other intelligent system desires power. Deep Blue can defeat most humans at chess, but it has absolutely no representation of power or human society anywhere in its program, with the possible exception of the metric values of various chess pieces and positions. Your calculator can do arithmetic you would be completely unable to do without the assistance of tools, yet it is ludicrous to ascribe any form of desire whatsoever to a calculator. Aspects of cognition do not automatically come with others: our particular mix is the product of millions of years of evolution.

Of course, it is easy to program a computer to print, or even say, "I want to rule the world!" It is also possible to program a system to preferentially select behaviours that give it more power or resources. For example, Tom Ray's Tierra (Ray, 1990) is a system that essentially develops programs that compete for computer resources such as disk space and processing time. The internet worm of the 1980s also competed for processor time; in fact, it disabled many computers world-wide by monopolizing their processors. However, this happened without intent: the program was designed to move from processor to processor, but a mistake in the program led it to replicate on a single processor until that machine was essentially disabled. The program no more intended to stop other processes than a bomb intends to destroy a city.

The internet worm demonstrates another aspect of identifying with artificial intelligence. Its creator desired to have a copy of his program running on as many different machines as possible. This ambition for a creation is again similar to the ambition parents may impose on their children. We would like to be more intelligent, to live longer, to be stronger; if not actually ourselves, then our progeny with whom we identify should have these characteristics. One consequence is that both scientists and science fiction writers sometimes speak of self-replicating spaceships becoming the ultimate receptacle of Earthly intelligence (as if surviving to the end of the universe were significantly better than surviving to the end of the solar system!). Science fiction has also thoroughly examined the idea of robots that are deserving of citizen status. It is important to understand that these works of literature are exploring what it means to be human, not what it means to be a computer.

In fact, machines are created to fill useful roles (or occasionally as art). We develop artifacts to perform tasks for us, and while they may eliminate the need for various human labours, they do not eliminate the need or desire for us to live our lives. The threat, then, would not be that machines will take over the world out of maliciousness, but that every human endeavour will eventually be "better" accomplished by a machine. What makes this concern mistaken is that what ultimately matters to us is not the actual accomplishments of our lives (for which there is no real, objective metric of value) but the performing of the actions that lead to those accomplishments. What we value is what we actually sense; it is an aesthetic and emotional experience (Burns, 1969). What, for example, happens when people choose to write a letter by hand instead of sending email, because writing "loses some of its essence" when it is too easy? Perhaps they are eccentric, but perhaps they have recognised something extremely valuable about their own experience, and that of their letter's recipient. It is ludicrous to think of a machine falling in love for you, or of it enjoying victory or gossip for you. An obsession with the results of action rather than the actions themselves is not the fault of AI, but a problem our culture needs to address. If AI puts this crucial issue into sharp relief, all the better.

 

AI and Ethics

As we stated earlier, our argument is not that there are no ethical considerations in creating and using AI technology. Rather, we are arguing that the nature of those ethical obligations is often misconstrued. We should neither fear the motives of a system nor trust its common sense. AI systems need not have either "motives" or "common sense", but if we do choose to create systems that could meaningfully be described in such terms, we should take the same precautions against their potential flaws as we do against similar human errors and fallibilities. Nondeterministic computing should be audited regularly, by humans, and possibly monitored by other parallel programs with substantially different architectures. These kinds of precautions are already standard even for conventional computer programs in critical applications, such as manned space flights.
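As a sketch of what such a precaution might look like in practice (our own illustration, with entirely hypothetical function names, readings and tolerances), two independently written routines compute the same quantity, and any disagreement between them is referred to a human auditor rather than acted on automatically:

    def controller_a(readings):
        # First implementation: arithmetic mean of the sensor readings.
        return sum(readings) / len(readings)

    def controller_b(readings):
        # Structurally different implementation: the median of the readings.
        ordered = sorted(readings)
        return ordered[len(ordered) // 2]

    def monitored_decision(readings, tolerance=0.5):
        a, b = controller_a(readings), controller_b(readings)
        if abs(a - b) > tolerance:
            return None, "refer to human auditor"   # disagreement: no automatic action
        return (a + b) / 2, "accepted"

    print(monitored_decision([1.0, 1.1, 0.9]))   # agreement: automatic action allowed
    print(monitored_decision([1.0, 1.1, 9.0]))   # disagreement: human audit required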

AI systems are already being used by major industries. For example, computer manufacturers use expert systems (AI programs designed around some notion of "common sense") for checking circuit designs. But this is just one step in the manufacturing process; real circuits are later built and tested. Similarly, credit card companies now use machine learning to build profiles of their customers' spending habits, in an attempt to recognise as soon as possible whether a card is being used by a thief. But no one would ever be arrested directly as a consequence of these programs: exceptions are simply flagged and turned over to human account officers, who phone the customer and verify whether they were making the unexpected purchases. These are examples of AI systems reasonably and responsibly integrated into our culture. There is no reason to expect a disturbing qualitative jump in practice, because these systems have been integrated through usage and empirical experience rather than through theoretical assurances of desirable attributes.
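The credit card example follows a human-in-the-loop pattern that can be sketched minimally as follows (our own illustration, assuming a crude statistical profile rather than any particular company's method): the program only flags an exception for a human officer, it never takes punitive action itself.

    from statistics import mean, stdev

    def flag_for_review(history, new_amount, threshold=3.0):
        # Flag the purchase if it deviates strongly from the customer's past spending.
        mu, sigma = mean(history), stdev(history)
        return abs(new_amount - mu) > threshold * max(sigma, 1.0)

    past_purchases = [12.50, 30.00, 22.40, 18.75, 25.10]
    for amount in (27.00, 950.00):
        if flag_for_review(past_purchases, amount):
            print(f"{amount:.2f}: flagged -- a human account officer phones the customer")
        else:
            print(f"{amount:.2f}: within profile -- no action taken")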

What about our ethical obligations to intelligent systems? To answer this we must refer back to the fact that ethical systems can be seen to be somewhat arbitrary, with their main surface attribute being the maintenance of social order. Since we are provided with a new ethical quandary, we are to some extent free to create a new ethical standard. This standard should be consistent with our overall code of ethics, and should presumably contribute towards a social order we find desirable. As we have argued, the change need not be strongly qualitative; there need be no harsh transition. Instead, it is an ongoing process in which we are already participating.

Ethics generally leads us to be altruistic towards things that are like us. If we decide that the axis of "likeness" is genetic or biological, then we will always have absolutely no ethical obligation towards an artifact. But what if we choose our standard to be culture? After all, much of what we are -- our language, our music, our socialising -- is less inborn than learned. If an artifact becomes a vessel for our culture, should we treat it with the same respect as a person? This question is already with us. Books, buildings, art, songs and languages all embody the intellectual output of members of our species. If it were unethical to destroy a building or to burn books, then it would be unethical to destroy a machine that retains and communicates the same kinds of information.

The difficulty with choosing the retention of culture as a criterion for ethical consideration is that it raises the possibility that some machine could come to be considered more important than some human. Before dismissing this idea out of hand, consider that, again, this problem is already with us. We routinely place human life as significantly less important than sustaining cultural artifacts such as political systems, economic systems, and religions. Whilst warfare may be considered unethical, people are far less inclined to deem it so when it is seen as being defensive, when an aggressor is threatening an economy or a religion. But even a defensive war is still a sacrifice of life to a cultural artifact. It is justified because the cultural artifact is seen to so enrich the lives of the population as to merit the risk of loss of life to some percentage of them. A classic case is cars. Road accidents claim many lives every year all over the world, and the surest way to prevent them would be to ban cars. However, even ignoring the enormous political and economic pressures, the convenience of this particular artifact is perceived as being so great that human life is, in some sense, less important.

Another possible criterion of ethical status is that of contributing to culture rather than simply retaining it. We suggest that AI is a tool of creativity, not a creator in itself; the creators are those who designed the AI system. This would place a "creative" AI system on the same standing as any work of art or scholarship, for example the Mona Lisa or "On the Origin of Species". Regardless of the model of contribution, the moral conundrums are the same here as above. The question is, if an artifact is retaining or generating more intellectual information than a person, should people be allowed to die to preserve it? This is a difficult problem, but as we have said, one that faces us with or without AI. Resources are spent to protect the Mona Lisa that could in theory be spent on medicine or food. In AI we actually have a luxury not afforded to da Vinci -- we can to some extent resolve the difficulty by ensuring our systems are not unique. With computer science we have the ability to ensure that every aspect of our work is replicable and "backed up". Even if a system learns from irreproducible experience, the resulting internal hardware states could normally be preserved. So, in a sense, AI reduces the possibility of such ethical problems. Replicability is replaceability, and thus deciding between things is easier. Indeed, this important property of computer-based systems allows us to be positively biased in favour of people, as they are unique and qualitatively irreplaceable.

If AI machines could generate scientific theory, art, money, or some other cultural commodity significantly faster than humans, would we not be obligated to devote all of our resources to building such machines? Obviously not, since, as we have suggested, a purpose of ethics is to maintain social order, and as such it has always involved balancing obligations to various sources: to yourself, your family, your country. There can be no reason except nihilism for human civilisation to choose a set of ethics that values artificial intelligence to the exclusion of our own existence.

The point here is not that there is no problem: it is rather that the problem we are frightened of is actually one we already have. The issue is seeing this and integrating accordingly, rather than worrying about new and dangerous ethical conundrums that may arise. The illusion of a terrifyingly different world in which we are at the mercy of machines is simply a trick of perspective: we imagine the far future and compare it to the present without taking into account the intervening time of gradual change. It is this intervening time that we are currently in (indeed, in a sense, we are always in "intervening" time) and so the process is underway. There is no Singularity.

 

Ethics and AI

In fact, our concluding proposal is that AI could be used to help rationalize human ethical decisions. As mentioned, our understanding and technology have given us the means to do extensive damage not previously possible, but they also allow us to understand our world, including ourselves, as never before. As we come to understand the evolution of our culture and society, we can also make decisions and take actions that have a direct impact on that evolution.

As we have indicated earlier, one of the key problems for our society is the development of internalized ethical systems. In particular, several traditional means of communicating imposed senses of identity, such as religion and family, are severely challenged by our new densely populated, multi-cultural, informed and empowered citizenry. The issue of forming identity is now more than ever an issue for public education. It is important for students to undertake to understand themselves, including understanding their dependence on their society and environment. Artificial intelligence is in fact an ideal tool for working on this problem. Already, students learn through computer simulations about dynamic systems such as the global environment or local ecosystems. Another important step might be the introduction of agent systems modelling the students themselves in the classroom. If students were allowed to build "portraits" of themselves and each other, this would offer both an opportunity to learn about their own behaviour, and a way to reduce the confusion between mechanism and human. An AI agent might look and sound like the student it portrays; it might say it likes the same things when asked; it might even "play" with other agents that are portraits of the real students' friends. But its behaviour is under human control, and brings no direct physical benefit. In the future, children should have no more difficulty disambiguating an AI portrait from a friend than they would a painted one.
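As a purely hypothetical sketch of such a classroom "portrait" (nothing of the kind is specified in this paper; all names and behaviour are illustrative), the agent below only echoes the preferences its student author scripted for it, making the human control explicit:

    class PortraitAgent:
        def __init__(self, name, likes):
            self.name = name
            self.likes = list(likes)    # preferences entered by the student, not learned

        def answer(self, question):
            if question == "what do you like?":
                return f"I like {', '.join(self.likes)}."
            return "I don't know; ask my author."   # the portrait has no opinions of its own

        def play_with(self, other):
            shared = sorted(set(self.likes) & set(other.likes))
            topic = ", ".join(shared) if shared else "nothing in particular"
            return f"{self.name} and {other.name} talk about {topic}."

    alice = PortraitAgent("Alice's portrait", ["football", "drawing"])
    bea = PortraitAgent("Bea's portrait", ["drawing", "chess"])
    print(alice.answer("what do you like?"))
    print(alice.play_with(bea))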

Our conclusion, then, is that in creating and using intelligent artifacts we do need to consider ethical and social dangers, but in no greater sense than we would for "conventional" technology. If they control our defence systems, our utility supplies or our political propaganda, we should take exactly the same care with them as we currently should with our computers, piping and media. To the extent that they serve, like us, as vessels of our culture, we owe them the same respect we owe libraries, architecture and works of art. We believe the extent to which people feel exaggerated fears or obligations with respect to AI and robotics is a consequence of their over-identification with the artifacts due to superficial, unfamiliar similarities such as a machine's "use" of language. It also indicates our society's uncertain ethics, which, having so far largely evolved unguided within cultures and societies, is currently hard pressed to keep up with the rate of cultural change. The empirical experience of AI can help us to understand intuitively the nature of the various relations that are potentially ethically significant.

We have emphasised the importance of identity born of empirical experience in forming ethical obligations in new situations, and suggested that experience and understanding of real AI programs may actually assist us in our ethical quandaries. They could help us form better models of what it is to be human, and thus provide us with a better basis for empathetic decisions. Empathy, and therefore ethics, is dependent on real experience rather than theoretical urgings. Those worried about the acceptability of AI technology in our culture should worry less about anthropomorphic fallacies and more about making AI visible and understood where it already exists.

 

Acknowledgements

Though the ideas in this paper largely came from discussions between the authors, its motivation came from discourse and disagreements with Rodney Brooks and Anne Foerst. The ongoing recent debate between Kevin Warwick and other members of the British AI community (particularly Alan Bundy) has also contributed both information and motivation. Thanks also to Daniel Dennett and Oded Maron for information used in the original draft; and Will Lowe, Mark Humphrys, Chris Malcolm, Liselotte van Leeuwen and Joanne Williams for useful comments and discussion.

 

References

Burns, Robert. (1969). Poems and Songs. Oxford University Press, Oxford, James Kinsley edition.

de Garis, Hugo. (1990). The twenty-first century artilect: Moral dilemmas concerning the ultra intelligent machine. Revue Internationale de Philosophie.

Dennett, Daniel C., ed. (1998). Artificial Life: The Tufts Symposium. Oxford University Press, Oxford.

Harvey, Inman. (1998). Backpropagation. AISB Quarterly, 10(99). Letter to the editor.

Hobbes, Thomas. (1947). Leviathan. London, Michael Oakeshott edition.

Hofstadter, Douglas R. (1985). Metamagical Themas: Questing for the Essence of Mind and Pattern. Penguin.

Lakoff, George and Johnson, Mark. (1980). Metaphors We Live By. University of Chicago Press, Chicago, Illinois.

Ray, Thomas S. (1990). An approach to the synthesis of life. In Christopher G. Langton, Charles Taylor, J. Doyne Farmer and Steen Rasmussen (eds.), Proceedings of Artificial Life II, pages 371-408. Addison-Wesley. Appeared 1991.

Warwick, Kevin. (1997). March of the Machines. Century.