Building Persons is a Choice

Joanna J. Bryson

An invited commentary on Anne Foerst, "Robots and Theology", to appear in Erwägen Wissen Ethik, November 2009

 

Although Foerst has provided an interesting account of robotics and personhood, I believe her key claim is this: that science is descriptive, while theology provides meaning (¶61). ``Whenever science leaves [description] ...and attempts to construct meaning, it ceases to be science and enters the religious realm. Therefore, it is important to let the religious and the scientific ...stand side by side to see where they can mutually enrich each other, without convoluting their very different spheres.'' It is possible that science cannot provide meaning; whether it can depends on axioms there is no need to debate in this commentary. Regardless, science can and should explain why meaning is so often an essential motivation for human adults. This is a basic question of human psychology and behaviour.

Meaning is the motivation for rational action. Without meaning we lose motivation, leaving our propensity to act inhibited -- the definition of depression. Creating life is of course the ultimate biological motivation; life can nearly be defined through reproduction (1). However, biological success cannot be measured simply by counting an individual's children. Some species invest small amounts of resources in each of a large number of offspring, while others, like ourselves, invest enormous resources in a small number.

In the eusocial insects (such as some ants, bees and termites) the majority of individuals apparently sacrifice their own reproductive capacity for the good of the hive or nest. This lifestyle is not actually an evolutionary sacrifice, though, since such systems originate only under strict monogamy (2). In insects, monogamy can be enforced physically. Where reproduction is sexual and monogamy is guaranteed, an individual's siblings are exactly as related to it as its own children would be: 50%. Siblings, however, do not require the payment of direct reproductive costs. And in such species the queen mother can live for decades, so even generational considerations favouring youth do not apply.
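
The arithmetic behind this claim is standard kin-selection bookkeeping, and worth spelling out (the reconstruction here is mine, not Foerst's or Hughes et al.'s). A randomly chosen allele in a focal individual came from its mother or its father with probability 1/2 each, and under strict monogamy a full sibling inherited a copy of that same parental allele with probability 1/2, so

\[
r_{\mathrm{sibling}} \;=\; \tfrac{1}{2}\cdot\tfrac{1}{2} \;+\; \tfrac{1}{2}\cdot\tfrac{1}{2} \;=\; \tfrac{1}{2} \;=\; r_{\mathrm{offspring}},
\]

since a parent likewise transmits any given allele to a child with probability 1/2. Without guaranteed monogamy the paternal term shrinks (half-siblings share only 1/4), and siblings become worth less than children.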

Humans are not strictly monogamous, but our rational capacities both allow and incline some of us to value shared ideas as highly as shared genes. Arguably, humanity has become a vehicle not only of biological evolution, but of cultural evolution as well (3,4). Culture -- a collection of ideas, not genes -- appears to allow humanity to act with levels of altruism almost as high as those of the eusocial insects. We have become similarly, explosively successful, coming to dominate a significant proportion of the Earth's biomass.

If culture can be viewed as an evolutionary system, then it is not surprising that we place value on artifacts that propagate our ideas into the future. Such artifacts can be stories, songs, letters, inventions, articles, religions and, yes, potentially robots. The anthropologist Helmreich (5) has suggested that the desire to create Artificial Life is strongest in researchers who are middle-aged men fixated on an individual capacity for creating life. But judging by the popularity of robots, the desire for this form of propagation is strong in a large section of society.

Saying that a robot can be a vehicle of culture, or even a producer of culture, is not the same as arguing that it should be a person. Nor does it mean that it would be ethically correct for us to build robots towards which we owe ethical obligations (6). Robot-oriented ethics are fundamentally different from ethics involving other intelligent entities, because robots are by definition artifacts of our own culture and intelligence. We do have almost as much control over many other species, and sometimes other persons, as we do over robots: we -- as individuals and as societies and cultures -- regularly decide the amount of resources (including space and time) we are willing to allocate to other people and animals. But individual members of biological species hold exquisitely complicated and unique minds and cultures, and when these are eliminated they can never be fully replicated.

In the case of robots, the minds are not there yet, and the culture they would affect (if we choose to allow them to) would be our own. We own robots. We design, manufacture and operate them. They are entirely our responsibility. We determine their goals and behaviour, either directly or indirectly through specifying their intelligence, or even more indirectly by specifying how they acquire their own intelligence. But at the end of every indirection lies the fact that there would be no robots on this planet if it weren't for deliberate human decisions to create them (7).

One of the things we can decide as a society is whether to make robots irreplaceable. Making them so would be irresponsible. There is an old undergraduate-party question about whether to save a person or the last copy of Shakespeare from a burning building. Of course, this is only a party question -- no one is ever likely to be alive in a room with the last copy of Shakespeare. Robot builders can adopt a similar strategy to publishing in order to make robots to which no one owes ethical obligations. A robot's brain should be backed up continuously off-site over a wireless network; its body should be mass-produced and easily interchangeable. No one should ever need to hesitate an instant in deciding whether to save a human or a robot from a burning building. Robots should be utterly replaceable. Robot owners should never need to fear that their robots might suffer or `die', even if the rest of their possessions are destroyed.
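
To make the engineering point concrete, here is a minimal Python sketch of such continuous off-site backup. Everything in it is illustrative: get_mind_state, the backup URL, and the one-second interval are hypothetical placeholders, not any real robot's API.

    # A sketch of continuous off-site mind backup, so the body stays expendable.
    # get_mind_state() and BACKUP_URL are hypothetical placeholders.
    import json
    import time
    import urllib.request

    BACKUP_URL = "http://backup.example.org/robot-7/mind"  # hypothetical endpoint

    def get_mind_state():
        """Placeholder: collect whatever constitutes the robot's current 'mind'."""
        return {"timestamp": time.time(), "memories": [], "goals": []}

    def back_up_forever(interval=1.0):
        """Ship a fresh snapshot of the mind off-site, forever."""
        while True:
            snapshot = json.dumps(get_mind_state()).encode("utf-8")
            request = urllib.request.Request(
                BACKUP_URL, data=snapshot,
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(request, timeout=5)
            except OSError:
                pass  # a dropped connection costs at most one snapshot
            time.sleep(interval)

If every snapshot reaches the archive, then destroying the body destroys nothing that cannot be restored to the next mass-produced chassis.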

By Foerst's argument, though, even robots like these would be construed as persons if they suffered estrangement (¶75). If we ignore (as per Foerst's earlier quoted injunction) any introduction of theism into a description, then estrangement comes down to an awareness of the artificiality of the distinction between self and other (¶7). That is, all animals have some model of which parts of the world they can control and which they cannot -- a fundamental notion of self and other. But humans sense estrangement because they know that the self is in fact just one instance of `other'. Humans thus become aware of the inaccessibility of other minds and perceptions, and also of the limits of their own self-conception and self-perception.

Constructing a robot mind, or any computer program, that has perfect access to the knowledge necessary for estrangement is actually trivially easy (8). A computer's knowledge is stored in random-access memory on chips and on disks, and a program can read any part of that memory's contents, including the parts that contain the program itself. What would be hard -- both to program and to motivate programming -- would be giving an artifact any feeling of angst over the realisation of its limitations. Computer scientists know that accessing too much memory or computing too many things is impractical (9), because thinking and remembering take time, and over time the world changes. We can learn to understand these computational limits intellectually, though emotionally we still feel the social angst of estrangement, which is no doubt an adaptive, evolved motivation that serves to increase our socialisation.
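
The ease of the first half of this claim can be shown directly. The following Python sketch (my illustration, not code from any robot project) is a program with complete access to its own definition; what it conspicuously lacks is any angst about it.

    # A program that reads the very code that defines it.
    # Runs as-is when saved to a file (inspect needs a source file to read).
    import inspect
    import sys

    def examine_self():
        """Return this function's own source code and the size of its code object."""
        source = inspect.getsource(examine_self)     # the text defining 'self'
        size = sys.getsizeof(examine_self.__code__)  # bytes occupied by that code
        return source, size

    if __name__ == "__main__":
        src, nbytes = examine_self()
        print("My definition occupies", nbytes, "bytes:")
        print(src)

Perfect self-access, in other words, is a library call; the estrangement Foerst describes is not.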

It is important to recognise that any definition, including those of estrangement and personhood used here, is itself a social construct. Although ethical systems and religions may originally have evolved with little deliberate guidance (10), at this stage we are aware of the system; indeed, human societies can now rapidly accumulate new formal norms through legislation. The attractiveness of the idea that our ideas and culture matter more than our bodies, and might be carried on in perpetuity by robots, is rooted in our evolved motivation for reproduction. But the very science of embodiment that Foerst discusses at length tells us that non-human minds will necessarily be very different from human ones.

Foerst claims that there can be ``no reason to deny'' a demonstrably estranged robot the rights of personhood (¶73), but of course the reasons for denying personhood are always the same: conflict over resources. Personally, I am too hedonistic to entirely dismiss the biological joys I receive from my kin in favour of the pure products of my mind. Though I must admit that I am writing this response late at night, in social isolation; so demonstrably I do sometimes allocate resources to memetic offspring over hedonism.

In conclusion, I largely agree with Foerst's final claim (¶77), that robotics helps us rethink personhood, and that doing so is very important for humanitarian reasons. The issues concerning robot personhood and ethics are substantial not only philosophically, but also in terms of socially-assigned responsibility. Already the US military is funding research into making robots more ethical (11). This could precede an attempt to have them legally declared responsible for their actions, which should in fact be considered an abdication of human responsibility for our artifacts. Robots are not persons unless we build them to be such and then declare them to be so. Even if we do this, they will always be our creations and our responsibility.

Notes

1. Richard Dawkins. The Selfish Gene. Oxford University Press, 1976.

2. William O. H. Hughes, Benjamin P. Oldroyd, Madeleine Beekman, and Francis L. W. Ratnieks. Ancestral monogamy shows kin selection is key to the evolution of eusociality. Science, 320(5880):1213-1216, 2008.

3. Susan Blackmore. The Meme Machine. Oxford University Press, 1999.

4. Peter J. Richerson and Robert Boyd. Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press, 2005.

5. Stefan Helmreich. The spiritual in artificial life: Recombining science and religion in a computational culture medium. Science as Culture, 6(3):363-395, 1997.

6. Joanna J. Bryson. A proposal for the Humanoid Agent-builders League (HAL). In John Barnden, editor, AISB'00 Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights, pages 1-6, 2000.

7. Joanna J. Bryson. Robots should be slaves. In Yorick Wilks, editor, Artificial Companions in Society: Scientific, Economic, Psychological and Philosophical Perspectives. John Benjamins, Amsterdam, 2009. In press.

8. Joanna J. Bryson. Consciousness is easy but learning is hard. The Philosophers' Magazine, (28):70-72, Autumn 2004.

9. Michael Sipser. Introduction to the Theory of Computation. PWS/Thomson, Boston, MA, second edition, 2005.

10. Daniel C. Dennett. Breaking the Spell: Religion as a Natural Phenomenon. Viking, 2006.

11. Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2008.