Robots, Theology and the Personhood of Nonhumans.
A Critique.

Andreas Matthias, Lingnan University, Hong Kong
April 28, 2009

Abstract

In her paper, Anne Foerst explores a way of reconciling the theological view of human beings as images of God with what she labels "scientific anthropology," which she takes to be the metaphysical basis for the creations of Artificial Intelligence and Artificial Life ("AI" and "AL" in the following discussion). The argument, however, seems to suffer from a misapprehension of the metaphysical basis, the goals, and the actual variety of AI and AL technology, which far exceeds the class of artifacts considered by Foerst.

The components of "being human"

((1)) When we consider historical answers to modern problems, as is done in this text, it is helpful to remind ourselves what is new and unique about our present situation, and how it differs from the problems with which our ancestors had to concern themselves. One of the prominent features of modern technology, as opposed to traditional accounts of the world, is its attempt to divide concepts which had previously been considered atomic. The person ("a-tomon," "in-dividual": that which cannot be divided) nowadays falls apart into a set of interacting functional blocks: speech capacity, short- and long-term memory, optical image processing and pattern recognition, speech recognition, speech synthesis, emotional apparatus, reflexes, autonomic nervous system, mirror neurons as part of an empathy generator, and many others (for two classic accounts of how this is mirrored in the philosophy of AI, see Minsky (1988) and Ashby (1952)).
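
To make the modular picture concrete, here is a minimal sketch in Python. It is purely illustrative: the Entity interface and the module names are my own invention for this example, not taken from any of the cited systems.

```python
# Illustrative only: an "entity" assembled from optional, independently
# implementable functional blocks. Interface and module names invented.
from typing import Callable, Dict

class Entity:
    def __init__(self, **modules: Callable):
        # e.g. speech, vision, long_term_memory, empathy, ...
        self.modules: Dict[str, Callable] = dict(modules)

    def has(self, faculty: str) -> bool:
        return faculty in self.modules

    def invoke(self, faculty: str, *args):
        if not self.has(faculty):
            raise NotImplementedError(f"no '{faculty}' module present")
        return self.modules[faculty](*args)

# A chatbot-like entity: speech capacity without vision or motor control.
chatbot = Entity(speech=lambda text: "a canned reply to: " + text)
print(chatbot.has("speech"), chatbot.has("vision"))   # True False
print(chatbot.invoke("speech", "Hello!"))
```

Any subset of faculties can be present or absent independently of the others; nothing in the aggregate requires the whole ensemble that a human being happens to have.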

((2)) Historical categories, on the other hand, were not created with isolated subsystems in mind, and this generates all kinds of misunderstandings. No traditional account of personhood can deal adequately with the separation of the various faculties of a person and their individual technological re-creation: chat-bots today have almost perfect speech capabilities without any comprehension (Shieber 1993); intelligent vehicle navigation systems have perfect geographic knowledge and obstacle avoidance capabilities without any language processing features (Schultz 1991, Stancliff 2000); and banana ripeness is determined by electronic noses which can perfectly judge the smell of a banana without being able to perceive or judge anything else (Llobet 1999).
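
The first of these examples can be made tangible with a toy dialogue program in the spirit of the ELIZA-style systems discussed by Shieber (1993). All patterns and reply templates below are invented for illustration; the point is only that fluent-looking output requires no comprehension whatsoever:

```python
import re
import random

# A minimal ELIZA-style responder: fluent output, zero comprehension.
# Patterns and reply templates are invented for illustration only.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I), ["What makes you feel {0}?"]),
    (re.compile(r"\bbecause\b", re.I), ["Is that the real reason?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "I see."]

def reply(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I am worried about my robot."))
    # e.g. "Why do you say you are worried about my robot?"
```

The program merely reflects fragments of the input back inside canned templates; there is no model of meaning anywhere, and yet the exchange can appear conversational.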

Criteria of responsibility and personhood

((3)) Discussions of personhood, even relatively recent ones like Dennett's "six criteria" (Dennett 1978b, cited by Foerst ((72))), have often failed to take this modularity into account. I have argued elsewhere that Dennett's account, for example, includes the capability for verbal communication among the personhood criteria, although even a superficial examination shows that we do not reject the personhood claims of people who cannot speak our language, or any language we know, or possibly any language at all, as long as they otherwise seem to behave responsibly and in accordance with our own conceptions of "sensible" behaviour (Matthias 2004).

((4)) Full personhood status is closely related to the notion of a responsible agent, which is in turn built up out of functional subsystems, so that we can identify a list of properties which are necessary (although probably not sufficient) elements of personhood. The important thing to note is that this analysis allows us to distinguish clearly between the necessary properties of a responsible agent and those phenomena which are only incidental features of humans and do not represent necessary ingredients of every person-like entity (Matthias 2008).

((5)) Note also that there can be partial responsibility: in the case of compulsive gamblers or drug addicts, for example, the agent would be assumed to be responsible in all respects except those related to his addiction. This is a concept we find in many legal systems.
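A purely illustrative sketch of how such partial responsibility might be represented follows; the domain names and the exemption mechanism are invented for this example and are not a claim about any particular legal system:

```python
# Hypothetical sketch: responsibility as a per-domain property rather
# than a single all-or-nothing attribute of the agent. Domain names
# and the exemption set are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # Domains in which responsibility is suspended, e.g. by addiction.
    exempt_domains: set = field(default_factory=set)

    def is_responsible(self, domain: str) -> bool:
        return domain not in self.exempt_domains

gambler = Agent("compulsive gambler", exempt_domains={"gambling"})
print(gambler.is_responsible("gambling"))   # False: partial exemption
print(gambler.is_responsible("contracts"))  # True: responsible otherwise
```

Responsibility here is a mapping from domains of action to statuses, not a monolithic property; exactly the kind of decomposition that traditional, atomic accounts of personhood cannot express.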

Personhood as an aggregate of functional blocks

((6)) Foerst says in her abstract: "Artificial Life attempts to build humanoid robots according to the scientific anthropology," but this does not seem to correspond to the reality of AI research.

((7)) Foerst here still has the notion of robots as potential "images of God" in mind, as can be seen from her examples: integrated systems which aim to reproduce the whole human being, robotic creatures which try to simulate both the exterior appearance and the emotional apparatus of humans at once ((64)). But such machines are not really what AI and AL technology is about, or indeed what technology in general is about.

((8)) The naive understanding of technology always focuses first on the imitation of the visible properties of the thing to be emulated, on its shape and form, just as the first flying machines, both Leonardo's and Lilienthal's, were modelled after the shape of flying creatures. As the technology of flight matured, it increasingly distanced itself from emulating the gestalt of birds and instead reproduced, by very different means, some of their functional capabilities. Further progress tends to replace more and more of the remaining atavistic features with new, purely technological ways of achieving a function, ways which differ in principle from those of nature, so that the end product of this development loses all resemblance to the biological organism that once inspired its construction. This can be observed in the transition from wing-beating flying artifacts to fixed-wing planes, and on to jet propulsion and rockets; or in the development of central heating for houses and of cooking stoves, both of which initially imitated a naturally occurring phenomenon (fire) and over time turned into completely new implementations of a functional feature, alien to the original phenomenon (electrical heaters, infrared lamps, microwave ovens).

((9)) The basic misunderstanding of the text therefore lies in its assumption that AI technologies attempt to reproduce something of the shape and general habitus of a human. A close look at AI and AL technology shows that this is not the case. The reasons are evident, and they are essentially the same as for the airplane and the stove: what we aim for is not the imitation of gestalt but the simulation of a behaviour, of one particular, often isolated, functional characteristic, and this can be implemented technologically in ways far more efficient than imitating the designs of nature. If I want to create a bird, what is the point in building one for thousands of dollars out of metal and electronics, when I can have one for free every time I let an egg hatch? And, similarly, why simulate humans at immense expense and technological difficulty, when there is a far more pleasurable way to create them using the means of nature? What we really want from technology (despite the literary accounts of talking statues and Golems in the history of human imagination) is not artificial birds or humans, but machines which share some of a human's properties (but not all!): we don't want a computer which needs sleep, or which forgets, or a music player which on Mondays is in the mood to play only mournful songs. We want the perfection of the technological product, its ideal availability and readiness to serve, its ability to provide one particular, well-defined, completely predictable function for our use.

((10)) Kismet and similar projects ((64)) are mainly interesting to the press, and not so much to actual developers of soft- and hardware, because their simulation of life is atavistic in the same way that Leonardo's flying machines were reminiscent of birds, and it shares the same shortcomings. Computer viruses, for example, are nowadays much closer to artificial life than Kismet: they are programs which can reproduce, they can move autonomously from host to host, and they can evade attempts at detection, thus actively seeking to preserve their existence. But the general public does not perceive them as artificial life, and this shows how much our imagination is attached to the image, the gestalt factor, instead of the functional evaluation of a technological product. Artificial intelligence is already making medical diagnoses (Zhou 2002), fighting wars and autonomously killing humans (Sharkey 2008), controlling submarines, missiles and planes (Schultz 1991, Stancliff 2000), and even such everyday devices as elevators and traffic lights (OTIS 2003). It is everywhere, but it doesn't look like God's image, because for most application domains there is no point in making it look so. (Exceptions, of course, are electronic pets or dolls for the care of children and the elderly, where the imitation of the human or animal form is a design goal that facilitates emotional bonding between human and artifact.)
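
To see how little gestalt matters here, consider a deliberately harmless toy model of such self-reproducing programs. Everything below is confined to a simulated "network" of Python dictionaries; only reproduction and migration are modelled, and no real system is touched:

```python
import random

# A deliberately harmless toy: "digital organisms" reproducing and
# migrating inside a purely simulated network of dictionaries.
random.seed(1)  # reproducible run
hosts = [{"infected": i == 0} for i in range(6)]  # patient zero at 0

def step(hosts):
    # Every infected host copies the "organism" to one random host.
    for host in [h for h in hosts if h["infected"]]:
        hosts[random.randrange(len(hosts))]["infected"] = True

for generation in range(4):
    step(hosts)
    print(generation, [h["infected"] for h in hosts])
# The population spreads through the simulated network without
# anything in it ever "looking like" life.
```

A dozen lines of code exhibit reproduction and propagation, two textbook marks of life, while resembling nothing biological at all.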

Embodiment and self-awareness

((11)) Foerst also speaks of the impossibility of self-awareness without embodiment: "Self-awareness is not a mental concept but an embodied one and it occurs only in a healthy body" ((43)). To this there is the obvious reply that up to now we know so little about self-awareness that it is impossible to say where it is "only" supposed to occur and where not. There is no evidence whatsoever that a purely symbolic system cannot achieve self-awareness; in fact, there is some evidence to the contrary, since a chess-playing computer, for example, already fulfills some criteria for self-awareness: it is aware of the distinction between its own pieces and the opponent's, it can plan its actions and maximize its profit, and it can trick the opponent by sacrificing pieces in order to gain a future advantage (see the sketch below). But besides this reply, we must also see that the notion of "embodiment," where technological devices are concerned, encompasses many more modes of existence than just being stuck in a body made of flesh or of steel and plastic. If I log into Second Life, then I am in a very real sense "embodied" there: I can meet my friends, I can see and interact with their virtual "bodies" in a simulation of three-dimensional space. I can talk, I can gesture, I can buy clothes and dress up, I can speak to others using my voice, I can dance. I have a whole range of bodily expressions at my disposal, and still this body is a body "somewhere else," in that it doesn't share the same 3D space with me as a biological entity. But is it less of a body for that? And is it, where the epistemology of perception is concerned, in any significant way different from a body made of flesh which moves around physical space? If I concede "embodiment" to avatars in Second Life, however, I must also grant it to other creatures in virtual space. A computer virus or an autonomous software agent which perceives its (virtual) environment with its particular set of sensors, processes the sensory information, and acts upon it by changing its environment: how can I deny that such an organism is, in its particular way, "embodied"?
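
The chess example can be made concrete. The sketch below is not a chess engine; a tiny "coins in a row" game (take a coin from either end of the row) stands in for chess, and the minimax procedure is the textbook one. What it shows is the structure in question: the evaluation separates the program's "own" material from the opponent's, and lookahead lets the program accept a smaller gain now in order to secure a larger one later, i.e. to "sacrifice":

```python
# Toy stand-in for chess: players alternately take a coin from either
# end of the row. The value returned is "my" total minus the
# opponent's, i.e. the evaluation distinguishes own material from the
# opponent's -- the self/other distinction mentioned in the text.

def minimax(coins, my_turn=True):
    if not coins:
        return 0, None
    best = None
    for side in ("left", "right"):
        coin, rest = ((coins[0], coins[1:]) if side == "left"
                      else (coins[-1], coins[:-1]))
        value, _ = minimax(rest, not my_turn)
        value += coin if my_turn else -coin
        if (best is None
                or (my_turn and value > best[0])
                or (not my_turn and value < best[0])):
            best = (value, side)
    return best

# Greedy play would grab the 3; full lookahead "sacrifices" it and
# takes the 2 instead, because that secures the 9 on the next move.
print(minimax([3, 9, 1, 2]))   # (7, 'right')
```

Whether such planning amounts to self-awareness is of course the open question; the point is merely that the self/other bookkeeping and the deferral of gains are already mechanically realizable.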

((12)) But then, if we concede that self-awareness is, to a large extent anyway, determined by the sensory inputs and the motor "feel" of having a particular type of body, we must assume that the creations of AI and AL software, which have senses very different from ours, use different modes of locomotion in virtual space, and have a wholly different set of possible manipulative actions at their disposal, are bound to develop a self-awareness which fundamentally differs from ours, possibly to an extent that will make it hard for us even to label it "proper" self-awareness. From the viewpoint of an organism living in such a different world, however, it will be full self-awareness, far better tailored to the needs of the (virtual) organism in that world than our biological version of it would be.

Conclusion

((13)) Foerst's account of personhood is insufficient to describe the range of phenomena we are observing even at this early stage of technological development. Our personhood as biological creatures will necessarily differ from the mode of personhood adequate to a machine which can perfectly drive a car, understand road law, and handle road emergencies, but has no concept of language, or of history, or of art; or, conversely, from that adequate to a machine which is capable of perfect spoken discourse but lacks a body, existing only as a set of procedures inside a computer's memory. And this raises the question of whether it will ever be possible, as Foerst concludes, to use robots "as thinking tools to analyze our own concept of personhood afresh" ((77)), assigning personhood "to all human beings, despite our prejudices and judgments."

((14)) AI and AL creatures, far from integrating different notions of personhood, will most likely teach us to perceive personhood as a set of properties, of functional blocks, instead of an integral whole, fully in accordance with the history of similar concepts in the past. Instead of forcing us to assign personhood to everything that has a human form, the differentiation of the concept of personhood will force us to see and to accept that there can well be partial personhood. We are already learning to accept that personhood is not an indivisible whole and that it does not require a human body, just as flight is not a property of birds alone and does not require us to build airplanes out of feathers.

References

Ashby, W.R. (1952) Design for a Brain. New York: Wiley.

Dennett, D.C. (1978a) "Intentional Systems", in Brainstorms. Philosophical Essays on Mind and Psychology, MIT Press, 3-22.

Dennett, D.C. (1978b) "Conditions of Personhood", in Brainstorms. Philosophical Essays on Mind and Psychology, MIT Press, 267-285.

Dworkin, G. (1987) "Intention, Foreseeability, and Responsibility", in F. Schoeman (ed.), Responsibility, Character, and the Emotions. New Essays in Moral Psychology, Cambridge: Cambridge University Press, 338-354.

Fischer, J.M. and Ravizza, M. (1998) Responsibility and Control. A Theory of Moral Responsibility, Cambridge Studies in Philosophy and Law, Cambridge: Cambridge University Press.

Foner, L.N. (1993) "What's an Agent, Anyway? A Sociological Case Study." Available online: http://foner.www.media.mit.edu/people/foner/Julia/Julia-cite.html (accessed April 28, 2009)

Frankfurt, H. (1971) "Freedom of the Will and the Concept of a Person", Journal of Philosophy, LXVIII, 5-21.

Llobet, J.W.G. and Franco, S. (1999) "Non-Destructive Banana Ripeness Determination Using a Neural Network-Based Electronic Nose", Measurement Science and Technology 10, 538-548.

Matthias, A. (2004) "Responsibility Ascription to Nonhumans. Climbing the Steps of the Personhood Ladder", in Ikäheimo, H., Kotkavirta, J., Laitinen, A. and Lyyra, P. (eds), Personhood: Workshop Papers of the Conference "Dimensions of Personhood" (August 13-15, 2004), Jyväskylä, Finland: Publications in Philosophy 68.

Matthias, A. (2008) Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänderung (Automata as Holders of Rights: A Plea for a Change in the Law), Berlin: Logos Verlag.

Minsky, M.L. (1988) The Society of Mind, New York: Simon & Schuster.

OTIS Elevators (2003) Elevonic Class. Redefining Elevator Performance, Safety and Comfort: The OTIS Elevonic Class. Product description, available online: http://www.otis.com

Schultz, A.C. (1991) "Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation", in Proceedings of the Seventh International Symposium on Unmanned Untethered Submersible Technology, University of New Hampshire Marine Systems Engineering Laboratory, 213-215.

Sharkey, N. (2008) "Grounds for Discrimination: Autonomous Robot Weapons", RUSI Defence Systems, October 2008. Available online: http://www.rusi.org/downloads/assets/23sharkey.pdf

Shieber, S.M. (1993) "Lessons from a Restricted Turing Test" Aiken Computation Laboratory, Division of Applied Sciences, Harvard University. Entry dated April 15, 1993 (rev 5). Available online: http://www.eecs.harvard.edu/shieber/Biblio/Papers/loebner-rev-html/loebner-rev-html.html

Stancliff, S.B. and Nechyba, M.C. (2000) "Learning to Fly: Modeling Human Control Strategies in an Aerial Vehicle". Machine Intelligence Laboratory, Electrical and Computer Engineering, University of Florida. Available online: http://www.mil.ufl.edu/publications

Wolf, S. (1987) "Sanity and the Metaphysics of Responsibility", in F. Schoeman (ed.), Responsibility, Character, and the Emotions. New Essays in Moral Psychology, Cambridge: Cambridge University Press, 46-62.

Zhou, Z.H., Jiang, Y., Yang, Y.B. and Chen, S.F. (2002) "Lung Cancer Cell Identification Based on Artificial Neural Network Ensembles", Artificial Intelligence in Medicine 24(1), 25-36.

Address:

Andreas Matthias
Dept. of Philosophy
Lingnan University
Tuen Mun, N.T.
Hong Kong
matthias@ln.edu.hk