Andreas Matthias, Lingnan University, Hong Kong
((2)) Historical categories, on the other hand, have not been created with isolated subsystems in mind, and this generates all kinds of misunderstandings. No traditional account of personhood can deal adequately with the separation of the various faculties of a person and their individual technological re-creation: chat-bots today have almost perfect speech capabilities without comprehension (Shieber 1993); intelligent vehicle navigation systems have perfect geographic knowledge and obstacle avoidance capabilities, without any language processing features (Schultz 1991, Stancliff 2000). Banana ripeness determination is done with electronic noses which can perfectly judge the smell of a banana, without being able to perceive or judge anything else (Llobet 1999).
((4)) Full personhood status is closely related to the notion of a responsible agent, and this is in turn built up out of functional subsystems, so that we can identify a list of properties which are necessary (although probably not sufficient) elements of personhood. The important thing to note is that this analysis allows us to distinguish clearly between the necessary properties of a responsible agent and those phenomena which are only incidentally features of humans, but do not represent necessary ingredients of every person-like entity (Matthias 2008):
((5)) Note also that there can be partial responsibility: in the case of compulsive gamblers or drug addicts, for example, the agent would be assumed to be responsible in all respects except those related to his addiction. This is a concept we find in many legal systems.
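The idea of domain-limited responsibility described above can be made concrete in a small sketch. This is purely illustrative: the class, the domain names, and the exemption mechanism are hypothetical constructions, not drawn from any legal code or from the author's formal apparatus.

```python
# Illustrative sketch: responsibility modeled as full by default,
# with specific domains (e.g. those touched by an addiction) exempted.

class Agent:
    def __init__(self, name, exempt_domains=()):
        self.name = name
        self.exempt_domains = set(exempt_domains)

    def responsible_for(self, domain):
        # Responsible in all respects except the exempted domains.
        return domain not in self.exempt_domains

gambler = Agent("compulsive gambler", exempt_domains={"gambling"})
print(gambler.responsible_for("gambling"))   # False
print(gambler.responsible_for("contracts"))  # True
```

The point of the sketch is only that "partial responsibility" is representable as an ordinary set difference: nothing in the model requires responsibility to be all-or-nothing.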
((7)) Foerst here still has the notion of robots as potential "images of God" in mind, as her examples show: integrated systems which aim to reproduce the whole human being, robotic creatures which try to simulate both the exterior appearance and the emotional apparatus of humans ((64)). But these machines are not really what AI and AL technology is about, or indeed what technology in general is about.
((8)) The naive understanding of technology always focuses first on imitating the visible properties of the thing to be emulated, its shape and form, just as the first flying machines, both Leonardo's and Lilienthal's, were modelled after the shape of flying creatures. As the technology of flight matured, it increasingly distanced itself from emulating the gestalt of birds and moved toward reproducing, with very different means, some of their functional capabilities. Further progress tends to replace more and more of the remaining atavistic features with new, purely technological ways of achieving a function, ways which differ in principle from those of nature, so that the end product of this development loses all resemblance to the biological organism that once inspired its construction. This can be observed in the transition from wing-beating flying artifacts to fixed-wing planes, and on to jet propulsion and rockets; or in central heating devices for houses and in cooking stoves, which all initially imitated some naturally occurring phenomenon (fire, in the latter cases) and developed over time into completely new implementations of a functional feature, in ways alien to the original phenomenon (electric heaters, infrared lamps, microwave ovens).
((9)) The basic misunderstanding of the text therefore lies in its assumption that AI technologies attempt to reproduce something of the shape and general habitus of a human. A close look at AI and AL technology shows that this is not the case. The reasons are evident, and they are essentially the same as for the airplane and the stove: what we aim for is not the imitation of gestalt but the simulation of a behaviour, of one particular, often isolated, functional characteristic, and this can be implemented technologically in ways far more efficient than imitating the designs of nature. If I want to create a bird, what is the point of building one for thousands of dollars out of metal and electronics, when I can have one for free every time I let an egg hatch? And, similarly, why simulate humans at immense expense and technological difficulty, when there is a far more pleasurable way to create them using the means of nature? What we really want from technology (despite the literary accounts of talking statues and Golems in the history of human imagination) are not artificial birds or humans, but machines which share some of a human's properties (but not all!): we do not want a computer which needs sleep, or which forgets, or a music player which on Mondays is in the mood to play only mournful songs. We want the perfection of the technological product, its ideal availability and readiness to serve, its ability to provide one particular, well-defined, completely predictable function for our use.
((10)) Kismet and similar projects ((64)) are mainly interesting to the press, and not so much to actual developers of soft- and hardware, because their simulation of life is atavistic in the same way that Leonardo's flying machines were reminiscent of birds, and it shares the same shortcomings. Computer viruses today are, for example, much closer to artificial life than Kismet: they are programs which can reproduce, move autonomously from host to host, and evade attempts at detection, thus actively seeking to preserve their existence. But the general public does not perceive them as artificial life, and this shows how much our imagination is attached to the image, the gestalt factor, rather than to the functional evaluation of a technological product. Artificial intelligence is already making medical diagnoses (Zhou 2002), fighting wars and autonomously killing humans (Sharkey 2008), and controlling submarines, missiles and planes (Schultz 1991, Stancliff 2000), and even such everyday devices as elevators and traffic lights (OTIS 2003). It is everywhere, but it does not look like God's image, because for most application domains there is no point in making it look so. (Exceptions, of course, are electronic pets and puppets for the care of children and the elderly, where the imitation of the human or animal form is a design goal, meant to facilitate emotional bonding between human and artifact.)
((12)) But then, if we concede that self-awareness is, to a large extent anyway, determined by the sensory inputs and the motor "feel" of having a particular type of body, we must assume that the creations of AI and AL software, having senses so different from ours, using different modes of locomotion in virtual space, and having a wholly different set of possible manipulative actions at their disposal, are bound to develop a sense of self-awareness which fundamentally differs from ours; possibly to an extent that will make it hard for us even to label it "proper" self-awareness. Yet from the viewpoint of somebody living in such a different world, it will be full self-awareness, far better tailored to the needs of the (virtual) organism in that world than our biological version of it.
((14)) AI and AL creatures, far from integrating different notions of personhood, will most likely teach us to perceive the concept of personhood as a set of properties, of functional blocks, instead of an integral whole, fully in accordance with the history of other, similar concepts in the past. Instead of forcing us to assign personhood to everything that looks like a human form, the differentiation of the concept of personhood will force us to see and accept that there can well be partial personhood. We are already on the way to accepting that personhood is not an indivisible whole and that it does not require a human body: just as flight is not a property of birds alone and does not require us to build airplanes out of feathers.
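The claim that personhood decomposes into functional blocks can be sketched as a simple set-of-properties model. This is a toy illustration only: the property list is hypothetical and deliberately incomplete, and nothing hangs on the particular names chosen.

```python
# Toy model: personhood as a set of functional properties rather than
# an all-or-nothing attribute tied to the human form.

FULL_PERSONHOOD = {"language", "memory", "perception",
                   "self-awareness", "moral agency"}

def personhood_degree(properties):
    """Fraction of the (hypothetical) necessary properties an entity has."""
    return len(properties & FULL_PERSONHOOD) / len(FULL_PERSONHOOD)

chatbot = {"language"}        # speech without comprehension
nose = {"perception"}         # judges banana smells, nothing else
human = set(FULL_PERSONHOOD)

print(personhood_degree(chatbot))  # 0.2
print(personhood_degree(human))    # 1.0
```

On such a model, "partial personhood" is no more mysterious than a set intersection: an electronic nose or a chatbot simply occupies a point between zero and one, with no human body anywhere in the definition.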
Dennett, D.C. (1978a) "Intentional Systems", in Brainstorms. Philosophical Essays on Mind and Psychology, MIT Press, 3-22.
Dennett, D.C. (1978b) "Conditions of Personhood", in Brainstorms. Philosophical Essays on Mind and Psychology, MIT Press, 267-285.
Dworkin, G. (1987) "Intention, Foreseeability, and Responsibility", in F. Schoeman (ed.), Responsibility, Character, and the Emotions. New Essays in Moral Psychology, Cambridge: Cambridge University Press, 338-354.
Fischer, J.M. and Ravizza, M. (1998) Responsibility and Control. A Theory of Moral Responsibility, Cambridge Studies in Philosophy and Law, Cambridge: Cambridge University Press.
Foner, L.N. (1993) "What's an Agent, Anyway? A Sociological Case Study." Available online: http://foner.www.media.mit.edu/people/foner/Julia/Julia-cite.html (accessed April 28, 2009)
Frankfurt, H. (1971) "Freedom of the Will and the Concept of a Person", Journal of Philosophy, LXVIII, 5-21.
Llobet, J.W.G. and Franco, S. (1999) "Non-Destructive Banana Ripeness Determination Using a Neural Network-Based Electronic Nose", Measurement Science and Technology 10, 538-548.
Matthias, A. (2004) "Responsibility Ascription to Nonhumans. Climbing the Steps of the Personhood Ladder", in Ikäheimo, H., Kotkavirta, J., Laitinen, A., Lyyra, P. (eds), Personhood. Workshop Papers of the Conference "Dimensions of Personhood" (August 13-15, 2004), Jyväskylä, Finland: Publications in Philosophy 68.
Matthias, A. (2008) Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänderung (Automata as Holders of Rights. A Plea for a Change in Law), Berlin: Logos Verlag.
Minsky, M.L. (1988) The Society of Mind, New York: Simon & Schuster.
OTIS Elevators (2003) Elevonic Class. Redefining Elevator Performance, Safety and Comfort: The OTIS Elevonic Class. Product description, available online: http://www.otis.com
Schultz, A.C. (1991) "Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation," In: Proceedings of the Seventh Intl. Symp. on Unmanned Untethered Submersible Technology. Univ. of New Hampshire Marine Systems Engineering Laboratory, pp. 213-215
Sharkey N. (2008) "Grounds for Discrimination: Autonomous Robot Weapons" Rusi Defence Systems, October 2008. Available online: http://www.rusi.org/downloads/assets/23sharkey.pdf
Shieber, S.M. (1993) "Lessons from a Restricted Turing Test" Aiken Computation Laboratory, Division of Applied Sciences, Harvard University. Entry dated April 15, 1993 (rev 5). Available online: http://www.eecs.harvard.edu/shieber/Biblio/Papers/loebner-rev-html/loebner-rev-html.html
Stancliff, S.B. and Nechyba, M.C. (2000) "Learning to Fly: Modeling Human Control Strategies in an Aerial Vehicle". Machine Intelligence Laboratory, Electrical and Computer Engineering, University of Florida. Available online: http://www.mil.ufl.edu/publications
Wolf, S. (1987) "Sanity and the Metaphysics of Responsibility", in F. Schoeman (ed.), Responsibility, Character, and the Emotions. New Essays in Moral Psychology, Cambridge: Cambridge University Press, 46-62.
Zhou, Z.H., Jiang, Y., Yang, Y.B. and Chen, S.F. (2002) "Lung Cancer Cell Identification Based on Artificial Neural Network Ensembles", Artificial Intelligence in Medicine 24(1), 25-36.