Intention and Rejection:

Basic Principles for Subjective Robots

 

by Professor Bernhard Mitterauer, BFel.

Institute of Forensic Neuropsychiatry and Gotthard Günther Archives

University of Salzburg, Austria

 

 

INTRODUCTION

Currently there is a worldwide endeavour to develop mobile autonomous robots. According to Gates (2007), robots will play a decisive role in everyday life in the near future. However, robots should also be capable of exploring uncertain and changing domains of their environment. To cope with this challenge, a biomimetic approach is pursued: the architecture of the technical devices and their programming are modeled on living systems (animals or even humans). This approach is advancing in giant steps, especially with regard to sensory and motor systems. But the question arises: are these agents really autonomous, or do they lack basic functions that command and control biological brains?

 

Here, I propose an outline of a model of autonomous robots that may show at least a “touch” of subjectivity in the sense of animal-like or even human-like behavior. In addition to animal-like sensory and motor systems, the following basic principles should be implemented in a robot brain:

- intentional programming

- logic of acceptance and rejection

But first the underlying brain model will be outlined.

 

 

BIOCYBERNETIC BRAIN MODEL

 

I focus on those structures and functions of the brain that could serve as the basis for the construction of a robot brain with properties of subjectivity (Fig. 1).

 

 


Our brain has a double structure consisting of the neuronal system and the glial system. Both systems interact on various time scales. According to my brain theory, however, the glial system dominates the neuronal system, as follows. First, glial cells exert a spatiotemporal boundary-setting function on information processing in neuronal networks, so that the glia structure information processing in the brain (Mitterauer, 1998; Mitterauer & Kopp, 2003). Second, intentional programs may be generated in glial networks (syncytia). The neuronal networks test the feasibility of these programs in the environment via the sense organs (Mitterauer, 2006). Third, the neuronal system feeds this information back to the glial system: positive feedback in case of feasibility, negative feedback in case of non-feasibility. This mechanism allows an optimization of the glial intentional programming. Fourth, since the interaction of the brain with the environment is based on intentional programs, the brain must, on the one hand, be able to reject non-feasible (inappropriate) information from the environment, and, on the other hand, be able to accept appropriate information.
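To make this feedback mechanism concrete, the following minimal sketch reduces the glial-neuronal loop to its bare logic: a glial component proposes intentional programs, a neuronal test reports their feasibility against the environment, and positive or negative feedback reshapes subsequent proposals. The names (GlialSystem, propose_program, neuronal_test) and the simple weight-update rule are illustrative assumptions, not taken from the cited work.

import random

class GlialSystem:
    """Proposes intentional programs and adapts them via neuronal feedback."""

    def __init__(self, value_range):
        # One preference weight per possible intentional value.
        self.weights = {v: 1.0 for v in value_range}

    def propose_program(self, length):
        # Sample a program, biased toward previously feasible values.
        values = list(self.weights)
        return random.choices(values,
                              weights=[self.weights[v] for v in values],
                              k=length)

    def feedback(self, value, feasible):
        # Positive feedback strengthens a value, negative feedback weakens it.
        self.weights[value] *= 1.2 if feasible else 0.8

def neuronal_test(program, environment):
    # The neuronal system tests each intended value via the sense organs;
    # here, feasibility is reduced to presence in the environment.
    return [(iv, iv in environment) for iv in program]

glia = GlialSystem(value_range=[1, 2, 3, 4])
environment = {1, 2}  # object values actually present

for cycle in range(20):
    program = glia.propose_program(length=4)
    for iv, feasible in neuronal_test(program, environment):
        glia.feedback(iv, feasible)

print(glia.weights)  # the feasible values (1, 2) come to dominate

Over repeated cycles the glial proposals drift toward programs that are realizable in the environment, which is the optimization effect described above.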

 

Accordingly, our brain already determines intrasystemically (via glial-neuronal interaction) which domains of the environment can be accepted and which ones must be rejected, based on the intentional programming at time x. One can also say that non-intended information is irrelevant.

 

LOGIC OF ACCEPTANCE AND REJECTION

 

From a logical point of view, I am convinced that the German-American philosopher Gotthard Guenther developed a logic of subjectivity as early as 1962 (“cybernetic ontology”). If we are capable of implementing this logic in robot brains, then these artifacts may show subjective behavior. Of decisive importance is the novel rejection operator, which is not applied in classical logical systems. Formally, rejection means that a logical value that does not occur in a given alternative of values rejects that alternative as a whole. In contrast, if the value does occur in the alternative, the pertinent value of the alternative is accepted. In other words, the logic of subjectivity works as an interplay between acceptance and rejection (Guenther, 1975). Most importantly, according to Guenther (1962), the rejection operator is an index of subjectivity. Thus, the increasing capability of a system to realize its intentional programs in the environment makes its behavior more subjective and individual.
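The operator itself can be stated in a few lines. The following sketch is purely illustrative; representing an “alternative of values” as a set is my simplification, not Guenther's notation.

def evaluate(intentional_value, alternative):
    # Accept the pertinent value if it occurs in the alternative;
    # otherwise the value rejects the alternative as a whole.
    if intentional_value in alternative:
        return ("accept", intentional_value)
    return ("reject", alternative)

print(evaluate(1, {1, 2}))  # ('accept', 1): the value occurs in the alternative
print(evaluate(3, {1, 2}))  # ('reject', {1, 2}): the value rejects the whole alternative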

 

Table 1 shows a simple application of the principles of acceptance and rejection in robotics.

 

Table 1.

Example of an intentional program (1,3,2,4), which either accepts or rejects objects (1,2) in the environment

 

 

Steps for exploring    Objects in the environment    Intentional Program of Subject
the environment        object values (ov)            intentional values (iv)    results
---------------------------------------------------------------------------------------------------
step 1                 1, 1                          1                          acceptance of ov (1,1)
step 2                 1, 2                          3                          rejection of ov (1,2)
step 3                 2, 1                          2                          acceptance of ov (2)
step 4                 2, 2                          4                          rejection of ov (2,2)

 

A robot tries to realize its intentional program, depicted as intentional values iv (1, 3, 2, 4), in four steps (step 1...4). Two objects, with their respective object values ov (1, 2), are found in the environment. In the first step, the detected object ov (1) is accepted. In the second step, the intentional value iv (3) rejects both objects ov (1, 2). In the third step, ov (2) is accepted. In the fourth step, finally, iv (4) rejects both object values ov (2, 2).

 

 

Suppose the robot moves through an environment in which two different objects are present, designated as object value 1 (ov 1) and object value 2 (ov 2). The robot has an intentional program consisting of the intentional values (iv) 1, 3, 2, 4. The robot sets out to find the intended objects in four steps. At the first step, the detected object (1) corresponds to the robot's intention (1), so it accepts the object. At the second step, however, none of the present objects (1, 2) corresponds to the intentional value (iv) 3, so the robot rejects both objects and moves on. At the third step, one of the two objects (2) corresponds to the robot's intention (2), so it accepts the object. At the fourth step, finally, the intentional value (iv) 4 rejects both objects (2, 2). If a robot shows rejecting behavior by ignoring non-intended objects and moving on, this rejection behavior could indicate that the robot is already equipped with some subjectivity.
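This walk-through can be reproduced by a short loop. The sketch below is merely illustrative: the per-step object pairs are those of Table 1, and all names are hypothetical.

INTENTIONAL_PROGRAM = [1, 3, 2, 4]                    # iv for steps 1...4
OBJECTS_PER_STEP = [(1, 1), (1, 2), (2, 1), (2, 2)]   # ov pairs encountered

for step, (iv, objects) in enumerate(zip(INTENTIONAL_PROGRAM,
                                         OBJECTS_PER_STEP), start=1):
    matches = [ov for ov in objects if ov == iv]
    if matches:
        print(f"step {step}: acceptance of ov ({', '.join(map(str, matches))})")
    else:
        # No object corresponds to the intentional value:
        # the whole alternative is rejected and the robot moves on.
        print(f"step {step}: rejection of ov ({', '.join(map(str, objects))})")

Running the loop prints exactly the four results of Table 1.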

 

 

MODEL OF A SUBJECTIVE ROBOT

 

As initially mentioned, robotics is now biomimetically oriented and, based on this approach, very successful as well. Besides animal-like sensory devices, motor systems and their coordination (Ijspeert et al., 2007) are very impressive and represent the “conditio sine qua non” for subjective robots. In addition, command and control systems are programmed to learn, to self-organize, and even to self-model (Bongard et al., 2006). However, these systems do not show subjective behavior in the sense of a kind of self-determination, since they are not really intentionally programmed and are thus incapable of rejecting inappropriate information. So far, the aim of robotics has been to optimize the adaptation of an agent to its environment, not to actively influence the environment.

 

As a first step towards intentional programming of a robotic system, I developed a “clocked perception system” (Mitterauer, 2001; 2004). Meanwhile, we have been able to demonstrate a computer simulation of this system (Zinterhof, 2006). This perception system intends environmental information, but it also rejects information (objects, colours, etc.) that is not appropriate to the intentional programs. Here we are not dealing with a pattern recognition system, but with a pattern generation system that is determined by the intentional programming. Such a technical perception system is comparable to human vision, where over 90 % of the sensory information must be rejected in order to see a distinct picture. Therefore, a robot equipped with this novel perception system may show a “touch” of subjectivity.
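The following sketch illustrates, in strongly simplified form, the difference between recognizing everything and generating a percept from an intentional program: only intended features are accepted, and the bulk of the input is rejected. The feature encoding and the function name are assumptions for illustration; this is not the clocked perception system itself.

def perceive(sensory_input, intentional_program):
    # Keep only the intended features; everything non-intended is rejected.
    accepted = [f for f in sensory_input if f in intentional_program]
    rejected = [f for f in sensory_input if f not in intentional_program]
    return accepted, rejected

# A scene described as (object, colour) features; the program intends red things.
scene = [("ball", "red"), ("cube", "blue"), ("cone", "green"), ("disk", "red")]
program = {("ball", "red"), ("disk", "red")}

accepted, rejected = perceive(scene, program)
print("accepted:", accepted)  # only the intended features form the percept
print("rejected:", rejected)  # the rest is discarded, as in human vision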


 

Fig. 2 represents a schematic diagram of a robot that stepwise elaborates its subjectivity. In a distinct operation procedure (t1…t4), the perception system generates four intentional programs (IP1…IP4). Since the robot is moving, the domains of the environment change over time (E1…E4). In the example shown, the robot has to compute information from two different environmental domains (E1; E2). It operates according to the logic of acceptance (A) and rejection (R): IP1 rejects E1 and accepts E2; IP2 accepts both E1 and E2; in contrast, IP3 and IP4 reject both environmental domains (E1; E2).
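The acceptance/rejection pattern of Fig. 2 can be written down as a small decision table; the dictionary encoding below is merely illustrative.

# Each intentional program either accepts (A) or rejects (R) each domain.
DECISIONS = {
    "IP1": {"E1": "R", "E2": "A"},
    "IP2": {"E1": "A", "E2": "A"},
    "IP3": {"E1": "R", "E2": "R"},
    "IP4": {"E1": "R", "E2": "R"},
}

rejections = sum(d == "R" for row in DECISIONS.values() for d in row.values())
acceptances = sum(d == "A" for row in DECISIONS.values() for d in row.values())
print(f"rejections: {rejections}, acceptances: {acceptances}")  # 5 vs. 3

Counting the entries shows five rejections against three acceptances, which is the dominance of rejection behavior discussed below.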

 

This example shows that the robot's interaction with the environment is rather restrictive, since its intentions may only be realizable in other domains of the environment, which have yet to be explored. Such “self-determined” behavior of a robot could be interpreted by an observer as subjective. In any case, according to the logic of subjectivity proposed here, this example demonstrates that rejection behavior dominates acceptance behavior. Finally, one should differentiate between a predominantly biomimetic behavior and a distinctly subjective behavior of a robot. In other words, animal-like behavior is not necessarily subjective behavior. Therefore, I propose to use “subjective” and “human-like” synonymously. Admittedly, we are then faced with difficult problems like consciousness and free will. However, this is not the topic of the present paper (see Mitterauer, 1998; 2000; 2006). Here, merely an attempt is made to propose two basic principles of subjective (human) systems, namely intention and rejection.

 

 

 

FUTURE PROSPECTS

 

Perhaps one of the greatest breakthroughs in robotics was achieved by Bongard and co-workers (2006). Their four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, the machine adapts its self-model, leading to the generation of alternative gaits. The researchers conclude that this concept may help develop more robust machines and shed light on self-modelling in animals. In a commentary, Adami (2006) over-interpreted this self-modelling robotic system; he even speaks of a test bed for models of self-awareness. However, we should be cautious about relating this self-modelling mechanism to consciousness or self-consciousness. In addition, we should be aware that brain research is at a stalemate as far as the exploration of consciousness or self-consciousness is concerned, and I am afraid that this situation will not significantly change. As mentioned, it may be robotics alone that improves this situation over the years to come.

 

The paper presented here is a modest attempt to propose basic principles of subjectivity for robotics. It is also biomimetic, since it is based on a brain model. If it is possible – and I am convinced it is – to implement intentional programming and the logic of acceptance and rejection in a robot brain, then such a robot may show a “touch” of subjective behavior. Since subjectivity connotes emotion, free decision processes and consciousness, the behavior of these robots will teach us whether we have really implemented subjectivity. Moreover, since subjective human behavior is characterized by the capability to actively change the environment, the optimization techniques of subjective robots could reach a stage where they are also able to actively change their environments and robot-man interactions. Then ethics enters the picture. That is presently science fiction. However, since my model of subjectivity is formally grounded (Pfalzgraf & Mitterauer, 2005), it may be good science fiction, becoming reality in this or the next century (Mitterauer, 1989).

 

 

 

REFERENCES

 

Adami, C. (2006). What do robots dream of? Science 314: 1093-1094.

Bongard, J., Zykov, V., Lipson, H. (2006). Resilient machines through continuous self-modelling. Science 314: 1118-1121.

Gates, W.H. (2007). Roboter für jedermann. Spektrum der Wissenschaft 3: 36-45.

Guenther, G. (1962). Cybernetic ontology and transjunctional operations. BCL Publication 68. Biological Computer Laboratory, Urbana, Illinois.

Guenther, G. (1975). Das Janusgesicht der Dialektik. In: Hegel-Jahrbuch, Beyer, W.R. (ed.). Pahl-Rugenstein, Köln, pp. 98-117.

Ijspeert, A.J., et al. (2007). From swimming to walking with a salamander robot driven by a spinal cord model. Science 315: 1416-1420.

Mitterauer, B. (1989). Architektonik. Entwurf einer Metaphysik der Machbarkeit. Brandstätter, Vienna.

Mitterauer, B. (1998). An interdisciplinary approach towards a theory of consciousness. BioSystems 45: 99-121.

Mitterauer, B. (2000). Some principles for conscious robots. Journal of Intelligent Systems 10: 27-56.

Mitterauer, B. (2001). Clocked perception system. Journal of Intelligent Systems 11: 269-297.

Mitterauer, B. (2004). Computer system, particularly for simulation of human perception via sense organs. United States Patent US 6,697,789 B2.

Mitterauer, B. (2006). Where and how could intentional programs be generated in the brain? A hypothetical model based on glial-neuronal interactions. BioSystems 88: 101-112.

Mitterauer, B., Kopp, C. (2003). The self-composing brain. Towards a glial-neuronal brain theory. Brain and Cognition 51: 357-367.

Pfalzgraf, J., Mitterauer, B. (2005). Towards a biomathematical model of intentional multiagent systems. In: Eurocast 2005, LNCS 3643, Moreno Diaz et al. (eds.). Springer, Berlin, pp. 577-583.

Zinterhof, P. (2006). Biomimetische Simulation von Hirnfunktionen. PhD thesis, University of Salzburg.


