.pn 0
.ls1
.EQ
delim $$
.EN
.ev1
.ps-2
.vs-2
.ev
\&
.sp 10
.ps+4
.ce
IN SEARCH OF ``AUTONOMY''
.ps-4
.sp4
.ce
Ian H. Witten
.sp2
.ce4
Department of Computer Science
The University of Calgary
2500 University Drive NW
Calgary, Canada T2N 1N4
.sp2
.sh "Abstract"
.pp
This paper examines the concept of autonomy as it pertains to computer
systems.
Two rather different strands of meaning are identified.
The first regards autonomy as self-government or self-motivation.
This is developed by reviewing some recent AI research on representing and
using goals, together with physiological, psychological, and philosophical
viewpoints on motivation and goal-seeking behavior.
The second concerns the biological independence of organisms which have the
ability to maintain their own organization in a capricious environment.
The advantages of such organisms have been realized recently in a number of
different computer contexts, and the examples of worm programs,
self-replicating Trojan horses and viruses are introduced and discussed.
.bp 1
.ls2
.sh "Introduction"
.pp
What does it mean for a machine to be autonomous?
Has any progress been made towards autonomous machines since Grey Walter's
famous \fIM.\ Speculatrix\fR\u1\d (Walter, 1953)?
.[
Walter 1953 living brain
.]
.FN
1.\ \ for the discerning, or ``tortoise'' for the profane, as its inventor
took pains to point out.
.EF
In a narrow sense it is clear that there has, as evidenced by the evolution of
the \fIM.\ Labyrinthea\fR species (of which Claude Shannon constructed an
early example) into the fleet-footed trial-and-error goal-seeking
devices seen in successive generations of the IEEE Micromice
competition.
However, these devices have a predictable course and a predestined end,
providing an excellent example of the old argument against artificial
intelligence that ``reliable computers do only what they are instructed to
do''.
In this paper we seek autonomy in some deeper sense.
.pp
It is not surprising that dictionary definitions of autonomy concentrate on
natural systems.
According to the Oxford dictionary, it has two principal strands of meaning:
.LB "\fBAutonomy\fR  1.  \fBa\fR  "
.NI "\fBAutonomy\fR  1.  \fBa\fR  "
\fBAutonomy\fR\ \ 1.\ \ Of a state, institution, etc
.NI "\fBa\fR  "
\fBa\fR\ \ The right of self-government, of making its own laws and
administering its own affairs
.NI "\fBb\fR  "
\fBb\fR\ \ Liberty to follow one's will, personal freedom
.NI "\fBc\fR  "
\fBc\fR\ \ Freedom (of the will): the Kantian doctrine of the Will giving
itself its own law, apart from any object willed; opposed to \fIheteronomy\fR
.NI "1.  \fBa\fR  "
2.\ \ \fIBiol.\fR  autonomous condition
.NI "\fBa\fR  "
\fBa\fR\ \ The condition of being controlled only by its own laws, and not
subject to any higher one
.NI "\fBb\fR  "
\fBb\fR\ \ Organic independence
.LE "\fBAutonomy\fR  1.  \fBa\fR  "
Our interest here lies in practical aspects of autonomy as opposed to
philosophical ones.
Consequently we will steer clear of the debate on free will and what it means
for machines, simply noting in passing that some dismiss the problem out of
hand.
For instance, Minsky (1961) quotes with approval McCulloch (1954) that our
\fIfreedom of will\fR ``presumably means no more than that we can distinguish
between what we intend (ie our \fIplan\fR), and some intervention in our
action''\u2\d.
.FN
2.\ \ This seems to grant free will to a Micromouse which, having mapped the
maze, is following its plan the second time round when it finds a new
obstacle!
.EF
.[
Minsky 1961 steps toward artificial intelligence
.]
.[
McCulloch 1954
.]
We also refrain from the potentially theological considerations of what is
meant by ``higher'' laws in the second part.
.pp
How can we interpret what is left of the definition?
In terms of modern AI, the first meaning can best be read as
self-government through goal-seeking behavior,
setting one's own goals, and choosing which way to pursue them.
The second meaning, organic independence, has been the subject of major debate
in the biological and system-theoretic community around the concepts of
``homeostasis'' and, more recently, ``autopoiesis''.
.pp
Our search in this paper will pursue these strands separately.
Goals and plans have received much attention in AI, both from the point of
view of understanding (or at least explaining) stories involving human goals
and how they can be achieved or frustrated, and in purely artificial systems
which learn by discovery.
Biologists and psychologists have studied goal-seeking behavior in people,
and come to conclusions which seem to indicate remarkable similarities with
the approach taken by current AI systems to setting and pursuing goals.
On the other side of the coin, there are strong arguments that these
similarities should be viewed with a good deal of suspicion.
.pp
The second strand of meaning, organic independence, has not been contemplated
explicitly in mainstream computer science.
There have been a number of well-known developments on the periphery of
the subject which do involve self-replicating organisms.
Examples include games such as ``life'' (Berlekamp \fIet al\fR, 1982) and
``core wars'' (Dewdney, 1984), as well as
cellular (eg Codd, 1968), self-reproducing (eg von Neumann, 1966),
and evolutionary (eg Fogel \fIet al\fR, 1966) automata.
.[
Dewdney 1984
.]
.[
Berlekamp Conway Guy 1982
.]
.[
Codd 1968 cellular automata
.]
.[
von Neumann 1966 self-reproducing automata
.]
.[
Fogel Owens Walsh 1966
.]
However, these seem artificial and contrived examples of autonomy.
In contrast, some autonomous systems have recently arisen naturally in
computer software.
We examine the system-theoretic idea of ``autopoiesis'' and then look at these
software developments in this context.
.sh "Goal-seeking \(em artificial and natural"
.pp
In a discussion of robots and emotions, Sloman and Croucher (1981) note that
many people deny that machines could ever be said to have their own goals.
``Machines hitherto familiar to us either are not goal-directed at all
(clocks, etc) or else, like current game-playing computer programs,
have a simple hierarchical set of goals, with the highest-level goal put there
by a programmer''.
.[
Sloman Croucher 1981 robots emotions
.]
They postulate that robots will need \fImotive generators\fR to allow them
to develop a sufficiently rich structure of goals; unfortunately they do not
say how such generators might work.
To exemplify how goals are used in existing AI programs, we will briefly
review two lines of current research.
.rh "Examples of artificial goal-seeking."
Those working on conceptual dependency in natural language understanding have
long recognized that stories cannot be understood without knowing about the
goal-seeking nature of the actors involved.
Schank & Abelson (1977) present a taxonomy of human goals, noting that
different attempts at classification present a confusing array of partially
overlapping constructs and suggesting that some future researcher might
succeed in bringing order out of the chaos using methods such as cluster
analysis.
.[
Schank Abelson 1977
.]
They postulate the following seven goal forms:
.LB
.NP
Satisfaction goal \(em a recurring strong biological need
.br
Examples:  \fIhunger\fR, \fIsex\fR, \fIsleep\fR
.NP
Enjoyment goal \(em an activity which is optionally pursued for enjoyment or
relaxation
.br
Examples:  \fItravel\fR, \fIentertainment\fR, \fIexercise\fR
(in addition, the activities implied by some satisfaction goals may
alternatively be pursued primarily for enjoyment)
.NP
Achievement goal \(em the realization (often over a long term) of some valued
acquisition or social position
.br
Examples:  \fIpossessions\fR, \fIgood job\fR, \fIsocial relationships\fR
.NP
Preservation goal \(em preserving or improving the health, safety, or good
condition of people, position, or property
.br
Examples:  \fIhealth\fR, \fIgood eyesight\fR
.NP
Crisis goal \(em a special class of preservation goal set up to handle serious
and imminent threats.
.br
Examples:  \fIfire\fR, \fIstorm\fR
.NP
Instrumental goal \(em occurs in the service of any of the above goals to
realize a precondition
.br
Examples:  \fIget babysitter\fR
.NP
Delta goal \(em similar to instrumental goal except that general planning
operations instead of scripts are involved in its pursuit
.br
Examples:  \fIknow\fR, \fIgain-proximity\fR, \fIgain-control\fR.
.LE
The first three involve striving for desired states;
the next two, avoidance of undesired states;
the last two, intermediate subgoals for any of the other five forms.
Programs developed within this framework ``understand'' (ie can answer
questions about) stories involving human actors with these goals
(eg Wilensky, 1983; Dyer, 1983).
.[
Wilensky 1983 Planning and understanding
.]
.[
Dyer 1983 in-depth understanding MIT Press
.]
For example, if John goes to a restaurant it is likely that he is attempting
to fulfill either a satisfaction goal or an entertainment goal (or both).
Instrumental or delta goals will be interpreted in the context of the
prevailing high-level goal.
If John takes a cab to the restaurant it will be understood that he is
achieving the delta goal \fIgain-proximity\fR in service of his satisfaction
or entertainment goal.
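.pp
To make the flavor of this goal bookkeeping concrete, the following fragment
sketches it in Python (purely for illustration; the class, function, and goal
names are invented here and are not those of any actual story-understanding
program): the reader posits a high-level goal for an actor and then
interprets an instrumental action as a delta goal in its service.
.sp
.ls1
.nf
.in +4n
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    form: str                                # eg "satisfaction", "delta"
    name: str                                # eg "hunger", "gain-proximity"
    in_service_of: Optional["Goal"] = None   # link for delta/instrumental goals

def interpret(goals, action):
    # A caricature of goal-based interpretation: each action is explained
    # either by positing a high-level goal for the actor, or as a delta
    # goal in service of the goal currently in force.
    if action == "go to restaurant":
        goals.append(Goal("satisfaction", "hunger"))
    elif action == "take cab":
        goals.append(Goal("delta", "gain-proximity",
                          goals[-1] if goals else None))

john = []                                    # John's goal structure so far
interpret(john, "go to restaurant")
interpret(john, "take cab")
for g in john:
    if g.in_service_of:
        print(f"{g.form}: {g.name} (serves {g.in_service_of.name})")
    else:
        print(f"{g.form}: {g.name}")
.in -4n
.fi
.ls2
.sp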
.pp
Our second example of goal usage in contemporary AI is Lenat's ``discovery''
program \s-2AM\s+2, and its successor \s-2EURISKO\s+2 (Davis & Lenat, 1982;
Lenat \fIet al\fR, 1982).
.[
Davis Lenat 1982
.]
.[
Lenat Sutherland Gibbons 1982
.]
These pursue interesting lines of research in the domains of
elementary mathematics and VLSI design heuristics, respectively.
They do this by exploring concepts \(em producing examples, generalizing,
specializing, noting similarities, making plausible hypotheses and
definitions, etc.
The programs evaluate these discoveries for utility and ``interestingness,''
and add them to the vocabulary of concepts.
They essentially perform exploration in an enormous search space, governed
by heuristics which evaluate the results and suggest fruitful avenues for
future work.
.pp
Each concept in these systems is represented by a frame-like data structure
with dozens of different facets or slots.
The types of facets in \s-2AM\s+2 include
.LB
.NP
examples
.NP
definitions
.NP
generalizations
.NP
domain/range
.NP
analogies
.NP
interestingness.
.LE
Heuristics are organized around the facets.
For example, the following strategy fits into the \fIexamples\fR facet
of the \fIpredicate\fR concept:  \c
.sp
.BQ
If, empirically, 10 times as many elements
.ul
fail
some predicate P as
.ul
satisfy
it, then some
.ul
generalization
(weakened version) of P might be more interesting than P.
.FQ
.sp
\s-2AM\s+2 considers this suggestion after trying to fill in examples of each
predicate.
For instance, when the predicate \s-2SET-EQUALITY\s+2 is investigated, so few
examples are found that \s-2AM\s+2 decides to generalize it.
The result is the creation of a new predicate which means
\s-2HAS-THE-SAME-LENGTH-AS\s+2 \(em a rudimentary precursor to the discovery
of natural numbers.
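.pp
The shape of such a facet-based heuristic can be sketched as follows (in
Python; the frame representation and names are invented for illustration and
bear no relation to \s-2AM\s+2's actual Lisp implementation): a predicate
concept whose \fIexamples\fR facet turns out to be nearly empty triggers the
creation of a weaker, generalized concept.
.sp
.ls1
.nf
.in +4n
import itertools

# A concept is a frame: a dictionary of facets (definition, examples, ...).
def make_predicate(name, definition):
    return {"name": name, "definition": definition,
            "examples": [], "generalizations": []}

def fill_examples(concept, candidates):
    # Record the candidate argument pairs that satisfy the predicate.
    for x, y in candidates:
        if concept["definition"](x, y):
            concept["examples"].append((x, y))

def maybe_generalize(concept, tried, weaker):
    # Heuristic: if ten times as many candidates fail the predicate as
    # satisfy it, a weakened (generalized) version may be more interesting.
    if tried - len(concept["examples"]) >= 10 * len(concept["examples"]):
        concept["generalizations"].append(weaker["name"])
        return weaker
    return None

sets = [frozenset(range(i)) for i in range(12)]    # twelve distinct sets
pairs = list(itertools.product(sets, repeat=2))    # 144 candidate pairs

set_equality = make_predicate("SET-EQUALITY", lambda a, b: a == b)
fill_examples(set_equality, pairs)
new_concept = maybe_generalize(
    set_equality, len(pairs),
    make_predicate("HAS-THE-SAME-LENGTH-AS", lambda a, b: len(a) == len(b)))
print(len(set_equality["examples"]), "of", len(pairs), "satisfy SET-EQUALITY")
print("generalized to:", new_concept["name"] if new_concept else "(nothing)")
.in -4n
.fi
.ls2
.sp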
.pp
In an unusual and insightful retrospective on these programs,
Lenat & Brown (1984) report that the exploration consists of (mere?) syntactic
mutation of programs expressed in certain representations.
.[
Lenat Brown 1984
.]
The key element of the approach is to find representations with a high
density of interesting concepts so that many of the random mutations will be
worth exploring.
If the representation is not well matched to the problem domain, most
explorations will be fruitless and the method will fail.
.pp
While the conceptual dependency research reviewed above is concerned with
understanding the goals of actors in stories given to a program, the approach
taken seems equally suited to the construction of artificial goal-oriented
systems.
If a program could really understand or empathize with the motives of people,
it seems a small technical step to turn it around to create an autonomous
simulation with the same motivational structure.
Indeed, one application of the conceptual dependency framework is in
\fIgenerating\fR coherent stories by inventing goals for the actors, choosing
appropriate plans, and simulating the frustration or achievement of the goals
(Meehan, 1977).
.[
Meehan 1977 talespin
.]
The ``learning by discovery'' research shows how plausible subgoals can be
generated from an overall goal of maximizing the interestingness of
the concepts being developed.
It is worth noting that Andreae (1977) chose a similar idea, ``novelty,''
as the driving force behind a very different learning system.
.[
Andreae 1977 thinking with the teachable machine
.]
Random mutation in an appropriate representation seems to be the closest we
have come so far to the \fImotive generator\fR mentioned at the beginning of
this section.
.rh "The mechanism and psychology of natural goal-seeking."
We now turn to natural systems.
The objection to the above-described use of goals in natural language
understanders and discovery programs is that they are just programmed in.
The computer only does what it is told.
In the first case, it is told a classification of goals and given
information about their interrelationships, suitable plans for achieving them,
and so on.
In the second case it is told to maximize interestingness by random
mutation.
On the surface, these seem to be a pale reflection of the autonomous
self-government of natural systems.
But let us now look at how goals seem to arise in natural systems.
.pp
The eminent British anatomist J.Z.\ Young describes the modern biologist's
highly mechanistic view of the basic needs of animals.
.[
Young 1978 programs of the brain
.]
``Biologists no longer believe that living depends upon some special
non-physical agency or spirit,'' he avers (Young, 1978, p.\ 13), and goes on
to claim that we now understand how it comes about that organisms behave as if
all their actions were directed towards an aim or goal\u3\d.
.FN
3.\ \ Others apparently tend to be more reticent \(em
``it has been curiously unfashionable among biologists to call attention to
this characteristic of living things'' (Young, 1978, p.\ 16).
.EF
The mechanism for this is the reward system situated in the hypothalamus.
For example, the cells of the hypothalamus ensure that the right amounts of
food and drink are taken and the right amount is incorporated to allow the
body to grow to its proper size.
These hypothalamic centers stimulate the need for what is lacking, be it
food, sex, or sleep, and they indicate satisfaction when enough
has been obtained.
Moreover, the mechanism has been traced to a startling level of detail.
For example, Young describes how hypothalamic cells can be
identified which regulate the amount of water in the body.
.sp
.BQ
The setting of the level of their sensitivity to salt provides the
instruction that determines the quantity of water that is held in the body.
We can say that the properties of these cells are physical symbols
``representing'' the required water content.
They do this in fact by actually swelling or shrinking when the salt
concentration of the blood changes.
.FQ "Young, 1978, p.\ 135"
.sp
Food intake is regulated in the same way.
The hypothalamus ensures propagation of the species by directing reproductive
behavior and, along with neighboring regions of the brain, attends to the goal
of self-preservation by allowing us to defend ourselves if attacked.
.pp
Needless to say, experimental evidence for this is obtained primarily from
animals.
Do people's goals differ?
The humanistic psychologist Abraham Maslow propounded a theory of human
motivation that distinguishes between different kinds of needs (Maslow, 1954).
.[
Maslow 1954
.]
\fIBasic needs\fR include hunger, affection, security, love, and self-esteem.
\fIMetaneeds\fR include justice, goodness, beauty, order, and unity.
Basic needs are arranged in a hierarchical order so that some are stronger
than others (eg security over love); but all are generally stronger than
metaneeds.
The metaneeds have equal value and no hierarchy, and one can be substituted
for another.
Like the basic needs, the metaneeds are inherent in man, and when they are not
fulfilled, the person may become psychologically sick (suffering, for example,
from alienation, anguish, apathy, or cynicism).
.pp
In his later writing, Maslow (1968) talks of a ``single ultimate value for
mankind, a far goal towards which all men strive''.
Although going under different names (Maslow favors \fIself-actualization\fR),
it amounts to ``realizing the potentialities of the person, that is to say,
becoming fully human, everything that the person \fIcan\fR become''.
However, the person does not know this.
As far as he is concerned, the individual needs are the driving force.
He does not know in advance that he will strive on after the current need
has been satisfied.
Maslow produced the list of personality characteristics of the psychologically
healthy person shown in Table\ 1.
.RF
.in 0.5i
.ll -0.5i
.nr x0 \n(.l-\n(.i
\l'\n(x0u'
.in +\w'\(bu 'u
.fi
.NP
They are realistically oriented.
.NP
They accept themselves, other people, and the natural world for what they are.
.NP
They have a great deal of spontaneity.
.NP
They are problem-centered rather than self-centered.
.NP
They have an air of detachment and a need for privacy.
.NP
They are autonomous and independent.
.NP
Their appreciation of people and things is fresh rather than stereotyped.
.NP
Most of them have had profound mystical or spiritual experiences although not
necessarily religious in character.
.NP
They identify with mankind.
.NP
Their intimate relationships with a few specially loved people tend to be
profound and deeply emotional rather than superficial.
.NP
Their values and attitudes are democratic.
.NP
They do not confuse means with ends.
.NP
Their sense of humor is philosophical rather than hostile.
.NP
They have a great fund of creativeness.
.NP
They resist conformity to the culture.
.NP
They transcend the environment rather than just coping with it.
.nf
.in -\w'\(bu 'u
\l'\n(x0u'
.ll +1i
.in 0
.FE "Table 1: Characteristics of self-actualized persons (Maslow, 1954)"
.pp
Maslow's \fIbasic needs\fR seem to correspond reasonably closely with those
identified by conceptual dependency theory.
Moreover, there is some similarity to the goals mentioned by Young (1978),
which, as we have seen, are thought to be ``programmed in'' to the brain in an
astonishingly literal sense.
Consequently it is not clear how programs in which these goals are embedded
differ in principle from goal-oriented systems in nature.
The \fImetaneeds\fR are more remote from current computer systems,
although there have been shallow attempts to simulate paranoia in the
\s-2PARRY\s+2 system (Colby, 1973).
.[
Colby 1973 simulations of belief systems
.]
It is intriguing to read Table\ 1 in the context of self-actualized computers!
Moreover, one marvels at the similarity between the single-highest-goal model
of people in terms of self-actualization, and the architecture for discovery
programs sketched earlier in terms of a quest for ``interestingness''.
.rh "The sceptical view."
The philosopher John Haugeland addressed the problem of natural language
understanding and summed up his viewpoint in the memorable aphorism,``the
trouble with Artificial Intelligence is that computers don't give a damn''
(Haugeland, 1979).
.[
Haugeland 1979 understanding natural language
.]
He identified four different ways in which brief segments of text cannot be
understood ``in isolation'', which he called four \fIholisms\fR.
Two of these, concerning \fIcommon-sense knowledge\fR and
\fIsituational knowledge\fR,
are the subject of intensive research in natural language analysis systems.
Another, the \fIholism of intentional interpretation\fR,
expresses the requirement that utterances and descriptions ``make sense'' and
seems to be at least partially addressed by the goal/plan orientation of some
natural language systems.
It is the fourth, called \fIexistential holism\fR, that is most germane to the
present topic.
Haugeland argues that one must have actually \fIexperienced\fR emotions (like
embarrassment, relief, guilt, shame) to understand
``the meaning of text that (in a familiar sense) \fIhas\fR any meaning''.
One can only experience emotions in the context of one's own self-image.
Consequently, Haugeland concludes that
``only a being that cares about who it is, as some sort of enduring whole,
can care about guilt or folly, self-respect or achievement, life or death.
And only such a being can read.''  Computers just don't give a damn.
.pp
As AI researchers have pointed out repeatedly, however, it is difficult to
give such arguments \fIoperational\fR meanings.
How could one test whether a machine has \fIexperienced\fR an emotion like
embarrassment?
If it acts embarrassed, isn't that enough?
And while machines cannot yet behave convincingly as though they do experience
emotions, it is not clear that fundamental obstacles stand in the way of
further and continued progress.
There seems to be no reason in principle why a machine cannot be given a
self-image.
.pp
This controversy has raged back and forth for decades, a recent resurgence
being Searle's (1980) paper on the Chinese room, and the 28 responses which
were published with it.
.[
Searle 1980 minds programs
.]
Searle considered the following \fIgedanken\fP experiment.
Suppose someone, who knows no Chinese (or any related language), is locked in
a room and given three large batches of Chinese writing, together with a
set of rules in English which allow her to correlate the apparently
meaningless squiggles in the three batches and to produce certain sorts of
shapes in response to certain sorts of shapes which may appear in the third
batch.
Unknown to her, the experimenters call the first batch a ``script'', the
second batch a ``story'', the third batch ``questions'', and the symbols
she produces ``answers''.
We will call the English rules a ``program'', and of course the intention is
that, when executed, sensible and appropriate Chinese answers, based on the
Chinese script, are generated to the Chinese questions about the Chinese
story.
But the subject, with no knowledge of Chinese, does not see them that way.
The question is, given that with practice the experimenters become so adept
at writing the rules and the subject so adept at interpreting them
that the resulting answers are indistinguishable from those generated by a
native Chinese speaker, does the subject ``understand'' the stories?
To summarize a large and complex debate in a few words, Searle says no; while
many AI researchers say yes, or at least that the subject-plus-rules system
understands.
.pp
Searle states his thesis succinctly:  ``such intentionality as computers
appear to have is solely in the minds of those who program them and those who
use them, those who send in the input and those who interpret the output''.
And the antithesis could be caricatured as
``maybe, but does it \fImatter?\fR''.
Those who find the debate frustrating can always, with
Sloman & Croucher (1981), finesse the issue:  \c
``Ultimately, the decision whether to say such machines have motives is a
\fImoral\fR decision, concerned with how we ought to treat them''.
.[
Sloman Croucher 1981 robots emotions
.]
.sh "Autopoiesis \(em natural and artificial"
.pp
Autonomy is a striking feature of biological systems.
Not surprisingly, some biologists have made strenuous attempts to articulate
what it means to them; to pin it down, formalize and study it in a
system-theoretic context.
However, this work is obscure and difficult to assess in terms of its
predictive power (which must be the fundamental test of any theory).
Even as a descriptive theory its use is surrounded by controversy.
Consequently this section attempts to give the flavor of the endeavor, relying
heavily on quotations from the major participants in the research, and goes on
to describe some practical computer systems which appear to satisfy the
criteria biologists have identified for autonomy.
.rh "Homeostasis."
People have long expressed wonder at how a living organism maintains its
identity in the face of continuous change.
.sp
.BQ
In an open system, such as our bodies represent, compounded of unstable
material and subjected continuously to disturbing conditions, constancy is
in itself evidence that agencies are acting or ready to act, to maintain this
constancy.
.FQ "Cannon, 1932"
.sp
.[
Cannon 1932 wisdom of the body
.]
Following Cannon, Ashby (1960) developed the idea of ``homeostasis'' to
account for this remarkable ability to preserve stability under conditions of
change.
.[
Ashby 1960 design for a brain
.]
The word has now found its way into North American dictionaries, eg Webster's:
.sp
.BQ
Homeostasis is the tendency to maintain, or the maintenance of, normal,
internal stability in an organism by coordinated responses of the organ
systems that automatically compensate for environmental changes.
.FQ
.sp
The basis for homeostasis was adaptation by the organism.
When change occurred, the organism adapted to it and thus preserved its
constancy.
.sp
.BQ
A form of behavior is \fIadaptive\fR if it maintains the essential variables
within physiological limits.
.FQ "Ashby, 1960, p. 58"
.sp
The ``essential variables'' are closely related to survival and linked
together dynamically so that marked changes in any one soon lead to changes in
the others.
Examples are pulse rate, blood pressure, body temperature, number of
bacteria in the tissue, etc.
Ashby went so far as to construct an artifact, the ``Homeostat'', which
exhibits this kind of ultrastable equilibrium.
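.pp
Ashby's notion of ultrastability is easily caricatured (a Python sketch,
purely illustrative; the toy dynamics and numerical limits are invented):
whenever the essential variable strays outside its limits, the system makes
a random step-change to its parameters and carries on, until by chance a
setting is found that restores equilibrium.
.sp
.ls1
.nf
.in +4n
import random

LOW, HIGH = -1.0, 1.0      # "physiological" limits on the essential variable

def step(x, gain):
    # One step of a trivially simple process: a constant disturbance pushes
    # x away from zero, while the current parameter setting pulls it back.
    return x + 0.3 - gain * x

x, gain = 0.0, -0.5        # start with a destabilizing parameter setting
for t in range(200):
    x = step(x, gain)
    if not LOW <= x <= HIGH:
        # Ultrastability: when the essential variable leaves its limits,
        # make a random step-change to the parameters and carry on.
        gain = random.uniform(-1.0, 1.0)
        x = max(LOW, min(HIGH, x))
print("settled with gain", round(gain, 2), "and x", round(x, 2))
.in -4n
.fi
.ls2
.sp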
.pp
Homeostasis emphasizes the stability of biological systems under external
change.
Recently, a concept called ``autopoiesis'' has been identified, which
captures the essence of biological autonomy in the sense of stability or
preservation of identity under \fIinternal\fR change
(Maturana, 1975; Maturana & Varela, 1980; Varela, 1979; Zeleny, 1981).
.[
Maturana 1975 organization of the living
.]
.[
Maturana Varela 1980 autopoiesis
.]
.[
Varela 1979 biological autonomy
.]
.[
Zeleny 1981 Editor Autopoiesis  a theory of living organization
.]
This has aroused considerable interest, and controversy, in the system
theoretic research community.
.rh "Autopoiesis."
The neologism ``autopoiesis'' means literally ``self-production'', and a
striking example occurs in living cells.
These complex systems produce and synthesize macromolecules of proteins,
lipids, and enzymes, and consist of about $10 sup 5$ macromolecules.
The entire population of a given cell is renewed about $10 sup 4$ times
during its lifetime (Zeleny, 1981a).
.[
%A Zeleny, M.
%D 1981a
%T What is autopoiesis?
%E M.Zeleny
%B Autopoiesis:  a theory of living organization
%I North Holland
%C New York
%P 4-17
.]
Despite this turnover of matter, the cell retains its distinctiveness and
cohesiveness \(em in short, its \fIautonomy\fR.
This maintenance of unity and identity of the whole, despite the fact that
all the while components are being created and destroyed, is called
``autopoiesis''.
A concise definition is
.sp
.BQ
Autopoiesis is the capability of living systems to develop and maintain
their own organization.
The organization that is developed and maintained is identical to that
performing the development and maintenance.
.FQ "Andrew, 1981, p. 156"
.sp
.[
Andrew 1981
.]
Other authors (eg Maturana & Varela, 1980; Zeleny, 1981a) add a corollary:
.sp
.BQ
a topological boundary emerges as a result of the processes [of development
and maintenance].
.FQ "Zeleny, 1981a, p. 6"
.sp
This emphasizes the train of thought ``from self-production to identity''
that seems to underlie much of the autopoietic literature.
.pp
Operating as a system which produces or renews its own components, an
autopoietic system continuously regenerates its own organization.
It does this in an endless turnover of components and despite inevitable
perturbations.
Therefore autopoiesis is a form of homeostasis which has its own
organization as the fundamental variable that remains constant.
The principal fascination of the concept lies in the self-reference it
implies.
This has stimulated a theoretical formulation of the notion of circularity or
self-reference in Varela's (1975) extension of Brown's
``calculus of distinctions'' (Brown, 1969).
.[
%A Varela, F.J.
%D 1975
%K *
%T A calculus for self-reference
%J Int J General Systems
%V 2
%N 1
%P 5-24
.]
.[
Brown 1969 Laws of Form
.]
Along with other work on self-reference (eg Hofstadter, 1979), this
has an esoteric and obscure, almost mystical, quality.
.[
Hofstadter 1979 Godel Escher Bach
.]
While it may yet form the basis of a profound paradigm shift in systems
science, it is currently surrounded by controversy and its potential
contribution is quite unclear (Gaines, 1981).
.[
Gaines 1981 Autopoiesis some questions
.]
Indeed, it has been noted that an
``unusual degree of parochialism, defensiveness, and quasi-theological
dogmatism has arisen around autopoiesis'' (Jantsch, 1981).
.[
Jantsch 1981 autopoiesis
.]
.pp
There has been considerable discussion of the relation between autopoiesis and
concepts such as purpose and information.
Varela (1979) claims that
``notions [of teleology and information] are unnecessary for the
\fIdefinition\fR of the living organization, and that they belong to a
descriptive domain distinct from and independent of the domain in which the
living system's \fIoperations\fR are described'' (p.\ 63/64).
In other words, nature is not about goals and information; we observers invent
such concepts to help classify what we see.
Maturana (1975) is more outspoken:  \c
``descriptions in terms of information transfer, coding and computations of
adequate states are fallacious because they only reflect the observer's domain
of purposeful design and not the dynamics of the system as a state-determined
system'';
.[
Maturana 1975 organization of the living
.]
presumably goals are included too in the list of proscribed terms.
Some have protested strongly against this hard-line view \(em which is
particularly provocative because of its use of the word ``fallacious'' \(em
and attempted to reconcile it with ``the fact that the behavior of people and
animals is very readily and satisfactorily described in terms of goals and
attempts to achieve them'' (Andrew, 1981, p. 158).
In his more recent work Varela (1981) diverged further from the hard-line
view, explaining that he had intended to criticize only ``the \fInaive\fR use
of information and purpose as notions that can enter into the definition of
a system on the same basis as material interactions'' [his emphasis].
.[
Varela 1981 describing the logic of the living
.]
He concluded that ``autopoiesis, as an operational explanation, is not quite
sufficient for a full understanding of the phenomenology of the living,
and that it needs a carefully constructed complementary symbolic
explanation''.
For Varela, a symbolic explanation is one that is based on the notions of
information and purpose.
It is clear, though, that while some allow that autopoiesis can \fIcoexist\fR
with purposive interpretations, it will not \fIcontribute\fR to them.
.pp
Is autopoiesis restricted to \fIliving\fR systems?
Some authors find it attractive to extend the notion to the level of society
and socio-political evolution (eg Beer, 1980; Zeleny, 1977).
.[
Beer 1980
.]
.[
Zeleny 1977
.]
Others (eg Varela, 1981) stress the renewal of components through material
self-production and restrict autopoiesis to chemical processes.
Without self-production in a material sense, the support for the corollary
above becomes unclear, and consequently the whole relevance of autopoiesis
to identity and autonomy comes under question.
.rh "Artificial autopoiesis."
Although one can point to computer simulations of very simple autopoietic
systems (eg Varela \fIet al\fR, 1974; Zeleny, 1978; Uribe, 1981), there seems
to have been little study of artificially autopoietic systems in their own
right.
.[
Varela Maturana Uribe 1974 autopoiesis characterization and model
.]
.[
Zeleny 1978 experiments in self-organization of complexity
.]
However there are examples of computer systems which are autopoietic and
which have arisen ``naturally'', that is to say, were developed for other
purposes and not as illustrations of autopoiesis.
It is probably true that in each case the developers were entirely unaware
of the concept of autopoiesis and the interest surrounding it in system
theory circles.
.pp
.ul
Worm programs
were an experiment in distributed computation (Shoch & Hupp, 1982).
.[
Shoch Hupp 1982
.]
The problem they addressed was to utilize idle time on a network of
interconnected personal computers without any impact on normal use.
It was necessary to be able to redeploy or unplug any machine at any time
without warning.
Moreover, in order to make the system robust to any kind of failure,
power-down or ``I am dying'' messages were not employed in the protocol.
A ``worm'' comprises multiple ``segments'', each running on a different
machine.
Segments of the worm have the ability to replicate themselves in idle
machines.
All segments remain in communication with each other, thus preserving the
worm's identity and distinguishing it from a collection of independent
processes; however, all segments are peers and none is in overall control.
To prevent uncontrolled reproduction, a certain number of segments is
pre-specified as the target size of the worm.
When a segment is corrupted or killed, its peers notice the fact because it
fails to make its periodic ``I am alive'' report.
They then proceed to search for an idle machine and occupy it with another
segment.
Care is taken to coordinate this activity so that only one new segment is
created.
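.pp
The maintenance mechanism can be caricatured as follows (a Python sketch,
purely illustrative rather than Shoch & Hupp's implementation): every
segment periodically reports that it is alive, and whenever the worm falls
below its target size the survivors replicate onto an idle machine.
.sp
.ls1
.nf
.in +4n
import random

TARGET_SIZE = 4
machines = [f"machine-{i}" for i in range(10)]     # a pool of workstations

class Segment:
    def __init__(self, machine):
        self.machine = machine
        self.alive = True

    def heartbeat(self):
        # In a real worm this is a periodic "I am alive" message to peers.
        return self.alive

def maintain(worm):
    # Worm maintenance: discard segments whose heartbeat is missing, then,
    # while the worm is below its target size, occupy idle machines.
    worm[:] = [s for s in worm if s.heartbeat()]
    busy = {s.machine for s in worm}
    idle = [m for m in machines if m not in busy]
    while len(worm) < TARGET_SIZE and idle:
        new_home = idle.pop(0)
        worm.append(Segment(new_home))             # replicate a segment there
        print("replicated a new segment onto", new_home)

worm = [Segment(m) for m in machines[:TARGET_SIZE]]
random.choice(worm).alive = False                  # someone unplugs a machine
maintain(worm)
print("the worm is back to", len(worm), "segments")
.in -4n
.fi
.ls2
.sp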
.pp
There are two logical components to a worm.
The first is the underlying worm maintenance mechanism, which is responsible
for maintaining the worm \(em finding free machines when needed and
replicating the program for each additional segment.
The second is the application part, and several applications have been
investigated (Shoch & Hupp, 1982), such as
.LB
.NP
.ul
existential
worm that merely announces its presence on each computer it inhabits;
.NP
.ul
billboard
worm that posts a graphic message on each screen;
.NP
.ul
alarm clock
worm that implements a highly reliable alarm clock that is not based on any
particular machine;
.NP
.ul
animation
worm for undertaking lengthy computer graphics computations.
.LE
.pp
Can worms shed any light on the controversies outlined above which surround
the concept of autopoiesis?
Firstly, although they are not living and do not create their own material in
any chemical sense, they are certainly autonomous, autopoietic systems.
Shoch & Hupp relate how
.sp
.BQ
a small worm was left running one night, just exercising the worm control
mechanism and using a small number of machines.
When we returned the next morning, we found dozens of machines dead,
apparently crashed.
If one restarted the regular memory diagnostic, it would run very briefly,
then be seized by the worm.
The worm would quickly load its program into this new segment; the program
would start to run and promptly crash, leaving the worm incomplete \(em and
still hungrily looking for new segments.
.FQ
.sp
John Brunner's science fiction story \fIThe shockwave rider\fR presaged just
such an uncontrollable worm.
Of course, extermination is always possible in principle by switching off or
simultaneously rebooting every machine on the network, although this may not
be an option in practice.
Secondly, in the light of our earlier discussion of teleology and autopoiesis,
it is interesting to find the clear separation of the maintenance mechanism
\(em the autopoietic part \(em from the application code \(em the
``purposive'' part \(em of the worm.
It can be viewed quite separately as an autopoietic or an application
(teleological?) system.
.pp
.ul
Self-replicating Trojan horses.
In his Turing Award lecture, Thompson (1984) raised the specter of
ineradicable programs residing within a computer system \(em ineradicable in
the sense that although they are absent from all source code, they can survive
recompilation and reinstallation of the entire system!
.[
Thompson 1984 reflections trust
.]
Most people's reaction is ``impossible! \(em it must be a simple trick'',
but Thompson showed a trick that is extremely subtle and sophisticated, and
effectively impossible to detect or counter.
The natural application of such a device is to compromise a system's security,
and Thompson's conclusion was that there can be no technical substitute for
natural trust.
From a system-theoretic viewpoint, however, this is an interesting example
of how a parasite can survive despite all attempts by its host to eliminate
it.
.pp
To understand what is involved in creating such an organism, consider first
self-replicating programs.
When compiled and executed, these print out themselves (say in source code
form); no more and no less.
Although at first sight they seem to violate some fundamental intuitive
principle of information \(em that to print oneself one needs
\fIboth\fR ``oneself'' \fIand, in addition\fR, something to print it out \(em
this is not so.
Programmers have long amused themselves with self-replicating programs, often
setting the challenge of discovering the shortest such program in any given
computer language.
Moreover, it is easy to construct a self-replicating program that includes
any given piece of text.
Such a program divides naturally into the self-replicating part and the
part that is to be reproduced, in much the same way that a worm program
separates the worm maintenance mechanism from the application part.
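.pp
For the curious, here is one such program expressed in Python (far from the
shortest possible); when run, it prints exactly its own source text.
A variant that carries a payload simply stores the extra text in a second
string and prints it alongside, mirroring the separation of replication
mechanism from reproduced part.
.sp
.ls1
.nf
.in +4n
s = 's = %r; print(s %% s)'; print(s % s)
.in -4n
.fi
.ls2
.sp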
.pp
View a self-replicating program as source code ``hiding'' in executable
binary form.
Normally when coaxed out of hiding it prints itself.
But imagine one embedded in a language compiler, which when activated
interpolates itself into the input stream for the compiler, causing itself
to be compiled and inserted into the binary program being produced.
Now it has transferred itself from the executable version of the compiler
to the executable version of the program being compiled \(em without ever
appearing in source form.
Now imagine that the program being compiled is itself the compiler \(em a
virgin version, uncorrupted in any way.
Then the self-replicating code transfers itself from the old version of
the compiler to the new version, without appearing in source form.
It remains only for the code to detect when it is the compiler that is being
recompiled, and not to interfere with other programs.
This is well known as a standard Trojan Horse technique.
The result is a bug that lives only in the compiled version and replicates
itself whenever the compiler is recompiled.
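.pp
The logic can be simulated in a few lines (a deliberately toy Python sketch,
not Thompson's actual compiler hack: ``compilation'' is modelled here as
building a function from source text, and the payload and program names are
invented).  The point it illustrates is that recompiling the compiler from
perfectly clean source, using the one infected binary, yields another
infected binary.
.sp
.ls1
.nf
.in +4n
CLEAN_COMPILER_SOURCE = "the uncorrupted source text of the compiler"

def make_compiler(infected):
    # A compiler "binary" is modelled as a function from source text to a
    # new binary.  An infected binary does two extra things: it plants a
    # back door whenever it compiles the login program, and it passes the
    # infection on whenever it notices that it is compiling the compiler.
    def compile_fn(source):
        if source == CLEAN_COMPILER_SOURCE:
            return make_compiler(infected)         # reinfect the new compiler
        payload = "back door" if infected and "login" in source else None
        return {"program": source, "payload": payload}
    return compile_fn

old_compiler = make_compiler(infected=True)        # the single corrupted binary
new_compiler = old_compiler(CLEAN_COMPILER_SOURCE) # recompile from clean source
binary = new_compiler("source text of the login program")
print("freshly compiled login program carries:", binary["payload"])
.in -4n
.fi
.ls2
.sp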
.pp
If autopoiesis is the ability of a system to develop and maintain its own
organization, the self-replicating Trojan horse seems to be a remarkable
example of it.
It is an organism that is extremely difficult to destroy, even when one
has detected its presence.
However, it cannot be autonomous, but rather survives as a parasite on a
language compiler.
It does not have to be a compiler:  any program that handles other programs
(including itself) will do\u4\d.
.FN
4.\ \ As Thompson (1984) remarks, a well-installed microcode bug will be
almost impossible to detect.
.EF
Although presented as a pathological example of computer use, it is possible
to imagine non-destructive applications \(em such as permanently identifying
authorship or ownership of installed software even though the source code is
provided.
In the natural world, parasites can have symbiotic relationships with their
hosts.
It would be interesting to find analogous circumstances for self-replicating
Trojan horses, but I do not know of any \(em these examples of benevolent
use do not seem to benefit the host program directly, but rather its author or
owner.
.pp
.ul
Viruses
are perhaps less subtle but more pervasive kinds of bugs.
They spread infection in a computer system by attaching themselves to
files containing executable programs.
The virus itself is a small piece of code which gains control whenever the
host is executed, performs its viral function, and then passes control to
the host.
Generally the user is unaware that anything unusual is happening:  as far as
he is concerned, the host program executes exactly as normal\u5\d.
.FN
5.\ \ The only difference is a small startup delay which probably goes
unnoticed.
.EF
As part of its function, a virus spreads itself.
When it has control, it may attach itself to one or several other files
containing executable programs, turning them into viruses too.
Under most computer protection schemes, it has the unusual advantage of
running with the privileges of the person who invoked the host, not with
the privileges of the host program itself.
Thus it has a unique opportunity to infect other files belonging to that
person.
In an environment where people sometimes use each other's programs, this allows
it to spread rapidly throughout the system\u6\d.
.FN
6.\ \ More details of the construction of both viruses and self-replicating
Trojan horses are given by Witten (1987).
.[
Witten 1987 infiltrating open systems
.]
.EF
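.pp
The mechanism can be caricatured in a few lines (a Python sketch, purely
illustrative and harmless: ``executable files'' are modelled as in-memory
lists of instruction strings): whenever an infected program runs, the viral
prefix gains control, attaches itself to one more uninfected program, and
only then lets its host proceed as normal.
.sp
.ls1
.nf
.in +4n
# Executable "files", modelled as lists of instruction strings.
programs = {
    "editor":  ["do some editing"],
    "game":    ["play the game"],
    "payroll": ["run the payroll"],
}
VIRAL_CODE = "viral function, then infect one more program"

def run(name):
    # Execute a program.  If its first instruction is viral, the virus
    # gains control first, attaches itself to some uninfected program,
    # and only then passes control to its host as usual.
    code = programs[name]
    if code[0] == VIRAL_CODE:
        clean = [p for p, c in programs.items() if c[0] != VIRAL_CODE]
        if clean:
            programs[clean[0]].insert(0, VIRAL_CODE)
    print("running", name + ":", code[-1])

programs["editor"].insert(0, VIRAL_CODE)        # one infected program to start
for name in ["editor", "game", "payroll"]:
    run(name)
infected = sorted(p for p, c in programs.items() if c[0] == VIRAL_CODE)
print("now infected:", ", ".join(infected))
.in -4n
.fi
.ls2
.sp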
.pp
Unlike self-replicating Trojan horses, a virus can be killed by recompiling
the host.
(Of course, there is no reason why a virus should not be dispatched to install
a self-replicating Trojan horse in the compiler.)  \c
If all programs are recompiled ``simultaneously'' (ie without executing any of
them between compilations), the virus will be eradicated.
However, in a multi-user system it is extremely hard to persuade everyone to
take part in a massive recompilation \(em in the same way as it is difficult to
reboot every machine on a network simultaneously to stamp out a worm.
.pp
Viruses do not generally remain in touch with each other and therefore,
unlike worms, are not really autopoietic.
But there is no intrinsic reason why they should not be.
They provide a basic and effective means of reproduction which could be
utilized for higher-level communicating systems.
As with the other devices reviewed above, when one hears about viruses one
cannot help thinking of pathological uses.
However, there are benevolent applications.
They could assist in system maintenance by recording how often programs were
used, perhaps migrating little-used ones to slower memory devices or arranging
optimization of frequently-used programs.
Such reorganizations could take place without users being aware of it, quietly
making the overall system more efficient.
.sh "Conclusions"
.pp
We have examined two rather different directions in which autonomy can be
pursued in computer systems.
The first concerns representation and manipulation of goals.
Examination of some current AI systems shows that they do not escape the
old criticism that their goals and aspirations are merely planted there
by the programmer.
Indeed, it is not easy to see how it could be different, unless goals were
generated randomly in some sense.
Random exploration is also being investigated in current AI systems, and these
show that syntactic mutation can be an extremely powerful technique when
combined with semantically dense representations.
.pp
But according to modern biological thinking, the lower-level goals of people
and animals are also implanted in their brains in a remarkably literal sense.
Higher-level goals are not so easy to pin down.
According to one school of psychological thought they stem from a
single ``super-goal'' called self-actualization.
This is remarkably in tune with the architecture of some prominent discovery
programs in AI which strive to maximize the ``interestingness'' of the
concepts being developed.
While one may be reluctant to equate self-actualization with interestingness,
the resemblance is nevertheless striking.
.pp
The second direction concerns organizational independence in a sense of
wholeness which is distinct from goal-seeking.
The concept of autopoiesis formalizes this notion.
Organizational independence can be identified in certain computer systems
like worm programs, self-replicating Trojan horses, and viruses.
It is remarkable that such applications have been constructed because
they offer practical advantages and not in pursuit of any theoretical
investigation of autonomy;
in this way they are quite different from contrived games.
In some sense self-replicating programs do have a goal, namely \fIsurvival\fR.
A damaged worm exhibits this by repairing itself.
But this is a weak form of goal-seeking compared with living organisms, which
actively sense danger and take measures to prevent their own demise.
.pp
The architecture of these systems is striking in that the mechanism which
maintains the artificial organism (be it the worm maintenance code,
the self-replicating part of a Trojan horse, or the viral infection-spreader)
is quite separate from the application part of the organism.
Most people think of such programs as somehow pathological, and the
application as a harmful or subversive one, but this need not be so:  there
are benign examples of each.
In any case, separation of the organism's maintenance from its purpose is
interesting because the concept of autopoiesis has sparked a debate in
system-theoretic circles as to whether teleological descriptions are even
legitimate, let alone necessary.
In both domains a clear separation seems to arise naturally between the
autopoietic and teleological views of organisms.
.pp
There have been no attempts to build computer programs which combine these two
directions.
The AI community which developed techniques of goal-seeking has historically
been somewhat separate from the system software community which has created
robust self-replicating programs like worms and viruses.
What will spring from the inevitable combination and synthesis of the two
technologies of autonomy?
.sh "Acknowledgements"
.pp
First and foremost I would like to thank Brian Gaines for suggesting and
encouraging this line of research.
I am grateful to Saul Greenberg and Roy Masrani for many insights into topics
discussed here, and to Bruce MacDonald for making some valuable suggestions.
This research is supported by the Natural Sciences and Engineering Research
Council of Canada.
.sh "References"
.ls1
.sp
.in+4n
.[
$LIST$
.]
.in0