FORMAL ASPECTS OF NEURAL MODELING

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
CHARLES WAYNE JOHNSON
1971

This is to certify that the thesis entitled "Formal Aspects of Neural Modeling," presented by Charles Wayne Johnson, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Philosophy.

Date: 13 May 1971

ABSTRACT

FORMAL ASPECTS OF NEURAL MODELING

By Charles Wayne Johnson

The purpose of this paper is to consider the question, "Is a sufficient neural model logically possible?" The consideration of this question is conducted in four stages.

First, some related concepts of "model" are characterized in terms of structural isomorphism. Neural models are specifically discussed, and a distinction between two sorts of "mechanism" is made with respect to neural modeling. Mechanism1 is designated as the view that a sufficient neurophysiological theory is logically possible. Mechanism2 is designated as the view that a sufficient neural model is logically possible. The concept of "sufficiency," in the case of a sufficient neurophysiological theory, is characterized by a predictivity criterion, which is stated in terms of the covering-law model of scientific explanation. In the case of a sufficient neural model, the concept of "sufficiency" is characterized by a behavioral criterion.

Secondly, Mechanism1 is defended against three arguments which are used to attempt to demonstrate that Mechanism1 is self-refuting.

Thirdly, Mechanism2 is defended against the argument that Gödel's 1931 findings show that a sufficient neural model is not logically possible.

Fourthly, the Brain-Model Problem is briefly discussed. The Brain-Model Problem is an epistemological problem that may be characterized by the question, "Can a sufficient neural model be simple enough to be understood?"

The general conclusion is drawn that both Mechanism1 and Mechanism2 are viable. Finally, some suggestions are made for future research. A comprehensive bibliography is included, together with two appendices, which contain some representative arguments against Mechanism1 and Mechanism2.

FORMAL ASPECTS OF NEURAL MODELING

By Charles Wayne Johnson

A THESIS
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
Department of Philosophy
1971

For Buffy

ACKNOWLEDGMENTS

I am fortunate in having many people to thank for their assistance and advice in various phases of this study. I am very grateful to Dr. Richard J. Hall for his perceptive and enthusiastic comments on this work; his remarks were instrumental in the elimination of several errors and ambiguities. To Dr. Edward M. Eisenstein I owe thanks, not only for his advice in connection with this study, but also for providing the opportunity for me to actually do some science rather than merely study it. I must also thank Dr. Herbert E. Hendry and Dr. Peter D. Asquith for their helpful and illuminating comments. Additionally, I am indebted to Dr. William L. Kilmer, whose timely advice regarding reference material greatly expedited my research for this paper. I must, of course, thank my wife, Patti, for, among other things, assisting me with typing and proofreading. This study was supported by a Woodrow Wilson Dissertation Fellowship.
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
§1.0  INTRODUCTION
§2.0  PROBLEMS OF NEURAL MODELING
§2.1  THE EMPIRICAL SIGNIFICANCE OF FORMAL/MATHEMATICAL NEURAL MODELS
§2.2  MECHANISM1
§2.3  MECHANISM2
§2.4  DISCUSSION: THE BRAIN-MODEL PROBLEM
§3.0  MECHANISM1 AND THE SELF-REFUTATION ARGUMENT
§3.1  THE INTENTIONAL PARADOX
§3.2  THE RATIONAL PARADOX
§3.3  THE DESCRIPTION PARADOX
§3.4  DISCUSSION
§4.0  MECHANISM2 AND THE GÖDEL ARGUMENT
§4.1  NEURAL ARTEFACT SUFFICIENCY AND THE CONCEPT OF "MACHINE"
§4.2  THE GÖDEL ARGUMENT
§4.3  ARTEFACTS, BRAINS, AND GÖDEL
§4.4  DISCUSSION
§5.0  MECHANISM2 AND THE BRAIN-MODEL PROBLEM
§5.1  THE BRAIN-MODEL PROBLEM
§5.2  SUGGESTIONS FOR FUTURE RESEARCH
BIBLIOGRAPHY
APPENDIX I
APPENDIX II

LIST OF FIGURES

Figure 2.0-1, Figure 2.0-2, Figure 2.0-3, Figure 2.1-1, Figure 2.1-2, Figure 2.1-3, Figure 2.1-4, Figure 2.1-5, Figure 3.2-1, Figure 4.1-1, Figure 4.1-2, Figure 5.1-1

LIST OF TABLES

Table 4.2-1, Table APP. II.1

§1.0 INTRODUCTION

The enterprise of modeling the human brain has met, as is to be expected, with enormous practical difficulties, but it has also resulted in the generation of problems of both a conceptual and logical nature. The conceptual difficulties have largely resulted from attempts to reconcile the mind-body dichotomy that is so entrenched in language and in thought. Logical considerations, on the other hand, have played a major role in both the formulation of the formal structure of neural models and in dealing with the general question of whether, in the final analysis, the construction of a sufficient neural model is logically possible.

The present study is primarily concerned with the formal aspects of neural modeling. The central question which we shall consider is: "Is it logically possible to construct a model of the human brain that can perform characteristically human (rational) activities and is also simple enough to be understood?" As a preliminary measure, in the consideration of this question, it will be necessary to discuss the logical structure of models in general, and of neural models in particular. Although it will not be possible to avoid conceptual matters entirely, these will not be stressed.

Considering the formal relationship that obtains between a model and the thing which it models, it would seem that advancements in the direction of constructing a sufficient neural model would be largely dependent upon the state of development of neurophysiology. The late Warren McCulloch, however, held that some significant advances can be made in this direction somewhat in advance of the corresponding discoveries in neurophysiology.
This would seem to be particularly the case in studies wherein internal brain processes are ignored and a "black box" model is all that is desired. Such models may, in fact, aid in the development of neurophysiology. We may, therefore, expect considerations regarding neural modeling to provide a convenient way to study neural structures and neural functions. Neural models may also provide a new way of discussing classical problems associated with the alleged mind-body dichotomy.

At any rate, the present study is not adversely affected by the absence of a sufficient neural model; in fact, it is not obvious that the existence of a candidate model would resolve the question with which we are presently concerned, since the question regarding the logical possibility of a sufficient neural model (together with, of course, considerations regarding sufficiency criteria) must be settled prior to the evaluation of any given candidate model.

The present study proceeds in four stages:

(1.) In §2, a working definition of 'model' is formulated in terms of structural similarities. The empirical value of logical and mathematical models of neural systems is assessed, with particular emphasis placed upon network models. A distinction is then made between two senses of 'mechanism,' both of which are suitably restricted in meaning for the purposes of this study. Specifically, we take Mechanism1 to be the proposition that a sufficient neurophysiological theory is logically possible. We take Mechanism2, on the other hand, to be the proposition that a sufficient neural model is logically possible. Some problems that are peculiar to neural modeling are introduced; these problems are related to the two senses of 'mechanism.' Notable among these problems are: (a.) the question of whether Mechanism1 is self-refuting, (b.) the question regarding the possible implications for Mechanism2 of Gödel's 1931 findings, and finally, (c.) the question of whether or not a neural model can both be sufficient and be simple enough to be understood. These questions are the topics of the remaining three sections.

(2.) In §3, we examine the self-refutation argument against Mechanism1. This examination is done in terms of three paradoxes which allegedly prove that Mechanism1 is self-refuting.

(3.) It has been argued that Gödel's 1931 findings provide the basis for an argument which refutes Mechanism2 by showing that there will always be something that a human brain can do that cannot be done by a model (viz., a "machine" or "artefact"). In §4, this argument is considered in some detail.

(4.) Assuming that both a sufficient neurophysiological theory and a sufficient neural model are logically possible, the question remains of whether a model can be constructed which is both sufficient to model the human central nervous system (CNS) and simple enough to be understood. This is an important feature of the more general Brain-Model Problem, and is the topic of §5. The Brain-Model Problem indicates some intriguing quandaries which are involved in the situation wherein one system attempts both to model and to understand another system which is of its own level of sophistication.

In §5, we also summarize our findings and conclusions, and we briefly discuss modeling sufficiency criteria. Finally, we make some brief recommendations for future research.
As we shall indicate, a neural model of the sort which is desired may be a "black box" model of the human brain which has input-output correlations that correspond to the input-output correlations of a rational human brain. Neither the internal processes nor the internal structures of a black box model need be similar to those of the human brain in order for the model to be sufficient. These considerations will be clarified later.

§2.0 PROBLEMS OF NEURAL MODELING

A considerable number of somewhat divergent views have historically come under the general heading of "mechanism." For our purposes, we shall use 'mechanism' to refer to the view that the question "Is it logically possible to construct a sufficient neural model?" should be answered affirmatively. There are, however, some preliminary considerations which must be discussed.

In common usage, 'model' is quite ambiguous; in scientific model theory, however, it is given a rather precise rendering. We shall, in accordance with current literature in scientific modeling, use the term in three distinct senses: (1) "logical model," (2) "theoretical model," and (3) "physical model" or "artefact."

Our use of the term 'logical model' coincides with the sense of 'model' that is employed by logicians. We shall say that an entity, M0, is a logical model of a set of sentences, T0, if and only if M0 satisfies T0, where 'satisfaction' is defined in the usual manner (an excellent explication of '(logical) model' may be found in [MEN 1]).

By 'theoretical model' we shall mean a theory which is isomorphic to another theory, which it is said to theoretically model. For the purposes of this sense of 'model,' one must consider the theories in question to be formalized, in the sense of 'formalization' which is used by, for example, P. Suppes (see [SUP 1] and [SUP 2], as well as [MEN 1], [BYE 1], and [APO 1]).

In a closely related characterization of 'theoretical model,' one speaks of a theory, T1, as being related to another theory, T0, which it is said to theoretically model, in virtue of being an interpretation of the same uninterpreted calculus, C, of which T0 is an interpretation. This is typically referred to as the "one calculus--two theories" characterization of theoretical modeling (for a detailed treatment of this characterization, see [SWA 1]).

If a theoretical model is a mathematical theory, rather than an empirical scientific theory, we shall say that it is a mathematical-theoretical model. Similarly, if a theoretical model is a theory of formal logic, we shall say that it is a formal-theoretical model. We shall use the terms 'isomorphism' and 'interpretation' as they are defined in [MEN 1].

By 'physical model' or 'artefact' we shall mean a physical entity which is isomorphic to another physical entity, which it is said to physically model. We shall presume that a physical model will be a logical model of a corresponding theoretical model.
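Although we defer to [MEN 1] for the precise definitions, it may help to record them here in schematic form; the following is only a compact restatement of the standard model-theoretic notions being presupposed, not an addition to them. For relational systems $\mathfrak{M} = \langle M, R_1, \ldots, R_k \rangle$ and $\mathfrak{N} = \langle N, S_1, \ldots, S_k \rangle$:

$$\mathfrak{M} \cong \mathfrak{N} \quad \text{iff there is a bijection } f : M \to N \text{ such that, for each } i,\quad R_i(a_1, \ldots, a_{n_i}) \leftrightarrow S_i(f(a_1), \ldots, f(a_{n_i})) \quad \text{for all } a_1, \ldots, a_{n_i} \in M;$$

$$\mathfrak{M} \text{ is a logical model of } T_0 \quad \text{iff} \quad \mathfrak{M} \models \sigma \text{ for every sentence } \sigma \in T_0,$$

where $\models$ is the usual satisfaction relation.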
The general modeling framework may be graphically represented by Figure 2.0-1; the single-ended arrows should be interpreted as 'is satisfied by,' and the double-ended arrows should be interpreted as 'is isomorphic to.' Furthermore, let C be an uninterpreted calculus; let T0 be a theory, which is to be modeled; let T1 be a theoretical model of T0; finally, let M0 and M1 be physical relational systems which satisfy T0 and T1, respectively. Then we have:

[Figure 2.0-1. The general modeling framework: the uninterpreted calculus C is interpreted by both T0 and T1; T0 and T1 are isomorphic to one another (double-ended arrow); T0 is satisfied by the relational system M0, and T1 is satisfied by the relational system M1 (single-ended arrows).]

Typically, one speaks of the purpose of theoretical modeling as being to expedite the deductive development of theories that are "rudimentary," in the sense that (1) they are theories in fields that have not been well explored, and (2) they have "gaps" in their deductive structures. In the theoretical modeling procedure, a well-developed theory, T1, is used as a pattern for the deductive development of a rudimentary theory, T0.

There are, in general, two ways in which physical modeling can proceed. Specifically, one way to proceed in physical modeling would be to attempt to have the artefact perform the relevant modeled operations of the original in the same way (or in a very similar way) that the original system performs these operations. For example, one way to approach the problem of modeling the human CNS is to attempt to produce an electronic artefact (for example, a computer) which, in virtue of its circuitry (or programming), effectively replicates the transmission structure, as well as the general functions, of the human CNS.

On the other hand, one may treat the system to be modeled as a "black box" (or, alternatively, one may treat discrete portions of the original system as black boxes). In this case, one would be interested simply in producing a physical model which has inputs and outputs that are in respective correspondence to the inputs and outputs of the original system. In this way, one may concentrate on, and take advantage of, the particular capabilities of the prospective artefact. For example, in modeling the human brain, one may choose to treat the entire brain as a black box and then attempt, in the most efficient and expedient way possible, to produce an (e.g., electrical) artefact which has an input-output correspondence that is similar to that of the original system. In the use of this sort of technique, the circuitry (or pattern of transmission paths) of the original system, i.e., the human brain, need have no structural correspondence to the circuitry of the artefact.

Neural systems are occasionally modeled by electrical systems, i.e., by fixed-circuitry systems, or, much more commonly, they are modeled by the more flexible computer techniques. Black box procedures may be used where only functional similarity between the prototype and model is desired. An intuitive sufficiency criterion for physical models of neural systems may be provided by, for example, Turing's Test.1

1. See §4.
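By way of illustration only, the black box notion of adequacy can be put in quasi-operational terms: relative to a finite stock of recorded stimulus-response pairs, an artefact matches the prototype just in case it reproduces the prototype's response to every recorded stimulus. The sketch below assumes a hypothetical record of such pairs and says nothing about how either system works internally; it is a minimal rendering of the input-output correspondence just described, not a serious testing procedure.

    # A minimal sketch of black-box (input-output) comparison.
    # 'observed_pairs' is a hypothetical record of the prototype's behavior:
    # each entry pairs a stimulus with the response the prototype produced.

    def behaviorally_matches(artefact, observed_pairs):
        """True if the artefact reproduces every recorded stimulus-response pair."""
        return all(artefact(stimulus) == response
                   for stimulus, response in observed_pairs)

    # Toy data: the "prototype" is summarized by three recorded pairs,
    # and the "artefact" is any function of one argument.
    observed_pairs = [("ping", "pong"), ("red light", "stop"), ("2+2?", "4")]

    def toy_artefact(stimulus):
        lookup = {"ping": "pong", "red light": "stop", "2+2?": "4"}
        return lookup.get(stimulus)

    print(behaviorally_matches(toy_artefact, observed_pairs))  # True

Turing's Test, discussed in §4, can be regarded as an open-ended, adversarial version of the same idea, with a human interrogator supplying the stimuli.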
Questions regarding the possibility of a sufficient neural model are, to a large extent, dependent upon an affirmative resolution of the question regarding the possibility of a sufficient neurophysiological theory. While it is true that modeling developments may take place somewhat in advance of corresponding discoveries in neurophysiology,2 we may expect questions regarding the possibility of a sufficient neurophysiological theory to be conceptually, epistemologically, and, perhaps, logically prior to questions regarding the possibility of sufficient theoretical neural models or physical neural models.

2. See the views of W. S. McCulloch in his [McC 1], passim.

The situation in the case of neural modeling is somewhat unique in that the mathematical models used in neural modeling are highly developed deductively but are poorly developed in another sense. They are deductively well developed, but, to date, neither they nor their logical models have resulted in sufficient neural models. To a large extent, this situation is due to the relatively low level of development of neurophysiology with respect to the explanation of total brain functions. Apparently, an empirical theory must achieve a certain degree of sophistication before mathematical modeling can be initiated on a large scale.

It would seem that formal/mathematical neural modeling fits into the modeling framework rather well. The remaining question as to whether any new mathematical developments are required to facilitate sufficient neural modeling has been answered largely in the negative by, for example, M. A. Arbib,3 who sees the modeling problem as being one of developing computer techniques, and computer techniques are, for the most part, based on existing mathematics. He urges, as we have, a dropping of the distinction between "real mathematics" and "computer programming."

3. See, for example, [STA 1].

We may generally state that neural modeling falls into the modeling framework which is represented in Figure 2.0-1. This point is partially substantiated by the following remark made by Perkel and Moore in their defense of neural modeling:

    The most significant use of neural models, analog or digital, is a heuristic one: effectively used, they suggest possibilities; they encourage the enumeration of specific alternatives; they force the precise specification of alternative hypotheses; and they serve as an ideal tool for the elaboration of the consequences of hypotheses; in short, they furnish a vehicle for the application of the principle of strong inference so essential to sustained scientific progress.4

4. [PER 1].

Note that Perkel and Moore regard neural models as being "tools." If they are correct, then we may conclude that neural models are of instrumental value; hence, they may be convenient to use, but they are not essential to neurophysiological theories.

Generally speaking, we may distinguish between two sorts of mechanism (and, correspondingly, we may distinguish between two sorts of "mentalism"). We shall take Mechanism1 to be the proposition that a sufficient neurophysiological theory is logically possible. Mechanism2, on the other hand, we shall take to be the view that a sufficient neural artefact is logically possible. Accordingly, we take Mentalism1 to be the denial of Mechanism1, and Mentalism2 to be the denial of Mechanism2.

It is our contention that Mechanism2 is, to a large extent, dependent upon an affirmation of Mechanism1.
The modeling procedure may go in the following direction:

    T0 (rudimentary)  --->  T1  --->  T0 (developed)

    (Figure 2.0-2)

If we let M0 = a human brain = B; T0 = a neurophysiological theory = NT; T1 = a formal/mathematical model of T0/NT = MT; and M1 = a neural artefact (or "machine") = A; then we have:

    B  --->  NT (rudimentary)  --->  MT  --->  NT (developed/sufficient)
                                      |
                                      A

    (Figure 2.0-3, in which A, the neural artefact, satisfies MT)

While it seems possible that a sufficient A could be produced independently of a sufficient NT (i.e., an SNT), it seems, for practical, heuristic, and other reasons, unlikely that this will occur.

It seems apparent that many of the arguments against Mechanism1 (especially, for example, arguments regarding intentions, consciousness, determinism, etc.) could also be applied against Mechanism2. Hence, if these arguments are successful in refuting Mechanism1, it may be concluded that equal success could be expected with regard to Mechanism2. Our general conclusion is that the question of the truth of Mechanism1 is, in several respects, prior to the question of the truth of Mechanism2, and, hence, should be settled before considering Mechanism2. We conclude, furthermore, that, generally speaking, Mechanism2 is based upon an affirmation of Mechanism1.

One additional clarification seems to be required at this point. There are two prevalent senses of the term 'neural model.' We presume that the sense intended in this study is now clear. In an alternative sense, however, 'neural modeling' refers to the deciphering of information encoded in neural activity. This is more commonly called 'neural coding.'5 Neural coding does not involve the functional modeling of neural systems, but rather seeks to interpret the on-going activity of a CNS and to categorize, insofar as is feasible, meaningful discrete units of neural activity with respect to previously selected parameters.

5. See [PER 2].

There is an overlap between neural modeling and neural coding which, as we shall indicate in §5, results in interesting problems of self-reference and self-description.

The problems of neural modeling, aside from purely practical problems that are common to all scientific inquiries, fall into roughly four areas: (1) problems related to the empirical significance of formal/mathematical neural models, (2) the question of the truth of Mechanism1, (3) the question of the truth of Mechanism2, and finally (4) the self-reference and epistemological difficulties which are associated with the Brain-Model Problem. The first of these problems, (1), will be discussed in the following section, §2.1, and the remaining three problems will be introduced in the remainder of §2 and discussed in detail in later sections.

§2.1 THE EMPIRICAL SIGNIFICANCE OF FORMAL/MATHEMATICAL NEURAL MODELS

It is both unnatural and inaccurate to regard biological neurons as being discrete units.6 Biological neurons are best characterized by their histories, that is, by the history of their functions within the systems of which they are components. In addition, we cannot hope to be able to infer systemic functions from single-unit neural functions. As, for example, L. D. Harmon indicates,
    In general, . . . we can no more easily deduce ensemble function from the single-unit function than a physicist can deduce the physical properties of water from his knowledge of hydrogen and oxygen molecules. Both difficulties are practical ones at our present levels of understanding; they may not be in-principle impossibilities.7

6. See [MIN 5].
7. [HAR1 1].

So, for the present at least, it would seem that single-unit models would not be likely to be instructive relative to neural systems (or, for that matter, relative to system-models, viz., neural nets). As it turns out, single-unit formal/mathematical models are not even particularly instructive with respect to the single-unit neurons which they model. The reason for this is that formal/mathematical single-unit neural models, viz. "formal neurons," are very simplified and idealized models of the corresponding portions of neurophysiological theory (i.e., they are highly incomplete). To clarify this point, we shall briefly consider formal neurons and neural nets, as based upon the McCulloch-Pitts framework. We shall then note the biological inaccuracies of single-unit formal neurons, and we shall indicate the heuristic value of neural nets.

Let Nψ be a formal neuron. Then Nψ may be described using n+1 numbers. The first n numbers are input values, v1, v2, . . . , vn, where for every vi there is an input line (or path) xi with which vi is associated. Nψ has n such inputs, x1, x2, . . . , xn (n ≥ 1). For each xi, xi terminates in an excitatory endbulb if and only if vi > 0. If vi > 0, then vi = y, where y is some positive integer. If vi < 0, then vi = -∞ (so that an inhibitory input absolutely inhibits Nψ).

Let 'Nψ fires along Oψ at time t' =df 'Nψ(t) = 1', and 'Nψ does not fire along Oψ at time t' =df 'Nψ(t) = 0'.

Having introduced these symbolic conventions, we may, at this point, posit the following three axioms:

(N-I)   Nψ functions on the basis of a sequence of discrete moments of time. The respective moments may be indexed by the integers: t = 1, 2, 3, . . .

(N-II)  For every Nψ there are exactly two possible states:
        a firing state: Nψ(ti) = 1, and
        a resting state: Nψ(ti) = 0.
        Hence, (Nψ)(ti)(Nψ(ti) = 1 ∨ Nψ(ti) = 0).

(N-III) Nψ(t+1) = 1  ≡  Σi vi·xi(t) ≥ θ     [the dependency rule]
        where θ =df the threshold of Nψ.

Considering axiom (N-I), a delay may be introduced in a transmission path by introducing a formal neuron Nψ having θ = 1. To graphically represent this, we shall make use of the following symbolic conventions: let a filled endbulb symbol represent an excitatory endbulb, let an open endbulb symbol represent an inhibitory endbulb, and let a node labeled 'θ = n' represent a formal neuron, as characterized by the axioms (N-I), (N-II), and (N-III). Using these conventions, a delay may be represented as:

[Figure 2.1-1. A delay element: a single excitatory input x1 with v1 = 1 terminating on a formal neuron with θ = 1; if x1 fires at time t, the neuron fires at time t+1.]

A simple and-gate may be constructed using two excitatory inputs, each having a value of 1, and a formal neuron Nψ having θ = 2:

[Figure 2.1-2. An and-gate: two excitatory inputs x1, x2 with v1 = v2 = 1 terminating on a formal neuron with θ = 2.]

A simple or-gate may be constructed using two excitatory inputs, each having a value of 1, and a formal neuron Nψ having θ = 1:

[Figure 2.1-3. An or-gate: two excitatory inputs x1, x2 with v1 = v2 = 1 terminating on a formal neuron with θ = 1.]
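To fix ideas, the formal neuron and the gates just described can be restated as a short computational sketch. The rendering below is ours, not McCulloch and Pitts'; it represents absolute inhibition by assigning inhibitory lines a very large negative value, which is one conventional way of capturing the dependency rule, and it is offered only as an illustration of the definitions above.

    # A minimal sketch of a formal neuron satisfying (N-I)-(N-III):
    # (N-I)   discrete moments of time t = 1, 2, 3, ...
    # (N-II)  the state at any moment is either 1 (firing) or 0 (resting)
    # (N-III) the neuron fires at t+1 iff the weighted sum of its inputs
    #         at t reaches the threshold (the dependency rule).

    INHIBITORY = -10**9   # stands in for "minus infinity": absolute inhibition

    class FormalNeuron:
        def __init__(self, values, theta):
            # values[i] is v_i: a positive integer for an excitatory endbulb,
            # INHIBITORY for an inhibitory endbulb.
            self.values = values
            self.theta = theta

        def output(self, inputs_at_t):
            # inputs_at_t[i] is x_i(t), i.e. 1 or 0 on each input line.
            total = sum(v * x for v, x in zip(self.values, inputs_at_t))
            return 1 if total >= self.theta else 0

    # The elements of Figures 2.1-1 to 2.1-3: a delay, an and-gate, an or-gate.
    delay    = FormalNeuron(values=[1], theta=1)
    and_gate = FormalNeuron(values=[1, 1], theta=2)
    or_gate  = FormalNeuron(values=[1, 1], theta=1)

    assert delay.output([1]) == 1 and delay.output([0]) == 0
    assert and_gate.output([1, 1]) == 1 and and_gate.output([1, 0]) == 0
    assert or_gate.output([1, 0]) == 1 and or_gate.output([0, 0]) == 0

The negation-gate described next is obtained from the same definition by using a single inhibitory input and a threshold of 0.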
Finally, a simple negation-gate may be constructed using an inhibitory input and a formal neuron Nψ which has θ = 0:

[Figure 2.1-4. A negation-gate: a single inhibitory input x1 terminating on a formal neuron with θ = 0.]

Clearly, if Nψ is a negation-gate, then (ti)(Nψ(ti) = 1) so long as its inhibitory input does not fire.

The term 'brain' (or, alternatively, 'CNS') may be defined as 'an aggregate of formal neurons.' It may be more advisable, however, to avoid cognitive issues for the present and speak instead in terms of 'neural nets.' We may define the term 'neural net' in the following manner:

    A system S is a neural net if and only if it satisfies the axioms (N-I), (N-II), and (N-III).     (2.1-1)

As McCulloch and Pitts demonstrated, neural nets may be effectively described using the propositional calculus. Consider, for example, the neural net which is graphically represented in Figure 2.1-5. Clearly, the thresholds of N1, N2, and N3 need not be specified in this restricted example.

[Figure 2.1-5. A small neural net: neurons N1, N2, and N3 (firing at t-1) send lines to N4 and N5 (firing at t), which are in turn connected to N6 (firing at t+1).]

We shall suppose, however, that we have N4 with θ4 = 3, N5 with θ5 = 2, and N6 with θ6 = 1.

N4 may be characterized by the following expression:

    N4(t) ≡ N2(t-1) · (N1(t-1) ∨ N3(t-1))     (2.1-2)

N5 may be characterized by the expression:

    N5(t) ≡ N1(t-1) · N3(t-1)     (2.1-3)

Similarly, N6 may be characterized by the expression:

    N6(t+1) ≡ ~(N4(t) · ~N5(t))     (2.1-4)

Using (2.1-2), (2.1-3), and (2.1-4), we may generate the following theorem for the neural net represented in Figure 2.1-5:

    Theorem: N6(t+1) ≡ ~((N2(t-1) · (N1(t-1) ∨ N3(t-1))) · ~(N1(t-1) · N3(t-1))).     (2.1-5)
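Because nets of this kind admit of description in the propositional calculus, the theorem (2.1-5) can be checked mechanically: substitute (2.1-2)-(2.1-4) and run through the eight possible firing patterns of N1, N2, and N3 at t-1. The short sketch below does exactly that, working directly with the propositional characterizations rather than with threshold units; it is offered only as a check on the substitution.

    from itertools import product

    # The propositional characterizations (2.1-2)-(2.1-4); 1 = fires, 0 = rests.
    def n4(n1, n2, n3):            # N4(t)   = N2(t-1) and (N1(t-1) or N3(t-1))
        return int(n2 and (n1 or n3))

    def n5(n1, n3):                # N5(t)   = N1(t-1) and N3(t-1)
        return int(n1 and n3)

    def n6(n4_t, n5_t):            # N6(t+1) = not (N4(t) and not N5(t))
        return int(not (n4_t and not n5_t))

    # Theorem (2.1-5), written out in terms of N1, N2, N3 at t-1.
    def theorem(n1, n2, n3):
        return int(not ((n2 and (n1 or n3)) and not (n1 and n3)))

    # Exhaustive check over the eight firing patterns of N1, N2, N3.
    for n1, n2, n3 in product((0, 1), repeat=3):
        assert n6(n4(n1, n2, n3), n5(n1, n3)) == theorem(n1, n2, n3)
    print("Theorem (2.1-5) agrees with the net on all eight input patterns.")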
Such a system as S is, clearly, only an incomplete model of a neurophysiological theory. As has been indicated by, for example, McCulloch, M. Minsky, and others, formal neural models, especially single-unit models (as specified, for example, by (N-I) and (N-II)), are extremely simplified mathematical models of biological neurons. They possess what is frequently called, in scientific model theory, large "negative analogies." In addition, however, some of the assumptions made in formal neural modeling are simply false when compared to biological neurons as described by a neurophysiological theory. Specifically, there are five general ways in which the assumptions associated with formal neural modeling, in the McCulloch-Pitts framework, may be shown to be false.8

8. This discussion will be carried out in terms of the simplified, non-technical account of neural operations which is provided by W. S. McCulloch in his "Finality and Form in Nervous Activity," [McC 1], pp. 256-275. We also make use of some points made by M. A. Arbib; see [ARB 1], pp. 3-7.

First, it is assumed, in the construction of the system described above, i.e., S, that neural activity is an "all-or-none" process (viz., as specified by (N-II)). While, in general, this is true of biological neurons, a more strict biological account may not be so inflexible. More importantly, however, we should note the fact that S fixes the output values and thresholds of all of the formal neurons for all time; biological neurons are not absolutely stable in these respects. It should also be noted that, in this simple system, S, all elements are reliable and the structure of a neural net is fixed for all time.9 Here again, this is not true of biological neural systems.

9. For a more detailed and biologically accurate system, see [RAS 1], passim.

Secondly, S treats time in an unnatural way. It is not true of biological neurons that they are completely synchronized. In addition, the designation of discrete moments of time may involve dividing the on-going activity of a CNS in an arbitrary manner. (N-I) also requires that inter-neuronal transmission time be absolutely uniform; CNS delays, however, could, for example, also be a function of axonal diameter, whereas the only sort of delay permitted in system S is synaptic delay.10

10. For an alternative treatment of time, see [MIN 5], pp. 2-7.

Thirdly, S ignores all interaction between formal neurons except those that take place at synapses. Recent findings, for example those of Lettvin, indicate that other interaction, such as inter-systemic noise, can be significant in the efficient information processing of a CNS.

Fourthly, in system S, an inhibitory endbulb absolutely inhibits the formal neuron to which it is connected. It is not clear that this is always the case in actual biological inhibition.

Fifthly, as Arbib indicates, no account is taken, by system S, of possible changes in the behavior of formal neurons as the result of chemicals or hormones. Formal neurons, at least in the McCulloch-Pitts model, are intended to simulate only electrical activity.
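None of these corrections is hard to state; the difficulty is knowing which of them matter. Purely by way of illustration, the sketch below relaxes two of the idealizations just listed — fixed thresholds and perfectly reliable elements — by adding a drifting threshold and an unreliable firing rule to the formal neuron of §2.1. It is a toy variation of our own, not part of the McCulloch-Pitts framework, and no claim of biological fidelity is intended.

    import random

    class NoisyFormalNeuron:
        """A toy variant of the formal neuron in which the threshold drifts
        and firing is unreliable -- two respects in which system S idealizes."""
        def __init__(self, values, theta, drift=0.1, reliability=0.95):
            self.values = values
            self.theta = theta
            self.drift = drift              # per-step random change in threshold
            self.reliability = reliability  # probability that a due spike occurs

        def output(self, inputs_at_t):
            self.theta += random.uniform(-self.drift, self.drift)
            total = sum(v * x for v, x in zip(self.values, inputs_at_t))
            should_fire = total >= self.theta
            return 1 if (should_fire and random.random() < self.reliability) else 0

    # Repeated presentation of the same input no longer yields a fixed output.
    unit = NoisyFormalNeuron(values=[1, 1], theta=2)
    print([unit.output([1, 1]) for _ in range(10)])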
As Harmon expresses this point, Although the "neurons" used in these models [viz., in formal models] are exceedingly simplified repre- sentations of biological neurons, it is entirely con- ceivable that the gross behavior in both models and prototype is essentially independent of finer unit specification. It should be noted, to avoid confusion, that Harmon fails to distinguish between formal/mathematical models and physical models. This failure is a common one, and it is productive of confusions in both model theory and in the sciences themselves. It seems clear that formal/mathematical neural models, such as the one formulated by McCulloch and Pitts, were never really intended to be complete, accurate, physiologically oriented models of neurophysiological theories. Consequently, the logical models of formal/ mathematical neural models were not intended to be com- plete (in Brodbeck's sense of 'completeness') physical models of their original neural prototypes. Rather, formal/mathematical neural models are intended to model the logical structures of various sorts of systems which display discrete processes; such systems may be computers as well as brains.12 11[HAR1 1], p. 24. 12For a more detailed discussion of this point, see [MIN 1], Chapter 3. We 5 tion that a: scents which that there a psyche/soul/ eXplanatory that some as be EXplained eXplanations 0R an obscur Ifiated to c Wen §2.2 MECHANISM1 We shall regard Mechanism1 as being the proposi- tion that an SNT is logically possible. The main argu- ments which are used by Mentalistsl against this view is that there are some features of the human mind/spirit/ psyche/soul/etc. which are emergent with respect to causal explanatory theories. In other words, Mentalistsl contend 'flmat some aspects of distinctively human activity cannot be explained by means of contingent, causal, scientific exPlanations. Much of the debate, in this regard, has been on an obscure, metaphysical level, dealing with topics related to categories and substances. Twentieth Century behaviorists, however, have gone a long way toward clarifying the metaphysical debate. The more linguistic questions which are related to the debate, have, however, remained. So, considering just that portion of human activities that may be characterized as being rational, intentional, or, more generally, "dis- tinctively human," we may reformulate the prOposition of Mechanism1 in terms of the question of whether or not 26 such activi a causal ex; The. been the $01 Mentalistl a that is pote that Mechan: successful, for its own The Will be exa: 27 such activities can be sufficiently explained by means of a causal explanation, i.e., by contingent causal laws. There are many important side issues which have been the source of seemingly endless debate, but the one Mentalist1 argument that is most directly to the point and tfliat is potentially the most devastating is the argument tflmat Mechanisml is self-refuting. If this argument is snaccessful, then either Mechanism1 constitutes evidence for its own falsehood, or it is self-contradictory. The self-refutation argument against Mechanisml imill be examined, in some detail, in §3. We it is logic neural math. lost often, or "machine: of words in and misuse, that 'modelu (as is, for the Contenti fUture, "thl (2) 'thinkir Man} MechaniSm 1. 2 . OUtdated Chi discusSing 3. 
§2.3 MECHANISM2

We may take Mechanism2 to be the proposition that it is logically possible for there to be both a sufficient neural mathematical model and a sufficient neural artefact. Most often, Mechanism2 is discussed in terms of artefacts or "machines." It is unfortunate that there are a number of words in natural language that, as a result of long use and misuse, have become extremely vague. We have seen that 'model' is one such word; 'machine' is another (as is, for example, 'life' or 'intelligence'). It is the contention of Mechanists2 that (1) within the near future, "thinking machines" will be developed,13 and that (2) 'thinking' is not incompatible with 'machine.'

13. Turing speculates that this will occur within fifty years. See [TUR 1], passim.

Many of the arguments which have been used against Mechanism2 have been based upon an overly restricted or outdated characterization of "machine." Therefore, in discussing Mechanism2, we shall be on guard against a definition of 'machine' that has a built-in bias against the possibility of there being an SNM. In addition, we must watch for possible ambiguities which result from the confusion of Seventeenth Century definitions of terms such as 'matter' and 'machine' with their Twentieth Century counterparts. It has been pointed out, by several scientific methodologists, that these terms have undergone extensive revisions with regard to meaning, so much so that a Seventeenth Century dualist would very likely not call a modern computer a "machine."14 It can be effectively argued, in fact, that the entire classical mind-body problem may be based upon a misconception derived from Seventeenth Century mechanism.15

14. See, for example, [TOU 3], [SMA 3], and [NAG 2].
15. For example, see [RYL 1], [TOU 2], and [TOU 3].

We shall not discuss, in this study, the practical problems facing the development of "thinking machines," nor shall we discuss the feasibility of their being developed and constructed in the near future. We shall also attempt to avoid, inasmuch as is possible, the more classically philosophical side-issues. We shall merely consider the question of whether or not "thinking machines" are possible in principle.

Most of the arguments against the logical possibility of "thinking machines" seem to depend upon previously posited definitions of 'mind' and 'machine.' This procedure seems to involve equivocation, as well as the sort of definitional bias which we wish to avoid.
Of course, the sort of "things" to which ‘UQis question refers should be restricted to rational "behavioral" outputs, and should exclude the specifically biological functions of the brain. Given this restriction, it appears that, if there is at least one thing that a human brain can do that an artefact cannot do, then Mechanism2 is effectively re- futed. If, on the other hand, it is not the case that there is at least one thing that the human brain can do that cannot be done by an artefact, then, to this extent, Mechanism2 is viable. It has been argued that Godel's Theorem has, as one of its philosophical consequences, that there will always be something that a human brain can do that an artefact cannot do. The reasoning, very briefly, is that artefacts are "programmed" (where, for our purposes, a program or "machine description" is a formal/mathematical- thecre fixed that a sentar can al not pr tion i. viz., ' 31 theoretical model) to do a finite set of operations by a fixed set of directions (or rules, or axioms). Assuming that a neural artefact is sophisticated enough to do ele- inentary arithmetic, then it is argued that a human brain <3an always generate a formula that is true but that is rust provable by the artefact (i.e., the formula in ques- ‘tiom.is not derivable from the artefact's formal system, \niz., the system which the artefact satisfies). We shall consider this argument, in some detail, in §4. in t”. vircn . .A . . 1 id The eh m m. e e «x e pi. e e e h. PM S Cl. AU 5 "V NW C Q. by W74 0 l n .v u .. r a v a A u n v a s Q» L . a a: G» e “a 9.. my a. 14 n n v S sf. 5 3. T.. s s L a m i alcliublllqlp 5| :1 ,,.. . e. 5"" y‘enti §2 . 4 DISCUSSION: THE BRAIN-MODEL PROBLEM As has been pointed out by various investigators iJl the field of neural modeling, a CNS "models" its en- 16 xnironment, i.e., a CNS maps its environment onto itself. TWuis finding has two interrelated consequences. First, it: gives rise to an alternative formulation of the classi— cal epistemological questions posed by empiricists, and secondly, it results in intriguing self-reference prob- lens in neural modeling. Concerning the first consequence, we should ob- serve that a distinction is occasionally made by scientists, as well as philosophers, between immediate and mediate perceptions.l7 Correspondingly, we may say that a primary model is a direct representation of an environment. A secondary model, on the other hand, is a model of a pri- mary model (we are, of course, using the term 'model' in a somewhat more general sense here than we did in §2.0. The sense of 'model' which is intended, in this instance, 168ee [BRE l], [HAR1 2], and [FOG 1]. 17Different terminology is, of course, used by scientists; see, for example, [HAR1 2], pp. 2-3. 32 33 ‘will become more clear in §5.L Considering these defini- tions, then, we may have the following result: (I) Only primary models are empirically signifi- cant or scientifically valuable. (II) All models in empirical science are second- ary. (III) Therefore, it is not logically possible to produce a significant and valuable (scien- tific) model. {Hue rationale of this argument seems to be that primary Imodels are immediate representations of the environment (for example, they are alleged to be immediate representa- tions of a scientist's environment); primary models, therefore, are taken to be unmediated by neural/brain processes. 
§2.4 DISCUSSION: THE BRAIN-MODEL PROBLEM

As has been pointed out by various investigators in the field of neural modeling, a CNS "models" its environment, i.e., a CNS maps its environment onto itself.16 This finding has two interrelated consequences. First, it gives rise to an alternative formulation of the classical epistemological questions posed by empiricists, and secondly, it results in intriguing self-reference problems in neural modeling.

16. See [BRE 1], [HAR1 2], and [FOG 1].

Concerning the first consequence, we should observe that a distinction is occasionally made by scientists, as well as philosophers, between immediate and mediate perceptions.17 Correspondingly, we may say that a primary model is a direct representation of an environment. A secondary model, on the other hand, is a model of a primary model. (We are, of course, using the term 'model' in a somewhat more general sense here than we did in §2.0. The sense of 'model' which is intended, in this instance, will become more clear in §5.) Considering these definitions, then, we may have the following result:

17. Different terminology is, of course, used by scientists; see, for example, [HAR1 2], pp. 2-3.

    (I) Only primary models are empirically significant or scientifically valuable.
    (II) All models in empirical science are secondary.
    (III) Therefore, it is not logically possible to produce a significant and valuable (scientific) model.

The rationale of this argument seems to be that primary models are immediate representations of the environment (for example, they are alleged to be immediate representations of a scientist's environment); primary models, therefore, are taken to be unmediated by neural/brain processes. The inference from this is that the "models" used in empirical science are really "models" of the CNS's interpretation of its environment; these "models" are, therefore, secondary. Secondary models, however, not being direct representations of the environment, are not, it is presumed, scientifically significant or valuable.

A practical refutation of this argument would seem to be readily available. (I) can be denied for two reasons. First, apparently, there are no primary models; therefore, the primary/secondary distinction is not meaningful. Secondly, the models used in science have proved to be useful; in addition, it seems somewhat spurious to deny scientific models "significance," assuming that it is even meaningful to speak of "significance" in a general, non-relative manner.

We contend that the primary/secondary distinction should be dropped entirely. Good evidence against the corresponding immediate/mediate distinction in epistemology has recently been presented.18 We feel that there is also good evidence against the primary/secondary model distinction.

18. See [HAL 1].

The second consequence of the fact that a CNS "models" its own environment may be considered in terms of the Brain-Model Problem. The Brain-Model Problem involves a series of questions which fall, roughly, into two groups. The main question of the first group is: What are the physiological processes by means of which animals (and especially human beings) are enabled to perform such activities as learning, intending, reasoning, etc.? This question is related to practical aspects of Mechanism1; clearly, however, it is based on the presupposition that activities of the sort mentioned can, in principle, be explained by a neurophysiological theory, or by a theory in some similar science.

The second group contains questions having both a practical and a conceptual/logical nature. There are, generally, two main questions in this group, the first of which is: How can the activities of rational, conscious organisms be modeled using systems or devices which human beings can actually construct? Here again, there is a presumption that Mechanism2 holds. It would make little sense to ask how such systems could be constructed in the absence of an assumption that such systems are possible in principle.

Assuming that both Mechanism1 and Mechanism2 are true, there is one additional question remaining: Is it possible to construct an artefact that is sufficient to model human rational activity, but is still simple enough to be understood? It is clearly desirable that such an artefact be understandable by human beings, assuming, of course, that such an artefact is logically possible.

As we have indicated, one can consider the CNS of, for example, a human being as "modeling" its environment, i.e., as mapping its environment onto itself. Presumably (assuming the truth of Mechanism2), among the things in the environment of an investigator in the field of neural modeling will be a neural artefact. This artefact, if it is sufficient, will be complex enough to model rational activities that are characteristic of human brains, including those of the brain of the investigator in question. There are basically two problem areas associated with this situation.
Initially, there is a question regarding the possibility of there being a logical paradox associated with the concept of one system, for example a human brain, constructing a model (for example, of itself) that is as complex as it is. This question will be briefly discussed, in §5, with regard to self-reproducing systems.

The remaining question is of both a conceptual and an epistemological nature. At present, the human brain is not well understood with respect to its over-all functions. We may ask, then, can it be expected that a sufficient artefact, even if one were to be constructed, would be any more understandable than human brains are at present? One further question seems to be logically required at this point, namely, could an artefact which is sufficient to model a human brain understand itself any better than human beings presently understand themselves?

These questions, together with some general conclusions regarding Mechanism1 and Mechanism2, will be discussed in §5.

§3.0 MECHANISM1 AND THE SELF-REFUTATION ARGUMENT

Generally speaking, we take 'Mechanism1' to be the proposition that it is logically possible to formulate a sufficient neurophysiological theory. We must indicate, however, what is meant by 'sufficient neurophysiological theory' (hereafter, 'SNT'). Following N. Malcolm, we shall say that a neurophysiological theory is sufficient if and only if it can be used to both explain and predict all movements of human bodies, with the exception of movements which are caused by forces which are external to a given body.1 Such a theory will also describe the movements in question. We may say that Mentalism1 is the view that an SNT is not logically possible.

1. [MAL 2], p. 334. We shall use the pagination in [KRI 1] throughout.

There are several qualifications which should be noted immediately.

First, Mechanism1 clearly involves a promissory note. At their present stage of development, the neurosciences have not yet developed an SNT. The Mechanist1, however, is optimistic that an SNT is forthcoming. The lack of an SNT, in any event, does not seem to adversely affect the present study. We need not have a candidate SNT in order to discuss the question of whether or not an SNT is possible in principle. As in the case of a sufficient neural artefact, as we shall see in §4, the development of a neurophysiological theory is dependent upon both theoretical and empirical discoveries. But the question of whether or not an NT could be termed 'sufficient,' and the question of whether or not an SNT is logically possible, must be answered on the basis of a decision, not a discovery.

Secondly, we should note that an SNT, as we shall treat it at least, does not entail a reductionist view with respect to psychology. In other words, we shall not take Mechanism1 to be the view that psychology is reducible to an SNT (for our purposes, it is not strictly essential that an SNT be a theory of neurophysiology rather than, for example, biology, chemistry, physics, etc.). As a result, we presume that there may be more than one adequate (and independent) explanation for the same event, i.e., an event E may be adequately explained by an SNT as well as by a sufficient theory of, for example, psychology.
In general, our view regarding an SNT can be characterized most succinctly by the following formula:

    (x)(if x is a behavior, then SNT explains x)     (3.0-1)

'Behavior' is an ambiguous term which includes both 'action' and 'motion.' We shall use 'action' to refer to behavior that is purposive, goal-directed, intentional, or "done for a reason" (hence, "rational"). 'Motion,' on the other hand, will be used to refer to behavior that is done indifferently or involuntarily.2 We shall, of course, assume that if an SNT explains a behavior, x, it also may predict x and describe x.

2. Malcolm also makes the 'action'/'motion' distinction. His version of the distinction will be discussed later, viz. in §3.1.

It still may not be altogether clear what counts as "sufficiency" in a neurophysiological theory. Generally speaking, we take the sufficiency of an SNT to be characterized by a predictivity criterion. This criterion is largely in accordance with the covering-law model of scientific theories and with the structural identity thesis. Specifically, it seems that an SNT must be able to explain the occurrence of any output of a human brain; an SNT, therefore, will also have to be capable of predicting human brain outputs (viz., human rational behavior). It should be emphasized, however, that many current neurophysiological theories are sufficient to explain and predict discrete neural events. An SNT, by contrast, must be sufficient to explain and predict total brain functions and the outputs of the human brain, regarded as a total system. This requirement is rather dramatically characterized by Bertrand Russell,3 who suggests that an SNT would have to be sufficient to permit an investigator to predict the verbal or written outputs of a person on the basis of given inputs, even if the investigator did not know the language of the subject who was being used in the investigation.

3. See [RUS 1], p. 274. A hypothetical situation is suggested by Bertrand Russell. Suppose that a person, P, was handed a telegram which read "All your property has been destroyed in an earthquake." and, after reading it, P exclaimed, "Heavens, I am ruined!" Presumably, an SNT, by taking account of (1.) the shapes in the written statement "All your property has been destroyed in an earthquake." which were in P's visual field, together with (2.) P's brain structure, etc., could effectively predict the verbal utterance, which is written as "Heavens, I am ruined!" Furthermore, this prediction could have been made, using an SNT, without knowing the English language, i.e., without knowing the meaning of the message contained in the telegram, or the meaning of the verbal response. Accordingly, we shall say that a theory is an SNT if and only if, in the situation described, the response "Heavens, I am ruined!" is explicable by the theory, even in the absence of a knowledge of the English language.

In general, then, the sufficiency of an SNT is based upon a predictivity criterion which is in accordance with the usual covering-law model of scientific explanation.4 In order to make the criterion intuitively clear, however, we shall generally say that an NT is sufficient if and only if, for every stimulus (input) of a human brain, the NT can be used to predict the resultant output. We shall call this criterion "Russell's Test."

4. As we shall see, there are some problems which are associated with the concept of "explanation" in this context. We shall be forced to distinguish between explanations which cite causes and those which cite reasons.

We may generalize this characterization of NT sufficiency in the following manner. Let Russell's Test be generalized so that:

    An empirical theory, T, passes Russell's Test if and only if for every overt behavioral event E enacted by any given subject P, T can predict E.     (3.0-2)

As a corollary,

    If T passes Russell's Test with respect to a given E of P, then T also explains E.     (3.0-3)

Additionally,

    If T passes Russell's Test with respect to a given E of P, then T describes E.     (3.0-4)
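The covering-law (deductive-nomological) schema that stands behind (3.0-3) can be displayed as follows; the structural identity thesis is, roughly, the claim that prediction and explanation share this one logical form, differing only in whether E is derived before or after its occurrence. The schema is the familiar Hempelian one, recorded here only for reference:

$$ \underbrace{L_1, \ldots, L_k}_{\text{general laws}},\; \underbrace{C_1, \ldots, C_m}_{\text{antecedent conditions}} \;\vdash\; E \quad (\text{the explanandum}) $$

On this account, if an SNT can be used to derive a behavioral event E from its laws together with statements of the stimulus and brain-state conditions, then the very same derivation, produced after the fact, serves as a covering-law explanation of E; this is the step from (3.0-2) to (3.0-3).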
With these results, we may proceed to describe 'SNT.' It should be noted that, of course, Russell's Test could be applied to a theory of, for example, physics or chemistry, etc. For convenience, we shall restrict our discussion to neurophysiological theories. In principle, however, our findings should be applicable to theories in other fields. Given Russell's Test, therefore, we shall say that,

    For any neurophysiological theory (NT), NT is sufficient if and only if NT passes Russell's Test.     (3.0-5)

We have taken Mechanism1 to be the proposition that an SNT is logically possible. We may rephrase this characterization in the following manner:

    Mechanism1 is the proposition that it is logically possible for there to be an NT that can pass Russell's Test.     (3.0-6)

Alternatively,

    Mechanism1 is the proposition that it is logically possible for there to be an NT such that NT is an SNT.     (3.0-6')
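As with the black box criterion of §2.0, the generalized Russell's Test admits of a toy operational rendering. The sketch below assumes a purely hypothetical record of behavioral events, each paired with the stimulus and brain-state description available to the theory, and represents a "theory" simply as a prediction function; it illustrates the form of (3.0-2) and (3.0-5), and is in no sense a procedure that could now be carried out.

    # A toy rendering of the generalized Russell's Test, (3.0-2)-(3.0-5).
    # 'events' is a hypothetical record of (conditions, observed_behavior) pairs,
    # where 'conditions' stands in for the stimulus and brain-state description
    # that the theory is permitted to use.

    def passes_russells_test(predict, events):
        """(3.0-2): T passes iff it predicts every recorded behavioral event."""
        return all(predict(conditions) == observed
                   for conditions, observed in events)

    def is_sufficient_nt(predict, events):
        """(3.0-5): an NT is sufficient iff it passes Russell's Test."""
        return passes_russells_test(predict, events)

    # Toy data in the spirit of Russell's telegram example (note 3 above).
    events = [({"stimulus": "telegram: property destroyed"}, "Heavens, I am ruined!")]
    candidate_nt = lambda conditions: "Heavens, I am ruined!"
    print(is_sufficient_nt(candidate_nt, events))  # True, for this toy record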
The basis of this conclusion was formed by two paradoxes, the first of which is the Intentional Paradox (hereafter, IP).6 The IP may be constructed in the following manner:

6 Malcolm's second paradox constitutes one formulation of the Rational Paradox; it will be discussed in §3.2.

(IP-1) Mechanism1 is incompatible with the existence of any intentional behavior.

(IP-2) Speech (viz., the making of assertions) is a form of intentional behavior.

(IP-3) Assume that a person P asserts that Mechanism1 is true.

(IP-4) The making of this assertion is incompatible with its content, i.e., the proposition "Mechanism1 is true, and someone (P) asserts 'Mechanism1 is true.'" is self-contradictory.

(IP-5) Therefore, Mechanism1 is self-refuting.

For clarification, we might add:

(IP-4') If Mechanism1 is true, then it cannot be asserted. (Here "Mechanism1" is the proposition "There is an SNT.")

It is not immediately clear whether the IP is based upon pragmatic or absolute self-refutation. As is stated in (IP-4), Malcolm holds that the proposition "Mechanism1 is true and someone asserts it to be true." is self-contradictory.7 He considers, however, the possibility that the IP may be based upon "mere" pragmatic self-refutation. For example, he notes that,

The mere proposition that mechanism is true is not self-contradictory. But, the conjunctive proposition, "Mechanism is true and someone asserts it to be true," is self-contradictory. It does not seem obvious that a logical self-contradiction is present here, although substitution of suitable definitions of 'mechanism' and 'assertion' might turn it into one.8

7 [MAL 2], p. 346.
8 Ibid.

What Malcolm means, in effect, is that,

Mechanism1 is true.    (3.1-1)

is pragmatically self-refuting, since no person, P, can make a statement of the form (3.1-1) without pragmatically self-refuting himself.

Elsewhere,9 Malcolm describes a situation that is, in some respects, similar to this one. He asks us to consider, for example, statements of the form:

P is unconscious.    (3.1-2)

Suppose that a person, P1, makes a statement which has the form (3.1-2), i.e., P1 says, "P1 is unconscious."; then, as Malcolm contends, P1 has pragmatically refuted himself by making that statement. Consider, on the other hand, a person other than P1, i.e., P2. P2 can say "P1 is unconscious." without pragmatically refuting himself. P2, of course, cannot state that "P2 is unconscious." without pragmatically refuting himself. It is on this point that Malcolm takes statements of the form (3.1-1) to differ from statements which have the form (3.1-2). Malcolm contends that statements of the form (3.1-1) are self-refuting when spoken by any person, P, and he also contends that statements of the form (3.1-1) are self-refuting when spoken of any person, P. This is not true, on the other hand, of statements which have the form (3.1-2).

9 See [MAL 1], passim. Also see [MAL 2], p. 347.

The kind of pragmatic self-refutation which is used in the IP differs somewhat from the more particularized sort of pragmatic self-refutation which we characterized in §3.0. The IP is based upon what we might call "universalized pragmatic self-refutation."
As we indicated in 53.0, the statement, 'P1 cannot pronounce 'horseradish'.‘ is pragmatically self-refuting, when it is spoken by P1' Consider, however, the statement 'No one can pronounce 'horseradish'.‘ As it stands, this statement is simply false, but the conjunction, 'No one can pronounce 'horseradish,‘ and P1 just pronounced 'horse- radish'.', is self-contradictory. In this respect, we may say that the prOposition 'No one can pronounce 'horse- radish'.‘ is universally pragmatically self-refuting. It is, apparently, Malcolm's view that the pro- position (3.l-l) is universally pragmatically self-refuting, in the sense which we have just indicated. He certainly holds that (1.) the proposition (3.1-1) is false, and (2.) conjunctive statements of the form 'Mechanism1 is true, and P1 asserts that Mechanism1 is true.' are self- contradictory. Malcolm would also subscribe to the stronger contention that propositions of the form 'Mecha- nisml is true, and P1 asserts p.', where p is any propo- sition whatever, are self-contradictory. Furthermore, he holds that a statement of the form 'Mechanisml is true, 48 and P asserts p.' is self-contradictory, even if it is 1 made by someone other than P1. Malcolm would allow that statements which have the form 'Mechanisml is true, and P1 asserts p.' could be stated without constituting self-refutation, only if they were stated by a person who was a solipsist and who took Mechanisml to be true of "other organisms" but not of 10 Elsewhere, Malcolm finds solipsism to be in- 11 himself. conceivable. We may conclude, therefore, that he would consider Mechanisml to be inconceivable in any event, even if it were asserted by a solipsist who applied it only to others and not to himself. If Malcolm is correct, then the IP may be based upon a "universally" pragmatic self-refutation argument. Clearly, the success of the IP is dependent upon the truth of its first premise, (IP-l), since it is argued, in the IP, that the proposition 'Mechanism1 is true' can- not be asserted, in virtue of the intentionality of assert- ing anything whatever. It is Malcolm's view that Mechanisml "rules out the possibility of" there being any intentional activity, viz., this is what he means by say- ing that Mechanism1 is "incompatible" with the existence of intentional activity. But, Malcolm also indicates that 1°[MAL 2], p. 347. 11See [R00 1] and [MAL 3]. 49 making an assertion is an intentional activity, so the in- ference to (IP-4) and (IP-S) is obvious. It seems apparent, therefore, that if (IP-l) could be maintained, then the IP would be successful in showing that Mechanism1 is self- refuting, when it is asserted. Accordingly, we shall concentrate our attention upon Malcolm's argument for (IP-l). We shall then apply our findings to the IP itself. Malcolm's argument for the proposition (IP-l) may be constructed in the follow- ing manner: (I) Mechanisml is incompatible with purposive behavior. (II) If there could be no purposive behavior, then there could be no intentional behavior. (III) Therefore, Mechanisml is incompatible with intentional behavior. (IP-l) This formulation of Malcolm's argument for (IP-l) is very «averhsimplified. As we shall see, Malcolm's arguments .forr CI) and (II) are rather complex. We shall have to examine them, in some detail, inorder to be in a viable position to form a conclusion regarding his proposition (IPh-l) and,more generally, regarding the IP itself. Before presenting the argument for Malcolm's pro- position (II), ". . . 
ifthere could be no purposive 50 behavior, there could be no intentional behavior."12 it will be helpful to note two distinctions. Firstly, while Malcolm is somewhat unclear on this point, he seems to hold that there are two ways to characterize intentions. In the first way, it might be said that to intend to do an action, B, is to do B "on purpose," i.e., voluntarily or consciously. In the second characterization, on the other hand, to intend to do B is to do B with a purpose in mind, i.e., for the sake of some purpose, or to achieve some goal. Considered in the first sense, intentions are non-purposive, or not goal-directed. For example, a per— son, Pl, can intentionally raise his arm (i.e., he can "voluntarily" raise his arm), while having no purpose for doing so. Considered in the second sense, they are purposive, or goal-directed. Malcolm seems to indicate this distinc- tion in his statement that, "A man can do something in- tentionally, but with no further intention." as well as at various other points in his paper.13 Secondly, Malcolm explicitly makes a distinction between "simple" and "further" intentions. A simple in— 14 tention is merely "The intention to do X," where X is 12[MAL 2]. p. 346. 13[MAL 32, p. 345. 14[MAL 3], p. 342. 51 some action. A further intention, on the other hand, is "The intention to do something else Y in or by doing x,"15 where both Y and X are actions. This distinction is a factor in the arguments that Malcolm presents for the proposition (II), 'If there could be no purposive behavior, then there could be no intentional behavior.‘ as well as being a factor in the subsequent proof of the prOposition, (IP-l), 'Mechanisml is incompatible with intentional be- havior.‘ The proof for the proposition (II) proceeds in seven stages, in Malcolm's paper. The first stage is: (l) A man can do something intentionally, but with no further intention; his behavior is intentional but not purposive. This proposition calls upon both of the distinctions which we have just cited. An example of such an intention would be a non-purposive, simple intention. The second step, in Malcolm's argument, employs a form that is common to his writings: (2) If some intentional actions are purposeless, it 17 does not follow that all of them could be purposeless. lsIbid. 16[MAL 2], p. 345. 17Ibid. 52 Malcolm's reasoning regarding this and similar matters is sufficiently well-known that a discussion of it here seems to be unnecessary. Proceeding, then, to the next stage in Malcolm's argument, we have, (3) Any physical activity is analyzable into compo- nents. 8 This proposition is assumed by Malcolm. It should be noted that "activities," for Malcolm, may include both actions and motions. This step may, per- haps, be called into question, since Malcolm has nothing to say about how the analysis to which he refers could be accomplished. It would seem to be the case that the sepa- ration of components of an activity would be likely to be somewhat arbitrary. For example, the activity of a motor- ist applying the brake on his automobile would seem to be an example of an activity that could be analyzed into com- ponents: e.g., the components might be, (a) raise foot off of accelerator, (b) tilt toe down, heel up, (c) move foot to the left, and (d) lower foot onto the brake pedal. There is, firstly, the difficulty of dividing a smooth activity into components; we may ask, for example, could (a) and (b), in our example, be combined? Many activities, laIbid. 
53 such as the one in our example, are not easily divisible into discrete components. In this respect, the selection and the division of components of an activity would be likely to be arbitrary. Secondly, and more importantly, however, it is not clear which of (a), (b), (c), or (d), in our example, should be regarded as being an intentional component of the activity of a motorist applying the brake; but, ac- cording to the next step, in Malcolm's argument, at least one component of every activity is intentional: (4) If the activity is intentional, then at least some of its components will be intentional. It is a trivial consequence of (4) that, (5) . . . an intentional activity must have intentional components. It seems to us that Malcolm provides no criteria for dis- tinguishing between intentional components of activities and non-intentional components of activities, with the exception, perhaps, of definitional criteria. Defini- tional criteria, however, do not seem to be helpful in the actual carrying out of the analysis of an activity. 54 Malcolm treats (4) as a definition. As well as resulting in some practical difficulties, this also in- volves a vicious circularity which unfavorably affects the IP as well as Malcolm's entire theory of intention- ality. We presume that the intentional components of an activity are actions [by definition]. Actions are char- acterized, by Malcolm,in the following manner: Two things are implied by an ascription of an "action" to a person: first, that a certain state of affairs came into existence . . . , second that the person intended that this state of affairs should occur. One important factor, here, is that Malcolm makes a shift from talking about the purposive behavior of "organisms" to talking about the intentional actions of "persons." The shift is badly blurred, in his paper. One result is that intentions, since they are relevant only to actions (and not to mere motions) are defined, by Malcolm, in ternm of actions. Actions, on the other hand, are defined in terms of intentions, resulting in a vicious circularity. In the next stage of Malcolm's argument, an attempt is.made to definitionally link the concepts of'purpose and intention: 21[MAL 2], pp. 340-341. 55 (6) The components [of an activity; will be purposive in relation to the whole activity. 2 This step, (6), clearly means that the given activity, taken as a whole, is done for the sake of some purpose or goal, since it is an "activity" and not a mere sequence of motions. The intentional components, according to (6), are actions, all of which are done for the same purpose. In other words, the purpose of all of the respective actions which are components of a given activity, is iden- tical to the purpose of the activity taken as a whole. This claim does not seem to be at all obvious. It appears that, considering Malcolm's definitions, an inten- tional component of an activity could have its own purpose independently of the purpose of the whole activity of which it is a component. For example, a motorist might inten- tionally move his foot in order to make it avoid an ob- stacle on its way to the brake pedal. The purpose of the avoidance action might be to prevent pain or to prevent a scuff on his shoe. This example may be made more feasible if one assumes that the obstacle in question could have been forcably knocked aside and did not have to be avoided in order to perform the general activity of applying the brake. 
This seems to be a situation wherein an intentional component of an activity has a degree of autonomy, in that 22[MAL 2], p. 345. 56 it may be purposive in relation to itself and may not be purposive relative to the activity of which it is a com- ponent, since its purpose is not identical with the pur- pose of the activity as a whole. In fact, the avoidance action, in our example, may be termed a mere "motion" rela- tive to the activity of which it is a component. The small avoidance action, in our example, may also be considered as being an activity in its own right, since it has an intentional component. So, [by (3)] it may be divided into further components, some of which must be intentional [by (4)]. If one of these intentional components turns out to be purposively autonomous, in the sense that we have indicated, then it too may be an ac- tivity in its own right and would, therefore, have to be analyzed into discrete components, at least one of which must be intentional. This argument may be repeated at each level of the analysis, thus generating an infinite regress. One additional difficulty, in Malcolm's proposed analysis of activities into discrete components, results from his distinction between intentions which are pur- posive and those which are non-purposive. Strictly inter- preted, (5) requires that at least one component of every activity be an intentional action of some sort. Such an activity could have a purpose that is identical to the purpose of the activity of which it is a component only if 57 it constituted a purposive intentional action. By Malcolm's own account, however, some intentions are non-purposive. But a sequence of movements need only have one intentional component in order to be termed an activity. We may, there— fore, draw two consequences. First, if the only intentional component of an activity is a non-purposive intentional action, then the activity would also be non-purposive. This contradicts Malcolm's stipulation that all activities are purposive. Secondly, suppose that a given activity does have a purpose. Suppose, further, than at least one of its intentional components is a non-purposive inten- tional action. Then, it follows that the given action could not have a purpose which is identical to the purpose of the activity of which it is a member, since it does not have a purpose at all; this result also contradicts (6). We conclude, therefore, that MalColm has not been successful in showing that the components of any given activity will be purposive in relation to the whole activ- ity. We have shown three reasons for adopting this con- clusion. First, the division of components of an activity is likely to be arbitrary; secondly, the "division of com- ponents" for the purpose of locating the intentional com- ponents of an activity leads to an infinite regress. Finally, we have seen that components of activities can have autonomous purposes, which are unrelated to the pur- pose of the larger activities of which they are members, or components of activities can be non-purposive. posi‘ \"x \ 58 The conclusion of Malcolm's argument is the pro- position, (II), which has been our present concern: (7) . . . if there could be no purposive behavior, there could be no intentional behavior. Clearly, if we do not grant Malcolm step (6) in his argu- ment, then the conclusion does not follow. It should be noted, additionally, that Malcolm uses 'intentional behavior' here. 
Given his own definition, the term 'behavior' is, at best, vague, and 'intentional behavior' may be inconsistent, since, for Malcolm, only actions (or "activities") are intentional. In summary, it seems warranted for us to conclude that Malcolm's argument for (7) is unsound.

The next phase of our discussion is concerned with Malcolm's proposition ((III) = (IP-1)) that Mechanism1 is incompatible with intentional behavior. Before entering into this discussion, however, note should be taken of why it is important to Malcolm that Mechanism1 be incompatible with intentional behavior. He mentions, at one point, that the two forms of "explanation," purposive explanation and neurophysiological explanation, are irrelevant to one another; his reasoning, here, is that they belong to different "language games": they make use of different concepts.24 The possibility remains, however, that even though the two forms of explanation are not relevant to one another, they may be compatible, i.e., an "explanation" of either form could be offered for one and the same sequence of movements. Malcolm hesitates to permit this sort of compatibility, however, because his aim is to show that Mechanism1 rules out the possibility of intentional "explanations" of events. If he were successful in doing this, he would hope to infer that since we do "explain" certain events with the use of intentional "explanations," Mechanism1 cannot be true. We shall discuss the alleged divergence between the two forms of explanation, PE and NE, in more detail in section 3.2.

24 See [MAL 2], p. 338.

The argument which is used by Malcolm to substantiate the proposition (IP-1) that Mechanism1 is incompatible with intentional behavior begins, as we have indicated, with the conclusion, (7) (= (II)), of the last argument which we considered. We shall continue to number the steps in sequence (beginning with (8)), in order to preserve continuity. For the sake of discussion, we shall permit the use of (7) in order to examine the direction in which the remainder of Malcolm's argumentation proceeds. While taking (7) as a (previously proved) premise, Malcolm adds a second premise, in his argument for (IP-1):

(8) . . . mechanism is incompatible with purposive behavior . . .25

25 [MAL 2], p. 346.

Earlier in his 1968 paper, he argues for this point. This is the central argument, for Malcolm, to the effect that purposive explanations and neurophysiological explanations are incompatible. To attempt to make his point, he uses an example, rather than a rigorous argument; he asks us, in particular, to consider a man who intentionally climbs up a ladder to a roof for the purpose of retrieving his hat. Suppose that this event were to be explained by an SNT. Then, Malcolm holds that,

. . . the movement of the man on the ladder would be completely accounted for in terms of electrical, chemical, and mechanical processes in his body. This would surely imply that his desire or intention to retrieve his hat had nothing to do with his movement up the ladder.26

26 [MAL 2], p. 338. The italics are Malcolm's.

Clearly, however, the implication that Malcolm wishes to make, here, follows only if he presumes, in advance, that Mechanism1 is incompatible with intentional behavior. This is the point upon which the argumentation of his entire 1968 paper turns. We find that Malcolm makes the assumption in question, while the only thing that he provides as a justification for the required incompatibility presumption is the brief comment that the implications
"would surely" follow. We must conclude, then, that his argument for (8) is invalid, or, more correctly, we should conclude that he has no argument for (8) at all.

Malcolm seems to recognize that this is the case. He mentions in a postscript to his 1968 paper that,

I must confess that I am not entirely convinced of the position I have taken in respect to the crux of this paper--namely, the problem of whether it is possible for there to be both a complete neurophysiological explanation and also a complete purposive explanation of one and the same sequence of movements. I do not believe I have really proved this to be impossible.27

27 [MAL 2], p. 349.

As we have concluded, he has not been successful in showing this to be impossible.

It should also be mentioned that, in a short restatement of his "proof" for (8) in his 1968 paper, Malcolm calls upon a concept of "explanation." He argues that if an SNT were to be taken to completely "explain" the event in question, i.e., the man climbing the ladder to get his hat, the mention of the man's intentions would not "explain" (or be a part of an "explanation" of) the event. Here Malcolm plays upon an ambiguity of the word 'explain.' In one sense of the term, to "explain" an event E, when, e.g., E is a behavioral event, is to give a causal account of the occurrence of E; in another sense, to "explain" E is to justify the performance of E (by some person, P). When one recognizes these two divergent senses of this term, then it seems clear that there is no inconsistency in saying that an SNT (causally) explains an activity while a mention of intentions justifies it. We shall discuss the ambiguity of 'explain' in more detail in section 3.2.

We shall continue with Malcolm's argumentation, in order to bring our recent findings into perspective with the central concern of the present study. Malcolm's next step, in his argument for (IP-1), is:

(9) . . . mechanism is incompatible with intentional activities.28

28 [MAL 2], p. 346.

Malcolm expects this step, (9), to follow from (7) and (8). Aside from other difficulties which we have already mentioned related to (7) and (8), it should be noted that Malcolm has once again made an innocuous, but possibly defective, shift in terminology. (7) and (8) speak only of "behavior," while (9) involves a shift in terminology to the term 'activities.' If the motion/action distinction were rigorously observed, this small defect could likely be corrected.

Malcolm's final conclusion is:

(10) [Therefore] . . . [Mechanism1] is incompatible with all intentional behavior.29

Clearly, (10) is equivalent to (IP-1). Note, however, that Malcolm has again made a shift to 'behavior' in (10).

The effect of our findings upon the IP should now be clear. It is evident that the IP depends upon its first premise, (IP-1), for its success. As we indicated earlier, Malcolm attempts to support the proposition (IP-1) by using the proposition (II), 'If there could be no purposive behavior, there could be no intentional behavior.', and (I), 'Mechanism1 is incompatible with purposive behavior.'. If Malcolm's defense of (IP-1) had been successful, then the IP itself would have been successful.

We have found that there are two points which are critical to the success of the IP. The first point is Malcolm's defense of steps (4), (5), and (6) in the argument which we considered. We have found these steps to be defective, and we have found that (7) is not well-founded.
Without (7), of course, the remainder of Malcolm's argu- ment is not sound. The second point that is critical to the success of the IP, in any event, is Malcolm's non-defense of (8). 29Ibid. 64 We have found (8) to be an unwarranted, unproved proposi- tion; Malcolm has, himself, expressed doubts about it. Without (8), however, (10) cannot be established. Since (10) is the first premise of the IP, viz. (IP-l), and since (IP-l) forms the basis of the entire IP, we must conclude that the IP fails. Hence, considering that the IP is unsuccessful, we conclude that Mechanisml has not been shown to be self- refuting. Consequently, we conclude that an SNT still- appears to be logically possible. 5302 THE RATI ONAL PARADOX The Rational Paradox (RP) is based upon an alleged incompatibility of rationality and causality. Specifical- ly, if the RP is sound, then Mechanism1 is self-refuting, since, can be may be (RP-l) (RP-2) (RP-3) (RP-4) (RP-5) according to the RP, if there is an SNT, then there no reason for believing it. In what is probably its most simple form, the RP constructed as follows: Assume that reasons can be given for believing Mechanisml (or, assume that a (rational) proof can be cited, the conclusion of which is a pro- position expressing Mechanisml). Then, Mechanism1 can be held for a reason. But, if Mechanism1 is true, then reasoning (and inference) is a strictly causally determined pro- cess which occurs in human brains. Therefore, reasoning (and inference) is a non- rational process. Hence, belief in Mechanisml is arrived at by causal, non-rational processes. 65 66 The conclusion may be restated as: (RP-5') Consequently, if Mechanism1 is true, there is no reason to believe that it is true. Alternatively, the conclusion may also be stated as: (RP-5") Mechanisml both can and cannot be held for a reason . The form of self-refutation argument which is used in the RP has a long history. It is usually used to com- bat theories that are similar, in certain respects, to Mechanisml, the essential similarity being the relevance of doctrinessuch as Mechanism1 to a certain set of isSues in the philosophy of mind. Recently this form of self- refutation argument has been used to criticize material- ism (as the view is related to the philosophy of mind), determinism,3o and versions of mechanism that are similar to our Mechanism1.31 The RP which we are presently con- sidering, has been extrapolated from these recent argu- ments. In logical form, the RP closely resembles the self-refutation argument in J. N. Jordan's "Determinism's 3oSee [JOR l], passim. 31See "Townes' Paradox" in [TOU 5], p. 5. 67 32 (See Appendix I(A)). In terminology and in- Dilemma." tent, the RP closely follows "Townes' Paradox" which is presented in S. Toulmin's recent paper, "Reasons and 33 (See Appendix I(C)). In some respects, the Causes." RP also resembles Malcolm's Rational Paradox34 (see Appen- dix I(B)), although, Malcolm's RP is based upon considera- tions regarding intentionality, which are the same as those which are discussed in §3.l. The argument of the RP, as it stands, is clearly fallacious. (RP-4) is an unwarranted inference unless it is presumed that reasons are incompatible with causes. It seems, then that the RP has a suppressed premise that may be expressed as: (RP-4') Causality is incompatible with rationality. The defense of (RP-4'), however, is seriously ham- pered by the fact that the terms 'cause' and 'reason' may be used in various ways. 
Questions regarding causes, such as, for example, the question, "What caused P to do A?," can result in quite different sorts of replies, depending upon the circumstances in which they are asked. The 32See [JOR l], passim. 33[TOU 5]. See especially pp. 5 and 23-34. 34See [MAL 2], p. 348. 68 question, "What caused P to do A?" could illicit a re- sponse in which P's reasons for doing A were cited, while in other circumstances, physical conditions or empirical laws could be mentioned as_being "causes" for P's doing A. It seems to us that, in the final analysis, the RP is not based simply upon an incompatibility of causality and rationality, but rather it is based upon an alleged incompatibility of giving a causal account of an event, and giving a "rational" account of the same event. So, it appears that the RP is based upon an alleged incompati- bility of causal explanations and rational "explanations." Accordingly, we may further clarify the suppressed premise of the RP by reformulating it as follows: (RP-4") Causal explanations are incompatible with ra- tional explanations. It seems to be clear that, if (RP-4") is not equivalent to (RP-4'), it is at least implied by it.. Hence, if we can refute (RP-4"), i.e., if we can prove that it is not the case that (RP-4"), then we can obtain a denial of (RP-4'), by modus tollens. This alleged incompatibility between causal ex- planation and rational explanation is, as we have indi- cated, not proved in the RP, and, for this reason, the RP is invalid. We must, however, consider the possibility that (RP-4") is true even if it is not proved in the RP. 69 It is our contention that (RP-4") is false. This contention can be most readily defended by pointing out that (RP-4") could be plausibly maintained only if 'ex- planation' was being used unambiguously in (RP-4"), i.e., if both occurrences of the term 'explanation' were being used in the same sense. Close scrutiny reveals that 'ex- planation' is ambiguous and that Mentalists1 play upon the ambiguity in the RP, and specifically in [RP-4"). One can be said to "explain" a behavioral event E in two senses. In one sense, to "explain" E is to give a causal account of it. Presumably if an adequate causal account of E were to be given, it would be a covering-law explanation, and it could be used to explain events which are similar to E, and it could have been used to predict E. In another sense, however, to "explain" E is to justify E or to give an indication of one's intentions in perform- ing E. In this sense, one does not seek to be able to infer E from a set of laws and a set of statements of con- ditions, nor does one seek to predict-E.. Rather, one seeks, generally speaking, to "justify" the performance of E and thereby attempt to illicit praise and dispel blame. It appears, therefore, that 'explain' may be used in two divergent senses. In one sense, to "explain" an event E, is to causally explain the occurrence of E. In a second sense, to "explain" an event E is to justify one's 70 doing of E. Considering this ambiguity, we may once again reformulate the suppressed premise of the RP as: (RP-4"') Causal explanations are incompatible with rational justifications. It is, specifically, this proposition, (RP-4"'), that we claim to be false. (RP-4"') could be successfully defended (as being true) only if it could be demonstrated that a causal ex- planation of an event E (in Malcolm's words) "rules out the possibility" of there being a rational/purposive justification of E. 
So, we must ask if this can be demon- strated. One approach, in the attempt to demonstrate that causal explanations rule out the possibility of there being rational justifications, would seem to be to argue that there are not two senses of 'explanation,‘ but rather there are two "kinds" of explanation, and these two kinds of explanation are incompatible. Malcolm takes this ap- proach with regard to purposive explanations. His View is that purposive explanations are a subset of rational explanations, since one way in which one can rationally explain one's actions is to indicate one's purpose or in- tention in performing one's actions. For Malcolm, as we have seen, purposive explanations explain actions, whereas, 71 neurophysiological explanations explain movements. Both explain behavior. Malcolm, however, notes that, . . . we can say this only because we use the latter word [i.e., 'behavior'] ambiguously to cover both actions and movements. To generalize, Malcolm subscribes to the proposition: Purposive explanations differ in kind from neuro- physiological explanations. (3-2-1) As Malcolm states this contention, in one paper, If thought is identical with a brain process, it does not follow that to explain the occurrence of the brain process is to explain the occurrence of the thought. And, in fact, an explanation of the one dif- fers in kind from an explanation of the other. The explanation of why someone thou ht such and such in- volves different assumptions ang principles and is guided by different interests than is an explanation of why this or that process occurred in his brain. These explanations belong to different systems of explanation.36 Elsewhere, he clarifies what he means by different "sys- tems" of explanation; Malcolm holds, to be specific, that any neurophysiological explanation (NE) is of the follow- ing form: (NE-l) "Whenever an organism of structure S in in neurophysiological state q, it will emit movement m. 35[MAL 2], p. 338. 36[MAL 4], p. 179. The italics are Malcolm's. 72 (NE-2) Organism 0 of structure S was in Neurophysio- logical state q. (NE-3) Therefore, 0 emitted In.37 Any purposive explanation (PE) is considered, by Malcolm, to be of the following form: (PE-l) Whenever an organism 0 has goal G and believes that behavior B is required to bring about G. 0 will emit B. (PE-2) 0 had G and believed B was required of G. (PE-3) Therefore, 0 emitted B.38 Potential difficulties regarding tense shifts, in both NE and PE, could be easily avoided by a simple restatement of both of the explanation forms in terms of an arbitrary time "t." Hence, we shall not comment on difficulties regarding tense. Other difficulties, however, remain. In particular, it does not follow, from there being two kinds of events, that there are two kinds of explanations. Specifically, .Malcolm argues that, where 'PE' and 'NE' are defined as above, (1.) PE's explains actions. 37[MAL 2], p. 335. 38Ibid. 73 (2.) ME's explains motions. (3.) A's differ in kind from M's. (4.) Therefore, PE's differ in kind from NE's. This argument form can be shown to be not valid. We may generalize the argument form as: (1.) E1 explains X. (2.) E2 explains Y. (3.) X's differ in kind from Y's. (4.) Therefore, El's differ in kind from Ez's. where El and B2 are explanations and X and Y are events. Specifically, let X designate the rising of the ocean tides, and let Y designate the revolution of the moon around the earth. Also, let El be a Newtonian explanation of X, and let E2 be a Newtonian explanation of Y. 
Then, certainly it is clear that X and Y can be said to differ in kind, but it cannot be said that E1 and E2 differ in kind. So we may conclude that Malcolm cannot validly in- fer that PE's differ in kind from NE's from the fact that they are used to explain different kinds of events. Malcolm may still be able, however, to demonstrate that PE's differ in kind from NE's by some other means. Such a demonstration, however, will have to contend with the views of E. Nagel, M. Schlick, C. Hempel and others; these methodologists have convincingly argued that 74 teleological and functional explanations do not differ in kind from other covering-law explanations. Malcolm argues that the PE differs from the NE in 39 First, purposive explanations three essential respects. are ordinarily not organized into theories whereas neuro- physiological explanations are. Secondly, purposive ex- planations employ the concept of "purpose or intention" whereas Neurophysiological explanations do not. Finally, purposive explanations do not make use of contingent, causal laws whereas neurophysiological explanations do. The first difference does not seem to be a crucial one. We may argue that while ordinarily PE's are not axiomatized, there does not seem to be a reason for say- ing that they cannot, in principle, be so organized. Malcolm's construction of a formal PE seems to support this contention. Regarding the third difference, Malcolm holds that specification of an "implicitly understood" clause regard- ing the absence of "countervailing circumstances" makes (PE-l) an "a priori" principle, while (NE-l) is, at best, 40 This, presumably, is one a contingent, causal law. basic reason for holding a PE is different "in kind" from an NE. Malcolm's terminology, on this point, however, is open to suspicion. For example, he regards (PE-l) as being 39See [MAL 2], p. 335. 4OSee [MAL 2]. pp. 335-337. 75 "a priori" rather than using the, in some respects, stronger term 'necessary.‘ Yet, he contrasts (NE-l) with (PE-l) by noting that (NE-l) is "contingent," rather than using the weaker term 'a posteriori.‘ If one grants any merit to W. V. O. Quine's remarks regarding this entire 41 then Malcolm's argument, in this set of distinctions, instance, is doubly suspect. In any event, it seems unlikely that the introduc- tion of a provision regarding countervailing circumstances could adequately distinguish between the PE and the NE. It seems apparent that the NE could likewise be supple- mented with an appropriate ceterus parabus clause which would seem to put (NE-l) and (PE-l) on the same footing. It seems clear that, for Malcolm, the major differ- ence between the PE and the NE is that the PE uses the con- cepts of "purpose" and "intention" whereas the NE does not. It is apparent that, for Malcolm, an explanation of purposive/intentional activity is emergent with respect to a neurophysiological explanation. This aspect of Mal- colm's vieWpoint is, in fact, generalized for any instance of PE, not just those involving uniquely human purposive/ intentional activities. Specifically, note that the PE is worded in terms of "organisms" in general and is thereby not restricted to only explanations of human activities. 41See [QUI l], passim. 76 We take reductionism, and correspondingly, emer- gentism, to be characterized in terms of certain formal relationships between theories. We presume that a PE or an NE would include the laws of some respective theories. 
In the case of a PE, the laws of a purposive/psychological theory (PT) would be used, which employs concepts such as "intention," "purpose," "rationality," "intelligence," etc. Similarly, an NE would be based upon the laws of a neurophysiological theory. Hence, a PT would be reduced to an NT if and only if all of the primitive terms of PT were definable by means of the primitive terms of NT and the theorems of PT were derivable from the axioms of NT. Malcolm, however, holds that no PT is reducible to any NT. Specifically, he indicates that terms such as 'purpose' or 'intention' could not be defined by means of the terminology of any NT. It follows, then, that some of the vocabulary of any PT is emergent with respect to any NT.

We are in essential agreement with Malcolm on this point. We differ with him, however, in holding that both a PE and an NE could explain the same event, and explain it sufficiently. Where the event in question is E, we have the result that PT and NT may be satisfied by the same relational system and that E is an (output) state (or internal state) of that relational system. Hence, we have the situation indicated in Figure 3.2-1.

(Figure 3.2-1. The figure indicates that NT (= SNT = T) and PT are satisfied by the same relational system, of which E = M0 is a state.)

So, our position is that PE and NE could independently and sufficiently "explain" the same event, while PT is emergent with respect to NT. Hence, in view of our observation that the term 'explanation' is ambiguous, and in view of our observation that a rational, purposive explanation (or, more properly, "justification") will be emergent with regard to an SNT, we conclude that an SNT, which causally explains a given event E, will not rule out the possibility of a rational justification of the same event E. Therefore, we conclude that an SNT is not incompatible with the existence of rational "justifications"; consequently, we conclude that the proposition (RP-4"') is false.

Since (RP-4"') has been found to be false, it is apparent that the RP cannot be maintained. To this extent, we conclude that Mechanism1 has not been shown to be self-refuting, and hence is still viable. It should be recalled that Mechanism1 is really a quite weak claim, namely that every human bodily motion is causally explicable. As we have attempted to demonstrate, Mechanism1 does not rule out the possibility that some of these motions may also be rationally justifiable. Therefore, there may be two "explanations" for the same set of movements, one a causal SNT and the other a (rational, intentional, or purposive) justification.

§3.3 THE DESCRIPTION PARADOX

One of the most frequently mentioned distinctions between social science and natural science is that the laws of social science may become either invalid or valid if they become a matter of public knowledge. E. Nagel suggests two sorts of instances in which this might be the case.42 First, a prediction in the social sciences, which is valid at the time at which it is proposed, may be falsified at a future time, if future actions are made by agents who have knowledge of the social science theory which is used in making the prediction. Nagel provides the example of the 1947 recession which was predicted by economists, but failed to materialize because of the common knowledge of the prediction. Secondly, there are self-fulfilling prophecies which might occur in the social sciences. A prediction which is not valid at the time at which it is proposed may be made valid, in a particular instance, if actions of agents who have knowledge of the prediction make it valid.

42 See [NAG 2], pp. 466-473.
For example, the United States Bank, as Nagel notes, was in no financial trouble in 1928, but trouble was falsely predicted. Depositors reacted on the basis of the false prediction, thus making it true.

It may be the case that an SNT is in a situation similar to that of a theory in the social sciences, in that it may become invalid if it becomes known. The features of an SNT that might make it invalid if known are, however, somewhat different from the features of social science theories that have this result. Theories, as well as being used in explanations and predictions of events, may also be used to describe them. Also, as we indicated in §2, a brain can be regarded as "modeling" its own environment. If an SNT were in existence, it would constitute a portion of the environment of some brain, and hence, would be "modeled" by some brain. This situation prompts the Description Paradox (DP), which may be formulated in the following manner.

Let 'BS' abbreviate 'brain state' and, accordingly, let 'BSn' abbreviate 'brain state n', where BSn occurs at some specifiable time t. Then, we have:

(DP-1) Mechanism1 is true. [Assumption]

(DP-2) Then, it is logically possible for there to be an SNT, such that for every BSn, SNT can be used to explain BSn.

(DP-3) Let B1 be a brain which is in BSn, and assume that SNT explains B1Sn.

(DP-4) Hence, SNT also describes B1Sn.

(DP-5) But, a brain which has received a description of itself cannot be in the brain state described.45

(DP-6) Assume that B1 has received SNT.

(DP-7) Then, it is not the case that SNT describes B1Sn. [From (DP-5) and (DP-6)]

(DP-8) SNT describes B1Sn and it is not the case that SNT describes B1Sn. [From (DP-4) and (DP-7)]

(DP-9) Therefore, not (DP-1), i.e., Mechanism1 is false.

45 For a detailed discussion of this point, see [MAC 1], p. 395.

Clearly, the conclusion is supposed to follow by indirect argument.

It should be immediately noted that, in our weak version of mechanism, namely Mechanism1, we supposed that an SNT would have to pass Russell's Test with respect to overt bodily movements. The motion in question would presumably be the final stage of a causal chain having as one of its members a corresponding brain state. So, for clarification, and to bring the DP into perspective with our limited conception of Mechanism1, we should amend the DP with:

(DP-3') SNT predicts En, where En is an overt bodily motion, and En is the last member of the causal chain of which B1Sn is a member.

If (DP-3') is accepted, it seems trivial that (DP-3) holds.

One initial approach to the DP might be taken in terms of chronological factors. Let 'D' abbreviate 'self-descriptive data provided by an SNT.' Suppose that, at a given time t, SNT describes B1Sn. Then, at t + 1, B1 receives D, which results in B1Sn+1. Hence, SNT does not describe B1Sn+1. So, suppose that SNT is up-dated to form SNT'. Then, at t + 1, we have SNT' describes B1Sn+1. But, suppose that, at t + 2, B1 receives D'. Then, B1's reception of D' results in B1Sn+2. Hence, SNT' does not describe B1Sn+2. We can again up-date SNT' to form SNT", but this seems to result in an infinite regress.
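The up-dating regress just described can be made vivid with a toy computation. This is only an illustrative sketch under invented assumptions: brain states are collapsed into integers, and 'describing' and 'receiving' are arbitrary stand-in functions; nothing in the argument depends on these details.

```python
# Toy illustration of the up-dating regress: receiving a description of the
# current state produces a new state, so any description in hand always lags
# one step behind.  The functions and the integer encoding of brain states
# are invented for illustration only.

def describe(state: int) -> str:
    """Stand-in for D, the self-descriptive data an SNT would supply."""
    return f"B1 is in state {state}"

def receive(state: int, description: str) -> int:
    """Stand-in for the transition caused by receiving D; any map that
    changes the state will do."""
    return state + len(description)

state = 0
for step in range(5):
    d = describe(state)        # the (up-dated) SNT describes the current state,
    state = receive(state, d)  # but receiving d puts B1 into a new state,
    print(step, repr(d), "-> current state is now", state)
    # ... so d no longer describes B1, and the SNT must be up-dated again.
```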
As we see it, the central question is whether B1 could be in a given brain state, B1Sn, and receive D at the same time. There are three things to consider here: first, D will undoubtedly be quite complex; secondly, if an NT is sufficient, and it explained a given BiSn, then it could predict BiSn+1, so we must ask whether a given Bi could "receive" a given D within a single BiSn; thirdly, we must ask whether a given Bi could keep up with an SNT in generating the series SNT', SNT", SNT"', . . . , since, for any BiSn, an SNT could predict BiSn+1. These questions are relevant to §4 and §5, as well as being relevant to the present section.

D. M. MacKay argues46 that an SNT can describe a B1Sn only if B1 does not receive D. We shall argue that B1 cannot "receive" D.

46 See [MAC 1], especially p. 395. MacKay's terminology, it will be noticed, is quite different from ours.

First, it should be emphasized what constitutes D. Assuming, for the sake of simplicity, that SNT conforms to the D-N model of explanation,47 the explanans of SNT will consist of a set of law-like generalizations, together with a set of explicitly specified conditions, the conjunction of which effectively describes B1Sn. The explanandum of SNT would be a proposition which describes B1Sn (as we have indicated, an SNT, in the extended sense, would have an overt bodily motion as the object of its explanandum).

47 See [HEM 1], p. 336.

As we shall demonstrate in §5, it is questionable whether a sufficient neural artefact can be constructed that is simple enough to be understood. Correspondingly, the theoretical model which a sufficient artefact satisfies would be of such a complexity that it may not be understandable, if by 'understandable' one means that the theoretical model could constitute the substance of a single thought (viz., a single BiSn). Clearly, an SNT would be as complex, structurally, as its theoretical model. So, correspondingly, it is questionable whether an SNT (considered, e.g., as the description of a given BiSn) could constitute the substance of a single thought (viz., a single BiSn). If this could not be the case, then a Bi could not "receive" a relevant D in the sense that is required by the DP. This general problem will be discussed in detail in §5; our present argument against the DP will, accordingly, be completed in that section. For the present, however, a few brief comments should suffice.

Let L1, L2, . . . , Ln be the laws of an SNT, and let C1, C2, . . . , Ci be the statements of conditions required to deduce a statement describing B1Sn. We presume that B1 could know or "receive" L1, L2, . . . , Ln. Hence, a person could have knowledge of the principles of his own neural operations. What the Brain-Model Problem questions, with regard to an SNT, is whether or not B1 could know or "receive" C1, C2, . . . , Ci within a single B1Sn. Therefore, we deny that B1 could "receive" D, where D is a total description of B1Sn of the sort indicated, and where D is "received" by B1 within a single B1Sn. This conclusion constitutes a denial of (DP-6). Without (DP-6), however, the DP fails. Consequently, if we are correct, the DP is not successful in refuting Mechanism1.

§3.4 DISCUSSION

We have attempted to demonstrate, in this section, that Mechanism1 is not self-refuting. In order to do this, we have considered three arguments to the contrary.

The Intentional Paradox was shown to be defective, because it is based upon the unproved and illicit assumption that Mechanism1 is incompatible with the existence of intentional behavior.
The Rational Paradox is mistakenly based upon the assumption that causality and rationality are incompatible. We also demonstrated that both the Intentional Paradox and the Rational Paradox fail to take account of the distinction between 'justification' and 'explanation.' Considering the ambiguity of 'explanation,' confusions seem to be inevitable.

Finally, the Description Paradox is based upon the apparently fallacious assumption that an SNT, considered as a total description of a given brain state, can be "received" by a given brain, from moment to moment and from brain state to brain state. The argument which we have presented against the Description Paradox will be completed in §5.

Since Mechanism1 has not been shown to be self-refuting by the arguments which we have considered, we may conclude that, to this extent at least, an SNT is logically possible. Granted this conclusion, we may proceed to consider the question of whether a sufficient neural artefact is logically possible.

§4.0 MECHANISM2 AND THE GÖDEL ARGUMENT

Gilbert Ryle mentioned, at one point, ". . . a tautology which is sometimes worth remembering," namely, "Men are not machines, . . . They are men . . . ."1 Certainly, it is clear that the proposition 'men are men' is a tautology, as is 'machines are machines.' It is not clear, however, that 'men are not machines' is a tautology, and it is even less clear that (as is often inferred from Ryle's tautology) 'men cannot be modeled (e.g., by "machines")' is a tautology. Since tautologies imply only tautologies, the inference from 'men are men' to 'men cannot be modeled' seems to be immediately suspect.

1 [RYL 1], p. 81.

Probably the strongest non-metaphysical argument that has been presented in favor of this inference, and against Mechanism2, is the argument based upon Proposition VI of Gödel (1931).2

2 See [GOD 1], p. 57.

As was the case with Mechanism1, we somewhat restrict the general doctrine of mechanism to Mechanism2. For our purposes, we take Mechanism2 to be the proposition that it is logically possible to construct a sufficient (physical) neural model (SNM), viz., an artefact that is sufficient to simulate all characteristically rational human brain functions. If there were an SNM, then it would, presumably, be able to perform the verbal, mathematical, and other "rational" behaviorisms that are usually associated with rational human beings.

What the Gödel Argument allegedly proves is that there is at least one thing that a human brain can do (aside from strictly biological functions) that no artefact can do. The conclusion which is drawn is that, while there may be an incomplete (physical) neural model, a sufficient, complete neural model is not logically possible. We shall examine this argument in the present section.

We shall begin by discussing neural artefact sufficiency. Sufficiency in an SNM has generally been regarded as being a more difficult concept than sufficiency in an SNT, but there seems to be no inherent reason why this should be the case. We shall, then, present the Mentalist2 Gödel Argument against Mechanism2, while citing some of the variations which have been made on the central argument. We shall, at that point, raise some objections to the Gödel Argument. Finally, we draw some general conclusions which are based upon the objections raised.
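Since the argument to be examined turns on Proposition VI, it may be convenient to record here the schematic form in which that result is usually invoked in this literature. What follows is a modern gloss rather than Gödel's own 1931 formulation; Gödel's hypothesis was ω-consistency, and the simple-consistency version is due to Rosser's later refinement.

```latex
% Schematic gloss of the first incompleteness result as it is usually cited
% against Mechanism_2 (not Godel's own 1931 formulation or notation).
\[
  \text{If } F \text{ is consistent, effectively axiomatized, and contains
  elementary arithmetic, then there is a sentence } G_F
\]
\[
  \text{such that } F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F .
\]
```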
§4.1 NEURAL ARTEFACT SUFFICIENCY AND THE CONCEPT OF "MACHINE"

The problem of establishing sufficiency criteria for a candidate SNM has received considerable attention. It is not at all surprising that this should be the case, since these criteria should closely parallel the criteria for human intelligence. A definitive account of human intelligence has not yet been produced, and accordingly, there are no universally accepted criteria for "machine" intelligence. Investigations of this dual criteria problem would very likely have ramifications in both psychology and in the philosophy of mind.

For our purposes, we shall treat the concept of "intelligence" in a rather cavalier manner, as seems to be the common practice in the sciences. We shall take (human) 'intelligence' to be operationally defined on the basis of stimulus-response correlations that are characteristic of normal, adult humans. Unfortunately, there is, at present, no such thing as a "normal adult artefact," so criteria that are less intuitively satisfying must be used to provide an account of what an intelligent artefact might be like. (It should, perhaps, be mentioned that some investigators in the field of neural modeling are involved with evolutionary models, e.g., as is M. Minsky; these models are used, at least initially, to simulate "immature" humans.) We, of course, must be on guard against an account of artefact intelligence which would definitionally rule out the possibility of there being an SNM.

Probably the most famous criterion for an (intelligent) SNM is Turing's Test.3 Turing suggests a hypothetical situation in which a person, viz., an "interrogator," is placed in an enclosed room. In an adjoining room, there may be either another person or a "machine" (i.e., a candidate SNM). Communication between the two rooms is facilitated by two console typewriters. The interrogator is challenged to determine, by typing out questions and receiving printed-out replies, whether the replies which he receives are made by a person or by a "machine." If the interrogator is unable to distinguish between human-answers and machine-answers, then the "machine" is said to have passed the test, and in our terminology, constitutes an SNM. More precisely, we shall say that, where 'NM' =df 'neural model' and 'SNM' =df 'sufficient neural model,' we have:

For every NM, NM is an SNM if and only if NM passes Turing's Test.    (4.1-1)

3 See [TUR 1], passim. See especially pp. 4-5. (We shall use the pagination in [AND 1] throughout.)

We shall, shortly, restrict Turing's Test situation to strictly computational matters.

The mechanics of the testing situation, viz., the room, the typewriters, etc., are non-essential. As far as intellectual capabilities are concerned, it does not matter if the "machine" is in humanoid form, viz., is an android,4 or if it has an appearance that is similar to that of a present-day digital computer. For Turing, if a "machine," regardless of its physical appearance or mode of communication, were able to pass for a human being (i.e., for some human being), that is to say, if it could pass Turing's Test, then it could be considered to be an SNM. Turing, in fact, suggests that a question such as "Can a machine pass Turing's Test?" is more coherent than the question "Can machines think?" and, hence, should replace it.5 Turing's Test, however, does have some defects.6

4 A Wittgensteinian might disagree: see [WIT 1], §§281-283.
5 See [TUR 1], pp. 5-6.
6 A critical commentary of Turing's Test may be found in [GUN 1]. A somewhat qualified defense may be found in [SCR 1].

One of the initial difficulties is the presence of a confusion which results from an inadequate specification of what it is that an SNM models. As with the SNT, we somewhat restrict what would be minimally required of an SNM. Specifically, it seems enough that an SNM be able to simulate the verbal/written behavior that is characteristically generated by normal human brains. This is simply another way of saying that the input-output correlation in an SNM should correspond to the input-output correlation of a normal human brain with respect to verbal/written responses.

Some of the most often used criticisms of this and similar criteria are to the effect that an SNM could not be expected to exhibit certain esthetic sensitivities, e.g., an SNM could allegedly neither write nor appreciate poetry. Of course, there are many people who can neither write nor appreciate poetry. Hence, it does not seem that the charge that artefacts cannot write or appreciate poetry adversely affects the possibility of an SNM. Minimally (at least according to Turing's Test), an SNM must be sufficient to model some person. An "uncultured" SNM may model only the CNS of uncultured people, but it seems clear that an "uncultured" SNM is still an SNM, since it could pass Turing's Test. Also, recent findings seem to indicate that this charge is ill-founded, but human intellectual behavior that might be termed "esthetic," or that would fall under various similar designations, need not be a part of our present discussion. The reason for this is that the Gödel Argument will allegedly be effective even if extremely severe limitations are placed on the sort of behavior that is required of an SNM.

Specifically, we may restrict the verbal behavior that is required of an SNM to computational matters. Let us say that an NM is sufficient if and only if it displays what might be called "mathematical intelligence." Turing machines, for example, are often said to display such intelligence. So, we may further clarify our sufficiency criterion in the following manner: Where A is an NM and P is a person (or, more specifically, P is a human CNS), then we may say that A sufficiently models P if and only if A can compute everything that P can compute.

This criterion may be expressed by the following formulae, where 'S(a)' is interpreted as 'a is sufficient,' 'M(a,b)' is interpreted as 'a is modeled by b', and 'C(a,x)' is interpreted as 'a can compute x'. Using these conventions, we have (where A is an artefact and P is a person):

M(P,A) ≡ (x)(C(P,x) ⊃ C(A,x))    (4.1-2)

S(A) ≡ (∃P)(M(P,A))    (4.1-3)

Intuitively, these formulae mean that, in Turing's Test situation, A (the "machine") passes the test if and only if there is at least one person such that the interrogator is unable to distinguish between that person and A. For our purposes, of course, the person, P, would minimally be a normal human adult, and the interrogator would be restricted to questions of a computational nature.
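The restricted criterion can be given an operational reading. The sketch below is only an illustration of that reading over a finite battery of computational questions; every name in it (Answerer, battery, sufficiently_models, is_snm) is invented here, and a finite battery can at best approximate the unrestricted quantifier in (4.1-2).

```python
# Hedged illustration of (4.1-2)/(4.1-3) restricted to a finite battery of
# computational questions.  In the formulae the quantifier ranges over
# everything P can compute; comparing answers on a finite battery is how an
# interrogator would check this in practice, so this is a sketch of the
# criterion, not a decision procedure for it.

from typing import Callable

Answerer = Callable[[str], str]   # maps a typed question to a typed reply


def sufficiently_models(artefact: Answerer,
                        person: Answerer,
                        battery: list[str]) -> bool:
    """(4.1-2): A models P iff A matches P on every computational question."""
    return all(artefact(q) == person(q) for q in battery)


def is_snm(artefact: Answerer,
           people: list[Answerer],
           battery: list[str]) -> bool:
    """(4.1-3): A is sufficient iff there is at least one person it models."""
    return any(sufficiently_models(artefact, p, battery) for p in people)
```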
In a more generalized Turing Test situation, it might be argued that A is comparable to a very sophisticated phonograph, but that A no more understands its outputs than a phonograph does.7 If the "understanding" in question is taken, in some obscure sense, to be a distinction that is characteristic only of human behavioral outputs, then this might arbitrarily rule out the possibility of an SNM "understanding" anything. It should be noted that it is tautologous that human outputs are human outputs, and, likewise, SNM outputs are SNM outputs. The most that can reasonably be expected of an SNM is that its outputs be similar to human outputs.

7. This seems to be the sort of objection that would be raised by Malcolm and others who share his views. See [MAL 3], passim, and [ROO 1], passim.

A strong argument may be raised against Turing's Test on the basis of the question of whether or not an artefact can meaningfully be said to "understand" anything, or, more generally, be said to be "intelligent." A Malcolmian, for example, may be unwilling to permit Turing to exchange the question "Can artefacts think?" with the question "Can any artefact pass Turing's Test?" It may be objected, in particular, that this latter question illicitly avoids the problem of deciding whether or not an artefact can be said to "think" or be "intelligent." If a Mechanist2 answers the question, "Can an artefact pass Turing's Test?," affirmatively, he would, presumably, want to hold that this affirmative answer implies that artefacts can, in some sense, be said to be "intelligent." A Malcolmian, however, may well object that this move begs the question.8 Specifically, it might be objected that one could "be mistaken" in Turing's Test situation. It is alright, in Turing's view, to speak of the interrogator as being mistaken merely about whether a person or a "machine" was in the other room, but it is potentially devastating to Turing's Test, and to Mechanism2, to speak of the interrogator as being mistaken about whether or not the object in the next room was thinking.

If a person (an actor, say) were pretending to be in pain, one would not ordinarily deny that he is intelligent, nor, for that matter, would one deny that he is behaving intelligently. If, on the other hand, an artefact (say, an android) were pretending to be in pain, one might deny, both that it is in pain (or could be in pain) and that it is intelligent, without expecting to be contradicted. The point that we wish to make here is that there is a difference between the artificiality of an actor's performance (when he is acting, of course) and the artificiality of an artefact's performance. An actor can be said to be performing both artificially and intelligently, while an artefact may be regarded as performing artificially and non-intelligently.

(Figure 4.1-1: a person (CNS) receives a genuine stimulus and gives a genuine response; an actor receives a simulated stimulus and gives a feigned response; an artefact (SNM) receives a programmed stimulus and gives an artificial response.)

There are two basic reasons why this would be the case: one reason is related to the criteria for understanding/intelligence/thinking/etc., and the other reason is concerned with what we shall call the "anti-machine bias." It is held, by various investigators in mathematical neural modeling, that the distinction between human beings and present-day neural models (for example, appropriately programmed digital computers), in terms of "intelligence," is a matter of quantity, not of quality.9

9. See, for example, Turing's "Critical Mass Thesis" in [TUR 1], pp. 25-26. See also [ARB 1], p. 140, and [MIN 4], passim.
So, if this is correct, a sufficiently sophisticated artefact, if one were to be produced, would presumably be termed "intelligent."

What is at issue, however, is whether or not an artefact can meaningfully be called "intelligent." This question may be formulated in various ways, for example, "Assuming that artefacts are (capable of being) intelligent, are they (capable of being) intelligent in the same sense that humans are intelligent?" or "Is anything permitted to count as an (intelligent) SNM?" Questions of this sort require a decision, not a discovery.10 The decision which is made will inevitably depend upon intelligence criteria and modeling sufficiency criteria.

10. Authors on both sides of the Gödel Problem issue agree on this point. See, for example, [PUT 1] and [REM 1].

At present, the most widely accepted criteria for intelligence/understanding/etc. are overt, public, behavioral criteria. Since, given our restrictions, the output of an SNM would be entirely overt, this sort of criteria should be acceptable and readily applicable. This should especially be the case when the outputs in question are restricted to computational matters.

The relationship which obtains between "intelligent" human behavior and "intelligent" artefact behavior is indicated in (Figure 4.1-2). In (Figure 4.1-2), let 'PT' and 'E' be interpreted as in §3. We take E and M0 to be the same bodily event, explained respectively by PT and SNT, and we take A to be the model-event corresponding to E = M0. Admittedly, there is a potential confusion on this point, since explanation instances explain discrete events which occur in physical relational systems; the relational systems, in this instance, are a brain and a neural artefact respectively. For present purposes, using, as we have, Russell's Test and Turing's Test, we take E, M0, and A to denote specific events (in (Figure 4.1-2)) which are explained respectively by PT, SNT, and MT.

(Figure 4.1-2: the event E, explained by PT, is identical with the brain state M0, explained by the SNT; M0 corresponds to the artefact state A, explained by the MT (= T1).)

Normally, in Malcolm's view, for example, a person is said to understand (what he is saying) only if he can, by publically observable behaviorisms, for example, pointing, demonstrate that he understands.11 Presumably, he would have to point at things and name these things consistently over a period of time (or, at least, until all doubt was erased on the part of observers), in order to successfully demonstrate his understanding. The problem of humanoid androids has been somewhat disconcerting to people who share Malcolm's viewpoint, since a "cleverly" designed android could conceivably fool anyone but its manufacturer for a long period of time, or at least until, for example, it was dismantled in the presence of observers who had previously called it "intelligent." At that point, like the interrogator mentioned above, the observers, if they shared Malcolm's view, would say that they had been mistaken (in the second sense which we indicated).

11. See [MAL 3] and [ROO 1].

It is not clear to us what conclusion should be drawn from this. It seems that the whole question of sufficiency and intelligence criteria is in great need of further research, as we shall urge in §5, both in terms of human intelligence and in terms of artefact sufficiency.
One conclusion that could be drawn from the difficulty which we have been discussing is that artefacts may be "intelligent," but the intelligence in question may not be identical to human intelligence.12 The distinction we should make, then, is between "real" human intelligent behavior and artefact modeled behavior.

12. To be more "philosophical," one might want to say that artefact "intelligence" is in a different category from human intelligence.

It should be clear that an M1 may model an M0 without being identical to it; indeed, a large part of the value of an M1 is that it is not identical to M0 (if it were, it would be trivial in the worst sense, since every relational system is isomorphic to itself). If, for example, an adequate mechanical model were constructed to model an electromagnetic system (as was, for example, attempted by Maxwell), it would not be meaningful or constructive to criticize the mechanical model by saying that its "behavior" was not identical to the behavior of the electromagnetic system which it models. It is our contention that claiming that an SNM is not "intelligent" is similar to claiming that a mechanical model of an electromagnetic system is not "electromagnetic." We conclude that the outputs of a model are not necessarily identical to the outputs of the system modeled. What is at issue, then, is not whether the distinctive attributes of the system modeled, e.g., "intelligence," should be applicable to the respective model, but, rather, whether a given model is heuristically valuable or instrumentally valuable.

Turing suggested, as we have noted, that the question "Can artefacts think?" should be replaced by the question "Can an artefact pass Turing's Test?" The Malcolmian objection, however, showed that this question is also inadequate. This objection, however, was based upon confusions regarding model theory. As we see it, artefacts cannot think, if one definitionally restricts 'thought' to 'human thought.' Artefacts (if they are SNM's) do not "think," in this sense [by definition]; they model thought. What is minimally required of an SNM is that its modeled thought converge upon human thought.

Hence, we suggest that Turing's question "Can an artefact pass Turing's Test?" should be replaced by a question that is phrased in terms of modeling. The question that we submit is: "Can an artefact be constructed which can model human thought processes?"

Ideally, we would like to simply speak of the modeling sufficiency of an SNM in terms of Turing's Test. As we have seen, however, Malcolmian objections seem to indicate that the designation 'intelligent' may be withheld from an SNM, even if it had passed Turing's Test.

Basically, a Mentalist2 may argue for the contention that no artefact is capable of being "sufficient," in the sense that it can be "intelligent" or can "think," in the following manner:

(1.) If A is an SNM, then A is "thinking" (or "intelligent").
(2.) But, only humans are "thinking" (or "intelligent").
(3.) A is not a human.
(4.) Therefore, A is not "thinking" (or "intelligent").
(5.) Therefore, it is not the case that A is an SNM.

Clearly, this argument may be reiterated for any A (where A is a candidate SNM). We have attempted to refute this argument by denying step (2.), i.e., we deny that only humans are (or can be) "intelligent." We observe, specifically, that (2.)
holds only if 'intelligence' or 'thinking' is defined to apply only to humans; this definitional restriction results from a confusion regarding modeling relations.

If we take 'thinking1' to designate human thought processes and 'thinking2' to designate neural artefact processes, where 'thinking2' =df 'modeled (human) thought,' then the interrogator in Turing's Test situation can claim to be mistaken about there being a thinking1 thing in the other room without destroying the intent of Turing's Test. We may rephrase his remark as "I was mistaken; there was not a thinking1 thing in the other room, but there was a thinking2 thing, viz., an SNM." Since 'thinking1 thing' is synonymous with 'person' and 'thinking2 thing' is synonymous with 'SNM,' we may conclude that our question "Can an artefact be constructed which can model human thought processes?" may be equivalently phrased as "Can a thinking2 artefact be constructed?" or, alternatively, "Is a thinking2 artefact logically possible?"

When "thought" is restricted to computational intelligent behavior, then it appears that the convergence between thinking1 and thinking2 is nearly complete. It, in fact, does not seem immediately clear that Malcolmian criteria would be adequate to distinguish between humans and artefacts when only computational behavior is used as a basis for comparison. Both humans and artefacts would presumably have quite similar modes of communication, viz., for the most part, computational communication of any appreciable level of sophistication would be made by means of symbols written on paper, a scratch pad for a human and a print-out for a machine. Pointing and naming, as overt criteria for "understanding" and intelligence, would seem to be less relevant here than they would be in determining "understanding" of a more general sort. What is important here, as a criterion for understanding, is whether or not computational principles (e.g., axioms and postulates) can be applied in various circumstances and, accordingly, in various ways. It is argued by proponents of Mechanism2 that, at present, machines can be programmed to apply principles in various ways, and therefore "understand" the principles, in this sense.13 Apparently, then, artefact understanding is, to a large extent, convergent with human understanding in this respect.

13. See [MIN 4], [ARB 1], [NEU 1], and [WIE 1].

Therefore, an operational definition of 'intelligence' or 'understanding,' with respect to computational matters, might be at least convergent for both humans and artefacts. Considering our stated objective, however, it seems adequate to operationally define 'artefact (computational) intelligence' in terms of human (computational) intelligence. In order to do this we shall say, where 'I(a)' is interpreted as 'a is intelligent,'

I(A) ≡ S(A) (4.1-4)

Hence, we have [by (4.1-2), (4.1-3), and (4.1-4)],

(∃P)(I(A) ⊃ (x)(C(P,x) ⊃ C(A,x))) (4.1-5)

The Gödel Argument is specifically used to deny propositions of the form (4.1-5) by falsifying the consequent.

We want to mention, finally, the anti-machine bias that might serve to definitionally rule out the possibility of an SNM by permitting nothing [by definition] to count as an SNM. What must be guarded against is an unwarranted assumption regarding the limitations of modeling. Apparently, one of the reasons that it is assumed that the human brain cannot be sufficiently modeled is that the existence of such a model might challenge human dignity.
For somewhat similar reasons, the findings of Copernicus were denied. At any rate, some find it demeaning that a mere "machine" could model human rationality. Most often, theological and metaphysical doctrines are the bases of such feelings; we, however, choose not to consider such issues in this study. Suffice it to say that, as we mentioned earlier (in §2), a great deal of the mind-body problem, as well as the anti-machine bias, is a result of misconceptions which are based upon Seventeenth Century mechanism, as a result of which a strict dichotomy was formed between "lifeless," inert matter and "vital" spirits. The dichotomy rests on a confusion, because later findings have indicated that matter is not inert, in the Seventeenth Century mechanist's sense; as a result of these findings, the meanings of the terms 'matter,' 'machine,' etc. have undergone radical changes. Confusion results from using Seventeenth Century meanings in Twentieth Century contexts. Since "machines" are made of matter, and, hence, are, in the Seventeenth Century mechanist's view, "lifeless" and "inert," they are considered to be incapable of modeling human rationality. It should be noted, however, that, as in the case of the concept of "matter," the concept of "machine" has undergone a remarkable transformation, particularly within this century.14 The transformation is so striking, in fact, that (as Toulmin has indicated) if Descartes were to be confronted with a Twentieth Century computer, he might well remark, "That is not what I mean by a 'machine.'"15

14. For some excellent discussions of this point, see [TOU 3], [SMA 3], [TOU 2], and [TOU 5].
15. See [TOU 3], p. 825.

The term 'machine' has nevertheless retained some of the emotional stigma that was placed on it by Seventeenth Century mechanism. It is primarily for this reason that we prefer the more benign and potentially less offensive term 'artefact.' It is also to our advantage that we have introduced 'artefact' as a technical term, i.e., as being synonymous with 'physical model' (as explicated in §2.0).

In summary, then, the logical possibility of Mechanism2 depends on the answer to the question of whether or not neural artefacts can think2 (or, alternatively, are intelligent2, where the concept of "intelligence2" corresponds in an obvious manner to the concept of "thinking2"). We have established that artefacts do not, and cannot, think1. Hence, the Gödel Argument will be effective only if it can successfully deny the possibility of a thinking2 neural artefact.

Since artefact and human computational intelligence seem to converge, however, it appears that the Gödel Argument can be used to deny the possibility of a thinking2 artefact. We allow, therefore, that the Gödel Argument may be applied to the appropriate issue (i.e., the issue of whether or not artefacts can possibly be intelligent2) and, hence, is potentially devastating to Mechanism2. So, while making use of our working definition of neural artefact sufficiency, we now turn to the Gödel Argument itself, together with some of its more significant variations, in order to determine its logical structure and its effectiveness.

§4.2 THE GÖDEL ARGUMENT

Gödel's first theorem states, in effect, that for every system that is both consistent and expressively powerful enough to produce elementary arithmetic, there are well-formed formulae which are true under the intended interpretation, but which cannot be proved in the system, hence making the system incomplete.
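Stated schematically, in notation not used elsewhere in this study (a compressed restatement of the theorem itself, not of its proof), the result reads as follows, where 'Prov' is the arithmetized provability predicate for the system in question.

```latex
% Gödel's first incompleteness theorem, schematically, for a formal system S
% that is consistent and strong enough to express elementary arithmetic.
% (amssymb assumed for \nvdash, \ulcorner, \urcorner)
\[
  \text{If } S \text{ is consistent, then there is a sentence } G_S \text{ with }
  S \nvdash G_S ,
\]
\[
  \text{where } G_S \text{ is so constructed that }
  G_S \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G_S \urcorner)
  \text{ holds, and } G_S \text{ is true under the intended interpretation.}
\]
```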
We presume that Gödel's proof is sufficiently well known that a recasting of it here would be superfluous.16 Suffice it to say that Gödel demonstrated how, for every system, S0, which is powerful enough to produce simple arithmetic, an arithmetical formula, G0, can be constructed which represents the metamathematical statement 'G0 is not provable in S0,' which is not provable in S0, but which is true.

16. Gödel's own proof may be found in [GOD 1]. Other accounts of his proof are available in [KLE 1], especially pp. 204-213 and 298-316; [MEN 1], pp. 142-149; [CAR 1], pp. 129-134; [QUI 1], Chapter 7; [QUI 2], pp. 245-248; and [CHW 1], pp. 61-68. Less rigorous accounts may be found in [ARB 1], pp. 119-138, and [BEN 1], pp. 14-17. Informal, intuitive accounts of Gödel's proof may be found in [KLE 1], p. 205; [BOH 1], passim; [ROS4 1], pp. 155-24; [NAG 1], passim; and [DeL 1], passim, but especially pp. 59-60.

The Gödel Argument, which has been presented by such people as Rosenbloom, K. R. Popper, E. Nagel, J. Kemeny, J. Lucas, S. Jaki, and others, makes use of Gödel's findings by suggesting that for any candidate SNM, a human brain can always produce a formula, namely a Gödelian formula, that cannot be produced by that given candidate SNM. If this is correct, then there is always at least one thing that a human brain can do that any given artefact cannot do.

There have been, since Rosenbloom's first formulation of the argument17 in 1950, several variations of the argument which differ from one another in some matters of significant detail. Accordingly, we shall present first a generalized formulation of the argument and then we shall indicate some of the most significant variations. Some of the more prominent variants of the GA, in their original form, have been placed in Appendix II. We shall refer to them in pointing out differences. Finally, we shall demonstrate an uninteresting way in which the GA may be temporarily avoided.

17. See [ROS2 1], p. 208.

In a generalized form, the GA may be structured as follows:

(GA-1) The human brain can produce elementary arithmetic.
(GA-2) Therefore, in order for an artefact, Ai, to model a human brain, it must be able to produce elementary arithmetic.
(GA-3) Assume that P is a person such that an artefact A0 models P (in the sense that A0 can compute everything that P can compute).
(GA-4) Artefacts are definitionally regarded as being instantiations of formal systems, i.e., for every artefact, Ai, there is a formal system, Si, such that Ai satisfies Si.
(GA-5) Assume that A0 is a candidate SNM and that it satisfies a formal system S0.
(GA-6) Assume that S0 is consistent.
(GA-7) Then, a Gödelian formula may be constructed in S0; let G0 be that formula.
(GA-8) G0 cannot be proved in S0. [By Gödel's First Theorem, i.e., Prop. VI.]
(GA-9) Therefore, A0 cannot produce G0 as being true.
(GA-10) But, any "rational being," for example, P, could, if he knew Gödel's proof, convince himself that G0 is true, even though G0 is not provable by A0.
(GA-11) Therefore, for every artefact Ai there is a true proposition which it cannot prove (or "produce as being true") but which a human brain, viz., P, can prove (or "produce").
(GA-12) Therefore, it is not logically possible that an artefact can sufficiently model the human brain.

The logical structure of the GA, as we have presented it, is most similar to the second formulation given by J. R.
Lucas in his 1961 paper.18 Lucas' (second) formulation of the argument is probably the most comprehensive one that has been presented (it is rivaled only by Jaki (1969)19). The primary objective of Lucas' paper was to set out the GA in some detail and thereby attempt to convince logicians (and others) who were suspending judgment regarding the impossibility of an SNM until they had seen a well-structured argument.

18. See [LUC 2], pp. 44-47. Also, see APP. II, (D)(1.).
19. See [JAK 1], pp. 215-216, and APP. II, (E)(1.).

Lucas cites the Nagel-Newman version of the GA as his primary source.20 While the Nagel-Newman formulation of the GA is not actually an argument, much of the intent of Lucas' formulation is derived from it. There are notable differences as well, however. Nagel and Newman make a much more modest claim than the one which Lucas makes. Lucas wants to rule out the possibility of any artefact whatever being an SNM, while Nagel and Newman restrict their results to "currently conceived artificial machines."21

20. See [NAG 1], pp. 100-101, and APP. II, (B).
21. [NAG 1], p. 101. See also APP. II, (B), (N-6).

At any rate, the following table indicates the correspondence between the steps of the GA and the steps of the original arguments, as listed in Appendix II. It should be noted that we have, insofar as is possible, made explicit, in the GA, some steps that are either implicit or suppressed in some of the original arguments. This should serve to make the GA at least somewhat more coherent than most of the original arguments, and thereby, the GA may constitute a stronger attack upon Mechanism2.

Table 4.2-1. Correspondence between the steps of the GA and the steps of the original arguments (Appendix II).

GA step          Corresponding steps of the original arguments
(GA-1)           Implicit in all; most explicit in (R-1)
(GA-3), (GA-4)   (N-1), (K-1), (L1-2), (L2-1), (J1-1) - (J1-3)
(GA-5)           (N-1), (K-1), (L1-2), (L2-1), (J1-1) - (J1-3)
(GA-6)           Implicit in all; explicit in (L1-3), (L2-2, 3), (J1-2)
(GA-7)           (R-5), (N-2), (K-3), (L1-3), (L2-7), (J1-5)
(GA-8)           (R-4, 5), (N-2), (K-3), (L1-3), (L2-8), (J1-5)
(GA-9)           (L1-3), (L2-9)
(GA-10)          (R-2, 3), (N-6), (K-6), (L1-3), (L2-10), (J1-6, 7)
(GA-11)          (R-4, 5), (N-6), (K-6), (L1-3), (L2-11), (J1-7)
(GA-12)          (R-5), (N-7), (K-7), (L1-4, 5), (L2-12, 13), (J1-8)

Some of the more notable differences between the various formulations of the Gödel Argument should be mentioned at this point. We have already indicated an important difference between Lucas' arguments and the Nagel-Newman argument (see APP. II, (B) and (D)). Rosenbloom's argument differs from the others in that it includes a psychological element. Rosenbloom holds that Gödel's findings constitute a mathematical analogue of what is called "introspection" in psychology; he concludes from this that Gödel's theorem may be used to distinguish human brains from artefacts (see APP. II, (A), especially (R-2) and (R-3)). A somewhat similar point is made by Jaki, who argues that additional ("extraneous") components would be required for any machine to prove its own consistency, where, allegedly, a proof of its own consistency would constitute one sort of "self-reflection" (see APP. II, (E)(1.), especially (J1-5)). It is not clear, in Jaki's argument, whether the additional component would be a hardware or software "component." Jaki's argument also differs from the others in that he explicitly points out that a randomizing device would not avoid the difficulty which is posed by the GA.
In one respect, at least, Lucas' argument seems to be more honest than the others, since he allows that it may be a considerable task to "write down an analogue of the machine's operation" (see APP. II, (D)(2.), (L2-5)). This point provides the basis of one of the strongest objections to the GA.

It should be made clear that the GA is used to deny the possibility of an SNM only indirectly. As we have seen (largely in §2), an SNM (= M1), considered in its perspective in the modeling framework, is an instantiation of a mathematical-theoretical model (= T1) which constitutes its "program" or "rules of operation." The mathematical-theoretical model (T1), in turn, is an instantiation of a formal system, C, and is a model of an SNT, which also satisfies C. The SNT, similarly, is satisfied by its logical model, a human CNS (= M0), which is (physically) modeled by the SNM. C also constitutes a formal model of both the SNT and the MT.

The GA, in virtue of the fact that Gödel's theorem applies only to consistent formal systems, is used to attack the formal system of which a candidate SNM is a logical model. The GA, therefore, is applicable to the mathematical model (of an SNT) which "programs" the SNM directly, and the GA is applicable to the SNM itself only indirectly. Note also that C (being the structure of the relevant mathematical model) is likewise affected by the GA, since it is a formal system of the kind specified in (GA-4) - (GA-6).

An additional, and somewhat surprising, result is pointed out by Jaki.22 Since C is a focus of the GA, and since the SNT is a logical model of C (alternatively, C is a formal model of the SNT), the GA may be taken to be applicable to the SNT, and hence it may be applicable to Mechanism1. Specifically, if Jaki is correct, it appears that the GA implies that an SNT is not logically possible. We have attempted, in §3, however, to demonstrate that an SNT is logically possible. Hence, if our findings of §3 are correct, the GA has a false consequence; therefore, we may conclude that the GA is not sound.

22. See [JAK 1], p. 217. See also APP. II, (E)(2.).

We have, of course, based our argument on the premise that an SNT is logically possible. We nevertheless contend that this argument should be ranked among the strongest that have been presented against the GA, since it indirectly indicates an equivocation in the GA, which we shall discuss in more detail in §4.3. Additionally, our argument seems to indicate that the superiority of a brain's computational output to an artefact's computational output is illusory. In any event, several sorts of arguments have been presented in opposition to the GA which do not rely on the presumption that an SNT is logically possible. We shall discuss these arguments in §4.3.

Before considering these arguments, however, we shall briefly discuss a trivial (and allegedly unsuccessful) way of avoiding the GA. This will become a matter of concern in some of the arguments which we shall discuss in §4.3; in particular, it plays a central role in Lucas' "one-up-manship game," which we shall characterize presently.

Let P be a person/CNS such that T0 models P in the sense that T0 can compute everything that P can compute. Furthermore, let G0 be the Gödelian formula for T0, and suppose that P "produces" G0 and thereby uses the GA against T0, thus arguing that T0 does not constitute an SNM. An easy, but trivial, way that a Mechanist2 can avoid P's GA is to add G0 to T0 as an axiom.
Thus, we have T1 = T0 ∪ G0. Then, according to the proponents of the GA, P can allegedly produce a Gödelian formula, G1, which T1 cannot compute. The Mechanist2 can once again supplement T1 by adding G1 to T1 as an axiom, thereby generating T2 = T1 ∪ G1, but P can then allegedly produce yet another Gödelian formula, G2, for T2. It is the essence of the GA that P can always stay ahead of any artefact Ti, in the sense that P will indefinitely be able to keep producing Gödelian formulae for any Ti in the series T0, T1, T2, . . . , Tn (i ≤ n).

This constitutes Lucas' "one-up-manship" or "matching" game, which, according to him, P will always win. If, at some point, however, P is unable to produce such a formula, then the GA fails. If, on the other hand, P can always produce a Gödelian formula for every Ti, then, purportedly, the GA is successful.

While keeping this "matching" situation in mind for reference, we shall presently consider various of the arguments which may be used to attempt to refute the GA. This enterprise should enable us to assess the effectiveness of the GA and thereby provide us with a basis for a decision regarding the possibility of an SNM (and, consequently, the truth of Mechanism2).

§4.3 ARTEFACTS, BRAINS, AND GÖDEL

Opponents of Mechanism2 (Mentalists2) often hold that it may be possible for an artefact to model any human rational behavior, but that it is not possible for an artefact to model every such behavior. Lucas and (especially) Jaki,24 for example, feel that the point which is at issue in the GA is whether or not a single artefact can (simultaneously) simulate every aspect of "mind-like" behavior. Lucas apparently takes the one-up-manship game to provide the primary basis for this contention.

24. See [LUC 2], pp. 47-48; and [JAK 1], pp. 216 and 218-219.

Proponents of Mechanism2, on the other hand, view the contention that any facet of "mind-like" behavior can be modeled as supporting their views. To show this, let {F1, F2, . . . , Fn} be an exhaustive set of human rational behavior (i.e., outputs), and let {A1, A2, . . . , An} be neural artefacts that sufficiently model the respective members of {F1, F2, . . . , Fn}. We make the unproblematic assumption that the various kinds of human behavior can be finitely listed. Then, as Mechanists2 would hold,25 a single artefact could be constructed which would consist of a conjunction of A1, A2, . . . , An (viz., in terms of the mathematical systems which A1, A2, . . . , An satisfy, the new artefact A' would satisfy a system S which included all of the systems which are satisfied respectively by A1, A2, . . . , An, i.e., the larger system S would incorporate the systems which are satisfied respectively by A1, A2, . . . , An as "subroutines"). Hence, if the Mechanists2 are correct, we might want to say that the union of F1, F2, . . . , Fn constitutes a "person" (or, at least, the union would constitute the set of outputs of a "person") or "brain," whereas the union of A1, A2, . . . , An constitutes an SNM. Hence, we have,

F1 ∪ F2 ∪ . . . ∪ Fn = a "person"/"brain"/CNS (4.3-1)

and

A1 ∪ A2 ∪ . . . ∪ An = an SNM (4.3-2)

25. See, for example, [MIN 4], pp. 446-447; [MIN 1], passim; [ARB 1], p. 140; and [GOO 2], p. 146.

This, of course, cannot be the case if a person can always out-Gödel every candidate SNM, since, in this case, there will always be at least one Fi such that the candidate SNM cannot model Fi. A Mentalist2 would hold
that, considering the GA, there is always at least one Fi for every artefact Ai such that Ai cannot "produce" Fi, where Fi is, of course, a Gödelian formula for Ai.

There are, roughly, three sorts of arguments that can be used in opposition to the viewpoint of Mentalism2 and specifically in opposition to the GA. (I.) First, there is the argument, which was first proposed by Turing, to the effect that human brains may have the same limitations that are attributed to neural artefacts by the use of the GA. (II.) The second sort of Mechanism2 argument is the argument that the possibility of an SNM is ruled out definitionally (and, hence, arbitrarily) by the GA. (III.) Finally, it is argued, in the last sort of Mechanism2 argument, that the GA is based on the presumption that the formal system/"program" of an SNM can be known; it is claimed, furthermore, that this presumption is unjustified and, for this reason, that the GA fails. We shall presently consider each of these sorts of argumentation in order to determine the viability of the GA and to determine the consequent effect that the GA may have on Mechanism2.

I. One way of appraising the import of the GA is to say that it constitutes an attempt to prove that artefacts, in virtue of their defining characteristics, have some (computational) limitations which are not shared by humans. In particular, it is argued that human brains do not have the computational limitations, with respect to the production of Gödelian formulae, that are characteristic of artefacts. Computational limitations, therefore, have been used by Mentalists2 to characterize a way in which people are, in principle, different from artefacts.

The import of the GA may be symbolically represented in the following manner: Let 'Ax' abbreviate 'x is an artefact,' let 'Px' abbreviate 'x is a person,' and, finally, let 'Gxy' abbreviate 'x can out-Gödel y,' viz., 'x can produce a Gödelian formula for y.' Then, we have:26

(x)(y)((Ax · Py) ⊃ (Gyx · ~Gxy)) (4.3-3)

That is, any person can out-Gödel any artefact, i.e., a person can produce a Gödelian formula for any artefact, but it is not the case that an artefact can produce a Gödelian formula for any person. Mentalists2 take this to be the basis for the contention that a person, P, is, in principle, different from, or superior to, any artefact.

26. This formulation was suggested by Richard Hall, in private conversation.

Generally speaking, propositions of the form (4.3-3) are used by Mentalists2 to defend the contention that humans can be "intelligent," or can "think," but that artefacts cannot. We have already indicated some of the confusions which result in the contention that artefacts cannot think. We shall attempt to refute the GA, and maintain Mechanism2, by denying the proposition:

(x)(y)((Ax · Py) ⊃ Gyx) (4.3-4)

It should be noticed that (4.3-4) is a logical consequence of (4.3-3). Hence, if we can show that (4.3-4) is false, we can obtain a denial of (4.3-3).

The import of (4.3-4) may be expressed by the contention that artefacts will always have some computational limitations that people do not have. So, our denial of (4.3-4) will amount to being a denial of this contention.

It will be necessary, for the purpose of our present discussion, to consider two closely related kinds of computational limitations which artefacts allegedly have but that proponents of the GA suppose that humans do not have. First, Mentalists2 often argue that humans have
First, Mentalists2 often argue that humans have 124 access to "informal," "intuitive," and inductive proofs, whereas artefacts do not. This amounts to saying that humans have different methods of producing their computa- tional outputs than artefacts do; the conclusion which is drawn is that the computational processes of any person are in principle superior to any artefact's computational processes, hence, for this reason, the consequent of (4.3-3) is supposed to be true. Secondly, it is contended, by Mentalistsz, that every person, P, is computationally more complex than any artefact A. Hence, it is concluded that P is essentially more complex than any A. Here again, this alleged divergence in complexity is taken to be in support of the contention that the consequent of (4.3-3) is true. Considering first the contention that humans have methods of computation which are different from any arte- fact's methods, it is often contended, by Mentalistsz, that artefacts are, by definition, consistent, deductive, etc., whereas humans are not. This is apparently taken to be one primary reason why humans are allegedly always able to produce Godelian formulae for any artefact, and are, in this respect, computationally superior to arte- facts. In order to determine the sense in which human computational processes differ from artefact processes, therefore, we must examine the possibility that humans are either not consistent or not deductive. We shall P' .4 h..- 125 presume, for the present, that artefacts are both consis- tent and deductive, while recognizing that this charac- terization of artefacts is based on definitional grounds. We shall discuss the difficulties which are related to an overly restricted definition of 'artefact' later. Godel's theorem applies only to formal systems which are consistent and deductive.27 Hence, the GA is taken to apply directly to the formal system which an artefact Ai satisfies [by definition, see [GA-4)]. It is a supposition of the GA that for every neural artefact, Ai, Ai's formal system is (deductive and) consistent, al- though this assumption is not stated explicitly in most formulations of the GA.28 It is, consequently, assumed in the GA that the entire output of any artefact Ai is a deductive, deterministic output (since, by definition, Ai's formal system is entirely deductive). In order to distinguish between P and any Ai' then, granting for the moment that an Ai must be both consistent and deductive, we must consider the possibility that P is either not consistent or not deductive. If P is inconsistent, then, of course the GA pre- sents no computational obstacle for P since Godel's 27See [JAK l], p. 219; see also [GOD l], passim. 28See Benaceraff's consistency objection, [BEN l], 126 theorem applies only to consistent systems (we presume that, if P were "inconsistent," then the SNT which P satisfies would be inconsistent). Clearly, if P, or an Ai’ was basically inconsis- tent, then they could generate as outputs any propositions whatever, to include Godelian formulae which state their own consistency. If P was inconsistent, then it seems apparent that one minimal requirement for an artefact, Ai' to be an SNM (i.e., for Ai to model P, with respect to P's computational output), would be that Ai would have to be inconsistent. If Ai is inconsistent, however, Ai could, as we have just indicated, have any formula what- ever as an output. 
If this were the case, clearly P could not out-Gödel Ai.

This situation would be unacceptable to a Mentalist2 for two reasons. First, the assumption that P is inconsistent seems, as we have seen, to lead to the consequence that an SNM must also be inconsistent; the result of this circumstance is that P cannot produce a formula which the SNM will be unable to produce. This result refutes the GA and constitutes a denial of Mentalism2. Secondly, the Mentalist2 proposes the GA only to attempt to secure a certain degree of dignity, uniqueness, and, perhaps, superiority for human beings. The assumption that P is inconsistent may, initially at least, provide a convenient way to establish a distinction between (inconsistent) humans and artefacts, which are [by definition] consistent, deterministic, etc., but this assumption involves a trade-off which is apparently unacceptable to Mentalists2. On the one hand, a distinction can be made between humans and artefacts on the basis of the inconsistency assumption, but on the other hand, it seems that a certain amount of the dignity, etc., that Mentalists2 would attempt to purchase for humans by making the inconsistency assumption, would be lost. It is probably for these, and similar, reasons that Mentalists2 typically contend that humans are consistent.

Suppose, on the other hand, that P is consistent. Lucas makes the claim that P is consistent, but he, of course, adds that the GA does not apply to P.29 The basis of Lucas' claim may constitute the alleged non-quantitative distinction between P and any Ai for which we have been searching. The basis of Lucas' claim is that P can informally prove his own consistency, and therefore, P may be consistent without being a victim of the GA.

29. See [LUC 2], pp. 55-57.

Two major points are at issue here. First, as Lucas argues, both P and Ai are consistent (Lucas, of course, does not use our symbolism). But, secondly, P can allegedly (informally) prove his own consistency whereas Ai cannot, the reason being that P has access to informal proofs whereas Ai does not [by the definition of 'machine' or 'artefact'; this point is made explicit in (GA-4) and in the corresponding premises in the original arguments]. Therefore, P and Ai are similar in both being consistent, but they are dissimilar in that P can prove things informally whereas Ai [by definition] cannot; specifically, P can informally "prove" his own consistency (and P can informally "prove" Gödelian formulae).

Lucas' account of how P "proves" his own consistency is extremely obscure, and in view of the severe criticism that it has received, it seems to be the weakest part of his 1961 paper. Generally, he relies upon metaphysical considerations, such as those regarding the "unity" of human consciousness,30 which are totally unrelated to the mathematical (and, hence, non-metaphysical) premises of the GA, which give it its apparent rigor and appeal. Lucas' failure to adequately describe how P can "prove" his own consistency, or, for that matter, how P can informally produce successive members of the sequence G0, G1, . . . , indicates a serious defect in the GA. This finding may also be taken as an indication that the purely "non-metaphysical" nature of the GA is illusory, since, in the final analysis, appeal is made to metaphysical argumentation, viz., regarding the "unity of consciousness."

30. See [LUC 2], pp. 56-57.
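It is worth making explicit the formal result which stands behind the claim that Ai cannot prove its own consistency. Although we have so far appealed only to Gödel's first theorem, the relevant ground here is presumably Gödel's second theorem, which may be stated schematically as follows.

```latex
% Gödel's second incompleteness theorem, schematically, for a formal system S
% that is consistent and strong enough to express elementary arithmetic.
% (amssymb assumed for \nvdash)
\[
  \text{If } S \text{ is consistent, then } S \nvdash \mathrm{Con}(S),
\]
\[
  \text{where } \mathrm{Con}(S) \text{ is the arithmetical sentence expressing the consistency of } S.
\]
```

Lucas' contention is precisely that P, unlike any Ai, can establish the consistency claim "informally," outside any such system S; and it is this contention which we have found to be obscure.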
We must conclude, at this point, that the Mentalist2 requires that both humans and artefacts be consistent. We have seen that the Mentalist2's account of human consistency (at least Lucas' account) is rather obscure and relies upon ontological rather than mathematical argumentation. Artefacts, on the other hand, are supposed, by Mentalists2, to be logically consistent [by definition]. There remains, however, the possibility that humans can differ essentially from artefacts in that humans can be non-deductive while artefacts are necessarily deductive. There are two points to consider relative to this possibility. First, the only thing that requires artefacts to be deductive is the Mentalist2's definition of 'artefact.' This definitional restriction begs the question. Specifically, the Mentalist2 asks whether or not humans are essentially different from artefacts; he then definitionally distinguishes between humans and artefacts (viz., humans are non-deductive and artefacts are deductive, both by definition); finally, he concludes that humans are essentially different from artefacts, thus begging the question. Secondly, Minsky, and others, have recently developed some inductive artefacts which have been at least moderately successful; also, as we shall indicate shortly, self-amending artefacts may be designed which inductively survey their own syntax.

We conclude that Mentalists2 have not proved that the computational processes of humans are necessarily different from (or superior to) those of artefacts. The only way that Mentalists2 have argued for this contention is by using arbitrary definitions which beg the question. The possibility remains, however, that while humans and artefacts may have similar computational processes, humans may still differ essentially from artefacts in that they are essentially more complex than any artefact. Hence, we shall have to consider this possibility.

If it could be shown that humans are essentially more complex than any artefact, then it would follow that artefacts have computational limitations which are not shared by humans. In order to adequately examine and critique the Mentalist2 view that humans are essentially more complex than any artefact, we shall discuss the following considerations: (1.) We shall discuss Turing's observation that there is no proof, in the GA, that a human will always be able to continue to produce Gödelian formulae for any artefact. (2.) Secondly, we shall attempt to analyze the general concept of one thing being "essentially more complex" than another thing, and then we shall apply our findings to the alleged divergence in complexity between humans and artefacts. (3.) Thirdly, we shall also consider, in some detail, the possibility of an appropriately designed, self-amending artefact being comparable to a human with respect to computational output. (4.) Finally, since a proof of the contention that a human will always be able to produce a Gödelian formula for any artefact is required in order to maintain the GA, we shall consider the consequences, for Mentalism2, of there being such a proof.

The question which we must consider, at this point, is: does the GA prove that this alleged divergence of computational limitations exists between humans and artefacts, or does it simply include the assumption that the alleged divergence exists? Turing has argued that it is assumed, but not proved, in the GA, that the alleged divergence exists. He points out, in particular, that ". . .
it has only been stated, without any sort of proof, that no such limitations apply to the human intellect."31 Turing notes that people often give the "wrong answers" to questions, and, hence, we should not be too hasty in drawing conclusions about the fallibility of artefacts from the evidence which is provided in the GA. Additionally, Turing notes that the computational superiority of a person, P, can only be evidenced by a triumph over a particular artefact. As Turing expresses this point:

There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again, there might be other machines cleverer again, and so on.32

31. [TUR 1], p. 16.
32. Ibid.

We shall presently discuss this matter in some detail. It should be immediately noticed, however, that, as Turing indicates, the contention that humans do not have the computational limitations which artefacts have is assumed, but not proved, in the GA.

In order to characterize the way in which people and artefacts differ with respect to computational complexity, consider the one-up-manship game33 which we described in §4.2. In order for the GA to have the desired result, a human being, P, must be able to produce a Gödelian formula, Gi, for any artefact, Ai. If, as we have indicated, Ai is continually augmented by the addition, at each stage in the one-up-manship game, of the Gödelian formula Gi as an axiom (thus producing Ai+1 = Ai ∪ Gi), then P must, according to the GA, be able to continually produce successive Gödelian formulae in the series Gi, Gi+1, . . . . The submerged assumption of the GA is, therefore, that P will never be unable to continue generating the sequence Gi, Gi+1, . . . , Gn (where n is an arbitrarily large number).

33. Lucas prefers the term 'matching game,' since an SNM need not surpass P (i.e., be "one-up") in order to pass the test which is posed by the GA. An SNM need only match, or equal, P in the generation of the sequence G0, G1, . . . , Gj of Gödelian formulae.

Although both Mechanists2 and Mentalists2 typically discuss the GA in terms of a one-up-manship or "matching" game (as we shall), it is not strictly necessary to do so. One could also speak in terms of a rank ordering of artefacts such that each successive artefact has, as an axiom, the Gödelian formula with the smallest Gödel number of the last. The first member, A0, of the ordering would have, as its formal system, a system, S0, which is minimally capable of producing elementary arithmetic. Then, where G0 is the Gödelian formula for S0, the next member of the ordering would be S1 = S0 ∪ G0. Hence the ordering would proceed as follows: S0, S1, S2, . . . , where for each Si, Si = Si-1 ∪ Gi-1. There will, of course, be a corresponding series, A0, A1, A2, . . . , of artefacts. Therefore, an alternative way of stating the submerged assumption of the GA is to say that P can always produce a Gödelian formula for each successive member of the sequence S0, S1, S2, . . . (which, of course, corresponds to a sequence of artefacts A0, A1, A2, . . .).

It appears that storage limitations are not at issue here, i.e., this is not a question of whether a human or a machine will run out of internal storage first. If we grant both P and Ai an indefinite amount of external storage, e.g., scratch paper, then only a minimum amount
The finiteness of the internal storage of P or Ai could not prevent P or Ai from constructing some (constructible) infinite ordinals. A Turing Machine, for example, needs 34 Therefore, only one binary digit of internal storage. it appears that the distinction between P and Ai cannot be made on the basis of storage capacities, especially if possible temporal restrictions are not considered to be relevant (i.e., if one assumes that the computations in question could be carried out "given enough time"; it should be noted that Lucas makes this assumption with re- gard to the production of GBdelian formulae by P (see Appendix II, (D)(2.), (L2-5)). Lucas states, to be speci- fic, that P could continue the generation of the sequence Gi’ G . if at each stage in the sequence, P was i+1’ ' ' "given enough time" to carry out the required computation). The Mentalistz, however, does not attempt to argue that humans have a larger storage capacity than artefacts; rather, the Mentalistz argues, simply, that humans are, in some general sense, essentially more complex than any artefact. The general tenor of the GA is that for every P, P's brain/CNS is of a higher level of complexity than any Ai. This is most obvious in the Nagel-Newman formulation 34See [SHA 1], P. 157 and [G00 2], P. 147. 135 of the GA (see Appendix II, (8), (N-6)). So, we may re- phrase our present question as: "Is P essentially more complex than any Ai?" As soon as this question is asked, it seems to be somewhat ludicrous. It is not at all clear what would he meant by say- ing that one thing is essentially more complex than an- other thing (for example, one might want to say that the Empire State Building is more complex than the Eiffel Tower, but it is not clear what would he meant by saying that the Empire State Building is essentially more com- plex than the Eiffel Tower). The qualification ("essentially") is required, by Mentalistsz, however, since omitting it would make the distinction between P and an Ai merely quantitative. That the distinction between P and currently existing Ai's is merely quantitative is precisely what Mechanists2 claim and what Mentalists2 deny. It seems to be the case that no "essential" distinction of the sort that proponents of the GA require can be established. If the primary distinction between P and Ai is based upon their divergent levels of computational- deductive complexity, then this seems to provide no dis- tinction between P and artefacts in general, with regard, of course, to computational output. Ai can be challenged with a Gddelian formula, Gi' by P, but it can also be challenged by an artefact AU (0 >* i). Until the level 136 (of complexity) u is reached, Au will perform as effec- tively in challenging an artefact Ai as P will. AU, how- ever, will be able to out-Godel Ai only until the level U is reached, i.e., until Ai = AU with respect to deductive complexity. The concept of "levels of computational- deductive complexity" may be effectively characterized in terms of the series A0, A1’ . . . where A1 = A0 U Go, A2 = A1 U G1, and so on, and where Gi is a Godelian formula for artefact Ai' Given the sequence A1, A2, . . . , then, we may say that for any two artefacts Aj and Ai, Aj is (computationally-deductively) more complex than Ai if and only if j > i. (4.3-5) i.e., Aj is more complex thanAi if and only if Ai occurs earlier in the sequence A1, A2, . . . than Aj does. 
It seems apparent that every normal, rational, adult human (who knows Gödel's proof, of course) is more complex than currently existing neural artefacts. We have also found, however, that no case can be made for the contention that people are essentially more complex than artefacts, firstly because no clear sense can be made of the proposition that, for any two things such that one is more complex than the other, one is "essentially" more complex than the other, and secondly because for any artefact, Ai, such that a person, P, can produce a Gödelian formula for Ai, an artefact, Aj, can be constructed such that Aj can produce a Gödelian formula for Ai, i.e., Aj can do the same thing that P can do. This seems to indicate that there is no "in principle" distinction between a person, P, and artefacts, as such. This also seems to indicate that the computational complexity distinction which obtains between a person, P, and currently existing artefacts is a quantitative (rather than qualitative) distinction.

Therefore, if the distinction between P and Ai is quantitative, as it seems to be, then it follows that P could out-Gödel Ai only if P's level of complexity, U', was higher than that of Ai. Hence, no distinction between PU' and an artefact AU could be made unless it were proved that it is logically necessary that U' > U, and the GA has not done this. An SNM, therefore, would have to be an artefact AU where, for at least one PU', U ≥ U'. PU', then, would be no more able to produce a Gödelian formula for AU than he is presently able to produce Gödelian formulae for other human beings (i.e., for other human brains or CNS's) or for himself.

Another consideration, which favors Mechanism2, is the observation that artefacts may be self-amending. Where Ai is an artefact of an arbitrarily high level of complexity and u is a formalization of a person P's method of generating Gödelian formulae for the series of artefacts A0, A1, . . . , then Ai (where Ai is any member of the series A0, A1, . . .) need only be amended with u, resulting in Ai' = Ai ∪ u, in order to match P's performance in the one-up-manship game. Ai' can, therefore, be expected to perform as well as P can in a one-up-manship game; if P and Ai' are playing the game with a third artefact, Ah (h ≤ i), then it is assumed, but not proved, in the GA, that Ai' would fail to produce the next term of the sequence G0, G1, . . . , Gi whereas P would not fail to produce the next term, viz., P would not fail to produce Gi+1.

There is an additional difficulty, however. Even if an artefact Ai were to be augmented with u, in the manner which we have indicated, to form Ai' (= Ai ∪ u), then it may be argued that P could still produce a Gödelian formula for Ai'. This difficulty is dependent upon the assumption that Ai' is of a level of complexity that would permit P to produce a Gödelian formula for it (viz., it is assumed that, where U is the complexity level of Ai' and U' is the complexity level of PU', U' > U). This assumption is defective in at least two respects. First, it is not clear that P could comprehend the formal system of Ai' if this formal system were extremely complex (we shall discuss this difficulty shortly). Secondly, the assumption is logically defective.
Ai' (= Ai ∪ u) could, if our earlier contentions are correct, amend itself in the appropriate manner, given the function of u, resulting in Ai'' = Ai' ∪ u ∪ Gi = Ai' ∪ Gi, where Gi is the Gödelian formula which P has produced for Ai'. If P again produces a Gödelian formula, Gj, for Ai'', then Ai'' can once again amend itself, using u, to form Ai''' = Ai'' ∪ u ∪ Gj = Ai'' ∪ Gj. But it seems clear that, considering just Ai' and its series of augmentations, the series could be continued indefinitely (viz., the series Ai', Ai'', Ai''', . . .), thereby generating an infinite regress. Furthermore, a mechanization of the method of constructing a Gi at any stage of the series A0, A1, A2, . . . would result in an axiomatization of elementary number theory; the process of Ai' systematically amending itself could be continued throughout the constructible ordinals. There would, admittedly, always be a Gödelian formula, if one assumes, of course, that the formal system Si which is satisfied by any given Ai must be both deductive and consistent. It cannot be proved that Ai' would always be able to continue the series G0, G1, . . . . But what is required of a Mentalist2 is that, in order to preserve the GA, he must prove that P can always continue the series, i.e., that P can always produce the next member of the series G0, G1, . . . in a one-up-manship game.

To summarize our findings at this point: we have seen that the point of view which is taken by proponents of the GA is essentially refuted by the observation that Gödel's construction could be carried out by an artefact, Ai. Specifically, we have demonstrated that for every artefact, Ai, such that a person, P, can produce a Gödelian formula for Ai, a more complex artefact, Ai', can be constructed which can do the same thing.

It is still possible that a Mentalist2 could argue for his viewpoint on the basis of transfinite counting. What we have labeled by u (= P's method of producing Gödelian formulae) amounts to Gödel's proof. Therefore, the amended artefact, Ai' = Ai ∪ u, may be said to "know" Gödel's proof. Clearly, if Ai is consistent, Ai ∪ u will still be consistent. More correctly, the system Si which Ai satisfies, when amended with u to form Si' = Si ∪ u, will remain consistent.

According to the proponents of the GA, a person, P, will be able to produce a Gödelian formula, Gi, for every artefact Ai, since Gi will not be provable in Ai's formal system Si. But, as we have seen, an artefact Ai' = Ai ∪ u could do the same thing as P. It is significant to observe that Ai' could also be "programmed" to produce the sequence of Gödelian formulae G0, G1, G2, . . . by repeatedly applying u. Additionally, Ai' could be "programmed" to produce each of the theorems of the series A0, A1, A2, . . . which correspond to the above-mentioned series of Gödelian formulae. Assuming that these theorems could be arithmetized, in the Gödelian sense, the production of the theorems of the series A0, A1, A2, . . . would constitute transfinite counting.35 It is our contention that a suitably designed artefact could go as far in transfinite counting as any person, P, can go. Let Ai' be such an artefact; then Ai' would still have a formal system which "programs" it, and a Mentalist2 could retort by saying that Ai' could still be improved, i.e., by adding its Gödelian formula to it. This can most certainly be done if the Mentalist2 can give an adequate account of how the improvement would be accomplished.

35. See [GOO 2], passim.
An adequate account of how P can always improve any given artefact Ai would constitute an explanation of how P is always able to produce a Gödelian formula for any given artefact, viz., an explanation of how P is always able to continue the series G0, G1, G2, . . . . The best concept of explanation which we have is the covering-law model (this point was suggested by R. Hall in private conversation). Since, in the framework of the covering-law model, explanations are either deductive or stochastic, it follows that the proof that P can always improve any given artefact could also be formalized (i.e., "programmed") and added to the formal system of an artefact, unless, of course, arbitrary definitional restrictions prohibit any artefact from being probabilistic or inductive. Apparently, the Mentalist2 loses either way. A proof that P can always improve any artefact is required in order for him to maintain his position, since, in the final analysis, such a proof is necessary to show that a person P is, in principle, different from any artefact. But if the Mentalist2 produces such a proof, and the proof is an adequate one, then it too could be programmed and added to the formal system of an artefact.

It is not clear, however, that such a proof is even logically possible. Suppose that Pr is the proof that P can always continue the series G0, G1, . . . . If a Pr were provided by a Mentalist2, then it could be axiomatized and used as an augmentation of the formal system S' of Ai', thus generating S'' = S' ∪ Pr, and hence generating Ai'', so that Ai'' could always continue the series G0, G1, . . . . Unfortunately, no such proof as Pr is possible. Where T is the smallest non-constructible ordinal, Pr would lead to the conclusion that T is constructible. Therefore, Pr has as a consequence the contradictory proposition that T is both constructible and not constructible (this argument is suggested, in part, by some remarks of W. V. O. Quine's reported by J. J. C. Smart, and, in part, by some remarks made by I. J. Good; see [SMA 2], pp. 109-110; [GOO 2], passim.; and [GOO 1], passim.).

We may conclude, therefore, that the GA cannot be maintained, since it cannot be proved that a human will always be able to produce a Gödelian formula for any artefact (proposition (4.3-4)). We have seen, in particular, that the divergence of computational processes between humans and artefacts, which would allegedly have the result the Mentalist2 desires, rests only upon arbitrary definitional grounds which we have seen to be not well founded. Secondly, we have seen that no complexity distinction between humans and artefacts can be maintained, and that a proof of (4.3-4) is logically contradictory. Hence, we may conclude that (4.3-4) is false; so we can obtain a denial of (4.3-3). Consequently, to this extent, the GA has been shown to fail.

II. Artefacts are commonly treated by contemporary Mentalists2 in much the same way that they were treated by Seventeenth Century dualists. We have already indicated the conceptual difficulties which may result from carrying Seventeenth Century mechanistic conceptions of "matter" and "machine" over into Twentieth Century contexts (for a more classically philosophical treatment of these confusions, see [PAS 1], Chapter 3). It seems apparent that the Seventeenth Century concept of "machine" is the one used by Mentalists2 in the GA. Jaki, for example, explicitly states that the GA can be successfully used only because of the ". . . rigidity inherent in the concept of machine" ([JAK 1], p. 217).
This rigid concept of "machine," as described by Jaki, makes a machine ". . . a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motions" (ibid.). This quotation could easily have come from Descartes rather than, as it does, from a Twentieth Century methodologist. What is important, for our purposes, is that passages of this sort, by claiming an ontological distinction between mental processes and mechanical processes, definitionally restrict the sorts of things that (physical) artefacts can sufficiently model; when certain aspects of "determinism" are also considered, these may allegedly rule out the possibility of an SNT and, correspondingly, of a sufficient MT (= T1).

These points are substantiated by some remarks made by Lucas in his 1961 paper. After presenting the GA and considering some objections to it, he discusses Turing's "Critical Mass Thesis" (see [TUR 1], p. 25). The critical mass thesis is the view that currently existing artefacts are "subcritical," i.e., not yet complex enough to have the higher intellectual attributes (e.g., "consciousness," "thought," etc.) predicated of them. Turing feels, however, that eventually an extremely complex, hence "supercritical," artefact will be constructed; when an artefact's degree of sophistication reaches a certain "critical mass," it will then be appropriate to attribute the higher intellectual characteristics to it. It will then constitute an SNM. Suppose that such a "supercritical" device were constructed; then, in Lucas' words, ". . . it might turn out that above a certain level of complexity, a machine ceased to be predictable, even in principle, and started doing things on its own account, or, to use a very revealing phrase, it might begin to have a mind of its own" ([LUC 2], p. 58). We have certain reservations about saying that such a device would have a "mind" (if what is meant is "human mind"), since it would be merely a model of a "mind." In any event, such a device would constitute an SNM. As Lucas holds, however, ". . . it would cease to be a machine . . ." if an artefact reached a "supercritical" level of complexity. Specifically, Lucas holds that,

If the mechanist produces a machine which is so complicated that this ceases to hold good of it [i.e., it is no longer true of the "machine" in question that it is predictable, strictly determinate, etc.], then it is no longer a machine for the purposes of our discussion, no matter how it was constructed. We should say, rather, that he had created a mind, in the same sort of way as we procreate people at present (ibid.).

This suggests that, for Lucas, if an artefact is not defeated by the GA, viz., if there is an Ai that can win over or match a person P in the one-up-manship game, then it is not a "machine." It suggests that no artefact is permitted to win a one-up-manship game with a person P, and that this is in virtue of definitional criteria.

The fallacy that this introduces into the GA is obvious. Specifically, the GA includes the assumption that an artefact A0 models P. But, by definition, A0 is a "machine." Furthermore, a "machine" cannot model P, because it is definitionally prohibited from doing so, since anything that (sufficiently) models P is not a "machine." Since A0 is a "machine," it follows that A0 does not model P and that A0 cannot model P [by definition]. Considering our account of physical modeling (in §2), it is clear that an "artefact" or "machine" is a physical model, or, for present purposes, simply a "model." Hence, we may substitute 'model' for every occurrence of 'machine' in the above argument; the result clearly indicates the fallacious reasoning, as well as the significance of using a Seventeenth Century meaning of 'machine' in the argument. When we actually carry out the substitution, we have: (1) the GA includes the assumption that a model A0 models P. (2) But, [by definition] A0 is a model. Furthermore, (3) a model cannot model P, because it is definitionally prohibited from doing so, since anything that (sufficiently) models P is not a model. Since A0 is a model, it follows that (4) the model A0 does not model P, and that (5) the model A0 cannot model P. By (1) and (4), we have: A0 models P and A0 does not model P, which is a contradiction. In summary, the GA includes the assumption that an artefact A0 both does and does not model P.
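The definitional fallacy just exhibited can be set out schematically. The following compressed rendering is ours, in standard notation, writing M(x, P) for 'x (sufficiently) models P' and Machine(x) for 'x is a "machine" in the Seventeenth Century sense'; it adds nothing to the argument above.

\begin{align*}
&\text{(i)}   && M(A_0, P)                                                              && \text{assumption of the GA}\\
&\text{(ii)}  && \mathit{Machine}(A_0)                                                  && \text{by definition}\\
&\text{(iii)} && \forall x\,\bigl(M(x, P) \rightarrow \lnot \mathit{Machine}(x)\bigr)   && \text{Lucas' definitional criterion}\\
&\text{(iv)}  && \lnot M(A_0, P)                                                        && \text{from (ii) and (iii)}\\
&\text{(v)}   && M(A_0, P) \wedge \lnot M(A_0, P)                                       && \text{from (i) and (iv): contradiction}
\end{align*}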
As we mentioned earlier, our main reason for choosing the term 'artefact' was to avoid confusions such as the one which we have just uncovered. In the final analysis, the confusion is based upon ontological and conceptual matters. It should be pointed out, furthermore, that, in this instance, Lucas and Jaki, at least, have once again abandoned the mathematical argument of the GA and resorted to metaphysical argumentation (viz., to attempting to demonstrate that "machines" are essentially different from humans). It therefore seems safe to conclude that the GA is not the purely mathematical (and, hence, non-metaphysical) argument it initially appeared to be.

III. It is clear, in the accounts of the GA given by Lucas and Jaki, that there is an assumption implicit in the GA regarding P's knowledge of an artefact Ai's formal system. The GA, in effect, depends upon P's ability to demonstrate that, for any formal system adequate to generate arithmetic, there is a Gödelian formula for the system, and that this formula is true. What is overlooked by proponents of the GA is that the argument depends upon the manner in which P is given the formal system of a candidate SNM, Ai. It is presumed that P can be given, i.e., can know, the formal system of Ai. It is clear that P must be able to determine the formal system/program of Ai in order to carry out the GA with respect to Ai (this point is made by P. Benacerraf; see [BEN 1]).

It is not clear, as we have indicated, that a model-maker who is constructing a very complex system such as an SNM will, practically speaking, be able to know what its detailed formal structure is, although it is certainly logically possible that he know this. As M. Minsky notes, if a computer, for example, were given a very complex program, it does not follow that the programmer will have complete knowledge of what the output of the program will be (see [MIN 4], p. 447). This is especially true of evolutionary (i.e., "learning" or "growing") systems, such as the self-amending system which we mentioned, Ai' (or, to be absolutely precise, the system Si' which Ai' satisfies).
Similarly, one does not expect a logician or a mathematician to know all of the consequences (theorems) of a set of axioms which he has constructed. Hence, we have the following result: the formal structure of an SNM (or, for that matter, of a brain) may be too complex to be known, practically speaking, all at once, except in idealized and carefully controlled circumstances. But the formal structure of an SNM must be known in order for the GA to be applied to it. Therefore, even if the GA were a sound argument, it would be practically impossible (or at least extremely difficult) to apply. We also draw the somewhat weaker conclusion that, in view of the practical difficulties of determining the formal structure of an SNM (especially a self-amending SNM such as Ai') from moment to moment, and considering that an SNM would operate in real time (in view of the Turing Test situation, which could, of course, be reconstructed in terms of the one-up-manship game), it is not at all evident that a person P would be able to keep pace with an SNM in a one-up-manship/matching game.

As evidence for these conclusions, we should mention that an SNM would be expected to be as (structurally/formally) complex as a human brain. The reason that P cannot apply the GA to other persons (or to himself; recall that, by the reflexivity of the isomorphism relation, P ≅ P, hence P is a model of P) is that the formal systems (SNT ∨ C) which other human brains satisfy are too complex for P, practically speaking, to apply the GA to them (or to himself). This is an aspect of the Brain-Model Problem, which we shall discuss in §5.

To summarize our findings: firstly, human brains are likely to have the same computational limitations that artefacts have, and no essential distinction can be made between humans and artefacts on the basis of their respective computational outputs or computational limitations. Secondly, there is a definitional fallacy in the GA which has the consequence that an artefact Ai both does and does not model a person P. Thirdly, and finally, the proponents of the GA have overlooked practical difficulties involved in determining the principles and conditions of operation of an SNM, a determination which is required for the production of Gödelian formulae for an SNM and, consequently, for the application of the GA. We conclude that the proponents of the GA have failed to show that an SNM is not logically possible. To this extent, then, we may still take Mechanism2 to be viable.

§4.4 DISCUSSION

Lucas (1961) noted that most logicians were reserving judgment regarding the Gödel Argument until they had seen the entire GA set out in detail. Lucas attempted to do this. We, on the other hand, have attempted to present our opposing arguments in detail. Lucas, of course, hoped to convince logicians (and others) that the GA was a sound argument and that it could be defended against some strong objections. We have, however, found that the GA is defective in several crucial respects and that it does not refute Mechanism2 as Lucas, Jaki, and other Mentalists2 had hoped it would. The extent to which the GA is effective, and hence applicable, now becomes clear.
To be specific, it seems evident that the GA can be applied to all or most existing neural artefacts and that, therefore, a person P will win a matching game with all or most existing neural artefacts. In effect, we are in agreement with the Nagel-Newman formulation of the GA, which is explicitly restricted, in application, to "currently conceived artificial machines" ([NAG 1], p. 101, and APP. II, (B), (N-6)). It is openly admitted by Mechanists2 that currently conceived artefacts have negligible intelligence with respect to humans; it is further admitted that new concepts in artefact design will be required before an "intelligent" artefact can be constructed. The implication is that while the GA applies to currently conceived artefacts, it may not be applicable to future artefacts. Additionally, as we have indicated, the GA is to a large extent based upon a confused conception of "machine," "thinking," "intelligence," and the like. These confusions may, however, be largely dissolved when the concepts in question are placed in the perspective of the physical modeling framework which we presented in §2.

It is our contention that Mechanists2 have been in error if they have held that artefacts can attain human intelligence. Since they are not, in fact, human, artefacts cannot have human intelligence. We have argued that it is logically possible, however, for them to attain artefact intelligence, which converges with human intelligence, especially, as we have indicated, in the case of computational "intelligence."

As we noted, Turing replaced the question "Can artefacts think?" with the question "Can an artefact pass Turing's Test?" We have replaced this question, in turn, with the question "Can an artefact sufficiently model a human brain/CNS?" This latter question provides a basis for distinguishing between intelligence and modeled intelligence. Additionally, Turing felt that within fifty years an SNM would be developed and that, at that time, a person could say 'Artefact A0 thinks.' without expecting to be contradicted. Considering our recent findings, however, this supposition is not altogether correct. As we pointed out earlier, the determination of whether or not artefacts may be said to think will be based upon a decision, not a discovery. If one takes thinking to include both human and artefact thought, then (and only then) would it be correct to say that artefacts think. The decision, in this regard, will rest upon a decision regarding the use of the word 'thinking,' together with the establishment of criteria for its use. We contend that it is more correct, and more graphic, to say that humans think, whereas sufficient neural artefacts model thought.

Generally speaking, the Gödel Argument may be taken to lead to the conclusion that a sufficient neural artefact would, perhaps, have to be far different from the neural models which have presently been conceived and constructed. Almost certainly, an SNM would have to be more complex than currently existing neural models. This outlook presents a difficulty for investigators in the field of neural modeling which, while tremendously overwhelming, is nevertheless one of "mere" practicality. We conclude, therefore, that a sufficient neural artefact is logically possible. Accordingly, we conclude that, at least insofar as the Gödel Argument has been shown to fail, Mechanism2 is viable.
We maintain these conclusions, however, while holding some reservations regarding the contention that a sufficient neural artefact will ever actually be constructed.

§5.0 MECHANISM2 AND THE BRAIN-MODEL PROBLEM

The Brain-Model Problem, as we indicated in §2.4, is really a cluster of problems, some scientific or practical, and some logical or philosophical. The most important aspect of the Brain-Model Problem, for the purposes of this study, is expressible in terms of the question, "Is it logically possible to construct a sufficient neural artefact (SNM) that is simple enough to be understood?" This question is clearly based on the presupposition that an affirmative answer can be given to two questions which are logically prior to it, namely, "Is an SNT logically possible [Mechanism1]?" and "Is an SNM logically possible [Mechanism2]?" We concluded, in §3, that, to the extent to which the self-refutation arguments we discussed were unsuccessful, an SNT may be considered logically possible. Similarly, we concluded, in §4, that inasmuch as the GA was unsuccessful, an SNM may be regarded as logically possible. The Brain-Model Problem leaves us with one remaining question: "Can an SNM be understood?"

Some clarification is required at this point. Clearly, one theorizes in order to gain a viable understanding of a given subject matter, and, to an extent, one constructs models (theoretical or physical) for the same reason. However, on the instrumental view of modeling which we have taken, models are constructed in order to (heuristically) facilitate theory formulation, and therefore models may be regarded as contributing only indirectly to understanding. To be specific, one theorizes about the human brain (and its functions) in order to understand it. So a person who "understands" the human brain may be said to do so only in virtue of a theory which he knows that has the human brain (or CNS) as its logical model. If he completely understands the human brain, then the theory in question would constitute an SNT. Similarly, one could only be said to understand an SNM in virtue of knowing the theory, MT (= a mathematical-theoretical model of the SNT), which it satisfies.

We may safely assume that an SNM would be on a level of complexity comparable to that of the human brain, considering that the behavioral outputs of an SNM would have to be as varied and as sophisticated as human brain outputs. It may be expected, then, that an SNM would be as difficult to understand as the human brain. Hence, the aspect of the Brain-Model Problem which we are presently considering may be phrased in either of two ways: first, "Can the human brain (and its operations) be understood (viz., by a human being)?"; and secondly, "Can an SNM (and its operations) be understood (e.g., by a human being, or by another SNM)?"

In the most important and interesting respect, these questions deal with self-understanding (and are, in that sense, self-referring). If we may be permitted a rather cavalier treatment of the concept of "degrees of complexity" (Turing employs this concept, for example, in his 1950 paper; at this point we want to use 'degree of complexity' in a somewhat more general sense than we did in §4; see [TUR 1], passim.), we may generalize these questions in the following manner: Let S_i^n be a physical system (e.g., a human brain or an SNM) having a degree of complexity = n, where the relative degree of complexity is, generally speaking, determined by the sophistication of the system's behavioral output. Then the question posed by the Brain-Model Problem becomes,
Can S_i^n understand any S_U^n (and its operations)? (5.0-1)

where S_U^n is an arbitrary system having a degree of complexity = n. This question may also be asked with regard to systems which are not of the same level of complexity:

Can S_i^n understand any S_U^(n+c)? (5.0-2)

where n + c > n, i.e., can a system understand another system that is more complex than it is? It is generally assumed that any system can understand another system that is less complex than it is; the problem is to determine whether or not a system can understand another system that is either as complex as it is or more complex than it is. More precisely, the central question posed by the Brain-Model Problem which we shall consider here is,

Can S_i^n understand any S_U^σ? (5.0-3)

where σ ≥ n.

This generalized question has several interesting applications. We may ask, for example, whether or not a neurophysiologist will be able to develop an SNT that he can understand. Alternatively, and more intuitively, we may ask whether the human brain is understandable by other human brains. On the other hand, we may ask whether an SNM would be understandable by a human being, whether an SNM could be understood by another SNM, or whether it could be understood by itself.

An intriguing speculation made by M. Minsky is that an SNM would very likely have the same mind-matter convictions that people do, and therefore would be unwilling to admit that it was a "mere" machine (see [MIN 2], passim., and [MIN 4], pp. 448-449). Accordingly, an SNM may have the same limited understanding of its own "mental" operations that people presently have of theirs. As we have already indicated, internal storage limitations may not be a crucial factor in self-understanding (see [MOO 1], [PEN 2], [NEU 2], [NEU 3], and [ARB 2]). As Minsky argues, the Brain-Model Problem is based upon the dimorphism of the self-models which people have and which an SNM is likely to have. In the present section, we shall discuss Minsky's argument and attempt to assess its effect on the possibility of understanding an SNM, if one were actually to be constructed. We shall then make some brief suggestions for future research.

§5.1 THE BRAIN-MODEL PROBLEM

It was in his 1965 paper that Marvin Minsky discussed, in some detail, the aspect of the Brain-Model Problem that is our present concern (see [MIN 2], passim.; the Brain-Model Problem was introduced by Minsky in his doctoral dissertation, [MIN 5]; the 1965 paper is reprinted, in very slightly revised form, in Minsky, M. (ed.), Semantic Information Processing, MIT Press, Cambridge, 1968; we shall refer only to the original article as it appeared in the Proc. I.F.I.P. Congress, 1965). In an attempt to avoid terminological confusions, we must note that Minsky uses a rather imprecise concept of "model." He apparently feels that the concept of "model" which he uses is virtually incapable of being rigorously characterized. Generally speaking, he regards the modeling relation as essentially ternary, holding among an "observer" or modeler, the model itself, and, finally, the thing which is modeled. In Minsky's words,

To an observer B, an object A* is a model of an object A to the extent that B can use A* to answer questions that interest him about A ([MIN 2], p. 45).
At an initial level, at least, this concept of "model" closely approximates our concept of "theory." One theorizes about an object A in order to be able to answer questions about A. Roughly speaking, then, we might characterize Minsky's "model"-concept in terms of a verbal/conceptual description or explanation of a given object. As we mentioned in §2, central nervous systems, in a sense, "model" their environments (viz., by "mapping" their environments onto themselves). In terms of Minsky's ternary characterization of the modeling relation, B would be a (total) human CNS, A would be the environment of B, and A* would be the portion of B, B' (⊂ B), such that the elements of A are mapped onto the elements of B'. Insofar as this sort of "modeling" provides a basis for answering questions about the environment of a CNS, it constitutes "modeling" in the sense which Minsky intends. It should be noted that neural modeling and neural coding converge at this point.

It is essential to Minsky's characterization of the Brain-Model Problem that both human beings and artefacts answer questions on the basis of the environmental models which we have described. It would be convenient to speak of some internal mechanism, in either humans or neural artefacts, which conducts the reflective enterprise in question; unfortunately, however, there are no simple, discrete operations or mechanisms, in either humans or (existing) neural artefacts, which can be pointed out as performing this function (see [MIN 2], p. 45).

Consider a person, P, together with his total environment W (which would include his own body). Let W1 be the first-level environmental model of P's W. Then, according to Minsky, P answers questions about W in terms of statements made about the structures in W1 which correspond to structures in W. Ideally, since W1 is embodied in physical structures within P, one might want to say that W1 is a physical model of W. Certainly, however, one would not want to say that W1 is isomorphic to W in the old Gestalt sense of 'isomorphism.' Since W1 is verbalized, or at least is capable of being verbalized, it is probably more correct, and certainly more graphic, to characterize W1 as a theory about W, i.e., W satisfies W1.

The Brain-Model Problem arises when we consider the status of P's model of himself. P is a part of the world, and, in a sense, P is a portion of his own environment. Hence, P ⊂ W. It follows that P's model of himself, P1, must be contained in P's model of his total environment, i.e., P1 ⊂ W1. Therefore, P may answer questions about P in terms of statements about the (formal) structure of P1, which corresponds to the structure of P.

It is implicit in Minsky's 1965 paper that the situation would be much the same for an SNM. Specifically, let A = an SNM, and let W = the environment of A. Then we have A ⊂ W. Furthermore, A answers questions about W in terms of W1, and A answers questions about A in terms of A1. Since A ⊂ W, we have A1 ⊂ W1.

Minsky makes a further distinction between various degrees of generality of P's (or A's) questions about W (see [MIN 2], p. 46). This distinction calls upon concepts similar to those employed in the object-language/metalanguage distinction. Generally speaking, if P asks specific questions about an object, say a tree, then he consults his environmental model W1. For example, if P asks how tall a given tree is, he would use W1 to answer this question.
A very general question, however, such as a question regarding what "kind" of thing a tree is, would require a higher-level, more abstract environmental model, W2, since, in Minsky's view, such a question is not applicable directly to W but is, instead, applicable to W1. At any rate, we may generalize Minsky's views to include P's questions about himself. If P asks, for example, "How tall is P?," then he may obtain an answer by consulting P1 (⊂ W1). On the other hand, if P asks, "What kind of thing is P?," then P obtains an answer in terms of the more generalized model P2 (⊂ W2). Similarly, for A, we have A (⊂ W), A1 (⊂ W1), and A2 (⊂ W2). So we have a modeling hierarchy that may be intuitively characterized in terms of degrees of generality and mathematically characterized in terms of recursive depth. Hence, the modeling hierarchy may proceed as

W°, W1, W2, . . . , Wn,

or

P°, P1, P2, . . . , Pn,

or, finally, as

A°, A1, A2, . . . , An.

For our purposes, consider that P1 = SNT, and that A1 = MT = a mathematical theory which is satisfied by A°. Therefore, in terms of the modeling framework, we have the correspondence shown in Figure 5.1-1 (SNT = P1 corresponds to A1 = MT, and P° corresponds to A° = SNM).

In terms of the aspect of the Brain-Model Problem that is of immediate interest to us, we would like to determine whether or not P° can "understand" P1, or, alternatively, whether or not an artefact A° is capable of "understanding" A1.

Presumably, P° would understand P1 if P1 constituted a unified explanation/description of P° which was simple enough to be understood. The first difficulty, as is evidenced by the division between mental events and physical events in thought and language, is that P°'s model of himself is not unified. P°'s environmental model W1 is distinctly bipartite (this point is suggested by Minsky in [MIN 2], p. 46, and [MIN 4], p. 449). One part of W1 deals with mechanical and physical matters, while the opposite part deals with such things as goals, intentions, and social matters. The division in W1 is also reflected in P1, thus forming a basis for the popular mind-matter convictions.

It should be noted that environmental models should be considered the end-products of a long development which includes the learning of a language which represents the bipartite structure of a W1. This developmental process would also include the process of socialization which, in part at least, contributes to the conceptual dimorphism of thought which is represented in language. As Minsky expresses these points,

An infant is not a monist: it simply hasn't enough structure in [P1] to be a dualist yet; it can hardly be said to have a position on the mind-body problem yet ([MIN 2], p. 46).

Minsky's central argument for the proposition that questions of the form (5.0-3) should be answered in the negative is based upon the commonly held belief in a mind-body dualism which is a consequence of having a developed W1 (and, hence, a developed P1). As we have indicated, when P° is asked a very general question about himself, he answers it by giving a general description of his model of himself, where P1 is his model of himself and P2 is P°'s description of P1. The conventional way in which P° will express the bipartite structure of P1 is to claim that he has a mind as well as a body.
Unfortunately, as Minsky argues, the separation of the two parts of P1 is so vague and indistinct, and their interconnections are so complicated (and, accordingly, so difficult to describe and specify), that P°'s account of his mind-body distinction is inevitably confused and unsatisfactory (see [MIN 2], p. 47, and [MIN 4], p. 449). It seems to follow that P°'s understanding of P1 will, consequently, also be confused and unsatisfactory.

The confusion regarding P1 is based upon more, however, than just the bipartite structure of P1. The concept of a "part" of an environmental model is, itself, quite obscure. As Minsky notes, the concept of a part of a computer program is much more complicated and vague than the concept of a part of a physical object. Apparently, the concept of a "part" of an environmental model is comparable to the concept of a "part" of a computer program with respect to conceptual vagueness (see [MIN 2], p. 47).

We may conclude, then, that since P°'s model P1 of himself is not unified, P°'s understanding of P1 is adversely affected.

It has been argued, by Turing and others (see [TUR 1], passim., and [ARB 1], passim.), that the development of an SNM will have, as one of its consequences, a unification of language (and possibly of science). In view of this, we should note that Minsky's "bipartite model" thesis is based, to some extent, upon the "two language" view in the philosophy of mind. According to this latter view, the mind-body problem is a pseudo-problem which results from a clash between two divergent languages, viz., two divergent ways of talking about the same thing. If this view is correct, then one way to resolve the problem would be to develop a unitary language, as Turing hoped would happen. Turing claimed, in effect, that once such a language was accepted, one could say that artefacts "think" without expecting to be contradicted. He, of course, felt that a unitary language would develop as a matter of course after an SNM had been successfully constructed and its existence had become widely known.

The establishment of a unitary language would, then, likely have the effect that P°'s model P1 of himself would likewise be unified. This effect would be realized, however, only in an ideal situation. As Minsky indicates, the heuristic value of the bipartite model is too great to warrant its unification, even if the means for unifying it became available, viz., as the result of scientific developments (see [MIN 2], p. 47). The primary value of the bipartite model is, apparently, its simplicity. It seems likely that a unified language of the sort we have mentioned would be much too complicated to use for everyday purposes.

We may conclude, then, that P°'s model P1 of himself is likely, for purely practical reasons, to remain bipartite even if an SNM (or SNT) is developed which could potentially result in the unification of language. Since P°'s understanding of P1 depends upon the lack of vagueness in P1's structure, we seem forced to conclude that P° may not be able adequately to understand P1.

We, however, do not view the situation as being as hopeless as Minsky does. As we noted earlier, the mind-body problem is largely based on misconceptions which result from Seventeenth Century mechanism.
Revisions which have taken place in the concept of "matter" within the last century have made the distinction between mind and matter not so much incorrect as irrelevant (for an argument to this effect, see [SMA 3]). There is, however, no assurance that a complete reduction of psychology to physical science could be accomplished (recall that in §3 we stipulated that there may be two adequate explanations for the same event). Nor is there any assurance that such a reduction would be desirable, or that it would resolve the Brain-Model Problem, considering the convenience and the heuristic value of the mind-matter separation in language. We conclude, therefore, that partly by choice and partly by necessity, the bipartite structure of P°'s model P1 of himself is likely to remain.

Considering, then, that P°'s understanding of P1 is likely to be adversely affected by the lack of unity in P1, we turn now to the question of whether or not P1 will be simple enough to be understood, if it is sufficient (i.e., if it constitutes an SNT).

Recent findings regarding self-reproducing automata seem to indicate that an automaton can reproduce itself completely, in all detail (a recent paradox regarding self-reproducing systems, however, remains to be resolved; see [ROS1 2] and [ROS1 3]). The logical consequence of this is that a system can produce another system which is as complex as it is. This consequence reflects favorably on our conclusions regarding the logical possibility of constructing an SNM.

It is, however, impossible for an artefact (or a person) to indicate, by examining and analyzing its internal states from one moment to the next, exactly what it is doing at each step (this point is made by Minsky in various places; see [MIN 1], passim., and [MIN 2], p. 48). An artefact (or person) that attempted to do this could never get beyond the first step. There is, on the other hand, no logical reason why an artefact (or person) cannot comprehend or "understand" its principles of operation. In addition, given enough storage (or memory), it seems possible for an artefact to understand every detail of one of its earlier states. Correspondingly, given enough information regarding specific conditions, and given its principles of operation, an artefact could predict a future state of its own system.

So we have the conclusion that the principles of operation of an artefact or human brain are likely to be simple enough to be understood from moment to moment. Considering the complexity of the systems in question, however, a given state of a system would have to be singled out, recorded in some fashion, and analyzed part by part in order to be understood. Even at present, many of the principles of the operations of individual components of the human brain are known. In this respect, we may expect a person, P, to understand the principles of operation of the basic components of his brain. Likewise, the principles of operation of the basic components of, for example, a digital computer are known; so, if an SNM were physically similar to a digital computer, we may expect that it would be able to understand the principles of operation of its basic components. Considering the complexity of a brain, or of an SNM, however, it seems unlikely that the total systemic functions of either could be understood except, as we have indicated, when a single state is selected, recorded in some manner, and studied in detail.
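The contrast drawn here, between knowing a system's principles of operation and tracking its states from moment to moment, can be illustrated with a trivially small automaton. The sketch below is ours and purely schematic; it assumes, merely for illustration, that "principles of operation" can be represented as a finite transition table, and it is not offered as a design for an SNM.

# A schematic illustration (all names are ours): an artefact that can exhibit
# its principles of operation and analyze one recorded earlier state, though
# it cannot report every internal state "from moment to moment," since each
# report would itself change its state.

class ToyArtefact:
    def __init__(self, rules, state):
        self.rules = dict(rules)   # principles of operation: state -> next state
        self.state = state

    def principles(self):
        """The artefact can exhibit its own principles of operation in full."""
        return dict(self.rules)

    def step(self):
        """One moment-to-moment transition of the live system."""
        self.state = self.rules[self.state]
        return self.state

    def predict(self, recorded_state, n):
        """Given one recorded earlier state and the principles of operation,
        predict the state n steps later without running the live system."""
        s = recorded_state
        for _ in range(n):
            s = self.rules[s]
        return s

if __name__ == "__main__":
    a = ToyArtefact({"q0": "q1", "q1": "q2", "q2": "q0"}, "q0")
    snapshot = a.state            # a single state, singled out and recorded
    a.step(); a.step()            # meanwhile the live system moves on
    print(a.principles())         # knowable in full
    print(a.predict(snapshot, 2)) # analysis of the recorded state: 'q2'
    print(a.state)                # the live system agrees: 'q2'

The recorded snapshot plays the role of the single state that is "singled out, recorded, and analyzed"; what the sketch cannot do, on pain of the regress noted above, is emit a complete report of its own state at every step of that very analysis.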
As in Harmon's example, the deduction of ensemble functions from single-unit functions, in a brain or in an SNM, is as difficult as the problem of deducing the properties of water from one's knowledge of hydrogen and oxygen molecules. We must emphasize, however (as Harmon does), that this is a practical difficulty at our present level of understanding; the deduction of brain or SNM ensemble functions from principles of single-unit functions may not be an in-principle impossibility.

We have three consequences of this conclusion to consider, each related to our earlier findings. Firstly, the Brain-Model Problem indicates that Russell's Test may be very difficult to perform, but it does not make the test impossible to perform. Secondly, regarding the Description Paradox, it should be noticed that the laws of an SNT would apply to any brain state. We have seen that a person may not be able to "know" his brain states from moment to moment; therefore, he could not "know," from moment to moment, successive instances of an SNT (which is satisfied by his brain states). The Description Paradox is fallaciously based on the assumption that a person could know these instances of an SNT; as we have indicated, a person would not get beyond the first instance. Finally, as Benacerraf indicated, one must know the "program" of a neural artefact in order to apply the Gödel Argument to it. Considering our recent findings, it seems impossible that a person could keep up with a self-amending artefact in a matching game; similarly, one artefact could not keep up with another. These observations also provide an indication of why one person cannot apply the Gödel Argument to another person.

Generally speaking, the Brain-Model Problem has the result that, in virtue of deeply entrenched mind-matter convictions, and in virtue of the complexity of the human brain and, consequently, of an SNM, there are considerable practical difficulties involved in having a situation wherein one human brain understands another (or itself), wherein a human brain understands an SNM, and, surprisingly, wherein one SNM understands another SNM (or itself). As we have indicated, the practical difficulties are of two sorts: first, those posed simply by the complexities of the systems in question; secondly, those resulting from the practical and heuristic value of bipartite environmental models (as we noted, the value of the bipartite structure of these models makes it unlikely that they would become unified in any circumstances).

As Minsky appraises this situation,

When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men on their convictions about mind-matter, consciousness, free will and the like. . . . A man's or a machine's strength of conviction about such things tells us nothing about the world, or about the man, except for what it tells us about his model of himself ([MIN 2], p. 48; see also [MIN 4], p. 449).

If this appraisal is correct, then an SNM may be expected to have the same mind-matter convictions that a person does, although this would follow only if the SNM had a bipartite model of itself. The source of the bipartite self-model of a human being seems clear enough, but the source of a similar division in an SNM seems less clear.
Perhaps what Minsky has in mind is the existence of an SNM self-model that has the same simplicity and, hence, the same heuristic value that the self-models of human beings have. In this case, one would have to argue that the model-dimorphism in self-models is a function of model-simplicity. We cannot see how this argument would proceed. A more viable approach might be to argue that an SNM, A°, could not be said to sufficiently (physically) model a human, P°, unless A° had a bipartite model of itself, A1, comparable to P°'s bipartite self-model P1. In any event, the provision of A° with a bipartite self-model A1 may simply be a function of model construction and "programming": A° would be constructed in such a manner that it would be able to communicate in a human language, and human languages presumably embody a representation of the bipartite structure of human environmental models. Hence, it may be the case that, as soon as an SNM were provided with a language, it would also be given the basis for a bipartite self-model and, more generally, a bipartite environmental model. Additionally, a bipartite self-model may be required in order for an SNM to pass a generalized Turing Test. It is not clear that such a self-model would be required in a Turing Test restricted to computational outputs, since it is not evident that computational/mathematical language embodies a bipartite structure.

It is a further consequence of the Brain-Model Problem that an SNM should not be expected to have any better an understanding of its own operations than a human being presently has of his own. Also, it may be expected that, in the sense we have indicated, a person who constructed an SNM would have no better an understanding of the SNM's operations than he does of his own.

§5.2 SUGGESTIONS FOR FUTURE RESEARCH

In the present study, we have found that a great deal of additional work is required in some areas. Firstly, more research is required to resolve the conceptual and linguistic confusions which result from the mind-matter dimorphism present in natural language. Secondly, we have found the need for viable intelligence criteria for both humans and neural artefacts.

Regarding Mechanism1, much additional research of both a conceptual and a linguistic nature remains to be done in order to resolve confusions which result from the existence of two allegedly incompatible modes of language, namely the thing-language and the mind-language. To a large extent, the alleged incompatibility of, for example, actions and "mere" motions, or reasons and causes, is a consequence of this confusion. As we have indicated, a systematic updating of the natural language may resolve the confusion if the connotations carried over from Seventeenth Century mechanism can be removed. A successful completion of this project would presumably remove many mind-matter confusions while leaving the basic linguistic (and conceptual) mind-matter dimorphism intact; as we have shown, this latter dimorphism has a practical and heuristic value which precludes its removal.

Regarding Mechanism2, it seems clear that, in view of the considerable practical difficulties which must be faced in the construction of an SNM, a reevaluation of the long-range goals of cybernetics may be required.
In any event, as we have noted, the final appraisal of an SNM will be based upon a decision and not a discovery. This decision will require adequate criteria for both human and artefact intelligence. It appears that a single criterion would not suffice. In view of the findings of recent behavioral studies (e.g., the works of Ryle and Malcolm), it seems clear that criteria for human intelligence would be applicable only to human beings; it is our contention that this is an uninterestingtruism which in no way affects the possibility of an SNM. Cor-‘ respondingly, we may expect the criteria for artefact.in- telligence to be applicable only to artefacts. Since artefacts and humans are at least nominally non-identical (of course, any physical system is a model of itself), we conclude that their respective intelligence criteria should also be non-identical. An adequate criterion for human intelligence has not yet been developed.. Perhaps an operational definition r If‘ 3:". 179 is the best that can be expected. At any rate, it would seemingly be verydifficult to determine whether an arte- fact was an SNM until intelligence criteria for the system which an SNM models were made clear. We may expect arte- fact intelligence criteria to parallel human intelligence criteria and also be closely associated with criteria for modeling adequacy, in a general sense. In any event, we stress that a firm distinction should be made between human and artefact outputs in order to avoid confusions; to be specific, we stress the point that, in view of the relationship between a prototype and its physical model, humans are said to think, while artefacts may only be said to model thinking. We wish to make one final comment regarding the heuristic value which one might expect to obtain from an SNM. There is a parable about ancient.map-makers who were commissioned to produce a map of a portion of a kingdom. The map was supposed to be complete in every detail. When the map-makers had completed their task, they had a perfect one-one model of a portion of the countryside, but they were dismayed to find that when the map was spread out, it covered the countryside which it was supposed. to model. The moral of this story is that the map which the map-makers produced was, practically speaking, useless; the original countryside which the 180 map-makers were commissioned to model was as useful for navigational purposes as the map which was ultimately produced. They had done their job too well. It may be the case that a similar situation con- fronts cyberneticians. An SNM may well turn out to be no more heuristically valuable than the brain which it models. BIBLIOGRAPHY [ACH [ACH [ACH [ADL [AND [ANI [ANS [APO [ARB [ARB [BEN [BLA [BOH l] 2] 3] l] l] 1] 1] 1] 1] 2] 1] l] 1] BIBLIOGRAPHY Achinstein, Peter. "Models, Analogies, and Theories." Phil. of Sci. 31. No. 4. (Oct. 1964), 328-350. . "Theoretical Models." Brit. J. Phil. Sc1. 16. (May 1965-Feb. 1966), I02-I20. . "Variety and Analogy in Confirmation THe ry." Phil. of Sci. 30. (1963), 207-221. Adler, Irving. Thinking Machines. New York: The John Day Company, 1961. Anderson, Alan R. (ed.) Minds and Machines. Englewood Cliffs, New Jersey: Prentice Hall, Inc., 1964. Anisimov, S. F. Man and Machine. Anscombe, G. E. M. Intention. Cornell University Press, I957. Ithaca, New York: Apostel, L. in the Non-Formal Sciences." (1960), 125-161. "Towards the Formal Study of Models Syntheses. 12. Arbib, Michael A. Brains, Machines and Mathe- matics. New York: McGraw-Hill Book Company, . 
"Simple Self-Reproducing Universal Automata." Inf. and Cont. 9. (1966), 177-189. Benecerraf, Paul. "God, the Devil, and GOdel." The Monist. 51. (1967). Black, Max. Models and Metaphors. Ithaca, New York: Cornell University Press, 1962, 219-243. Bohnert, Herbert. "The 'Intuitive' (Semantic) Proof of 'GOdel's Theorem.'" (Mimeographed, Michigan State University). 181 [BRA [BRA [BRE [BRO [BUR [BUR [BUR [BUR [BYE [CAI [CAI [CAM [CAR [CAR 1] 2] l] l] 1] 2] 3] 4] 1] 1] 2] l] l] 2] 182 Braithwaite, R. B. "Models in the Empirical Sciences." In Logic, Methodolggy, and Philosophy of Science. (Eds.) Nagel, E., Suppes, P., and Tarski, A.’ Stanford University Press: 1962. . Scientific Ex lanation. New York: Harper and Row, 1953, 88-114. ’ Bremerman, H. J. "The Evaluation of Intelligence, The Nervous System as a Model of its Environment." Tech. Rept. No. l, Dept. of Mathematics. Seattle: The University of Washington, 1958. F1 Brodbeck, May. "Models, Meanings, and Theories." In Symposium on Sociological Theory. (Ed.) Gross, L. Evanston: 1959. L‘ Burks, Arthur W. "Computation, Behavior, and Structure in Fixed and Growing Automata." Self- Or anizing Systems. New York: Peragamon, I960, 285-311. , and Wang, Hao. "The Logic of Automata." JACM. 4. (1957), 193-297. . "The Logic of Fixed and Growing Auto- mata." Ann. Comp. Lab. HarvardlUniversit . 29. Cambridge: Harvard University Press, 1957, 147- 188. , and Wright, J. B. "The Theory of Logical Byerly, H. "Model Structures and Model Objects." Brit. J. Phil. Sci. 20. (1969), 135-144. Caianiello, E. R. (Ed.) Automata Theory. New York: Academic Press, 1966. . "Mathematical and Physical Problems in tHe Study of Brain Models." In [CAI 1]. Campbell, N. R. "What is a Theory?" In Physics: The Elements. Cambridge: University Press. Carnap, R. "Discussion: Variety, Analogy, and Periodicity in Inductive Logic." Phil. of Sci. 30. NO. 3. (1963). 222-227. . The Logical Syntax Of Language. 129-134. [CAR [CHU [CHW [COHl 1] [COH2 1] [COH3 1] [COP [CRO [CUL [DAV [DeL [DUH [FAR [FEI 3] 1] l] 1] l] l] 1] l] 1] 1] 1] 183 . "The Methodological Character of Theore- tical Concepts." In Mippesota Studies in the Philosophy of Science. I. Church, Alonzo. Introduction torMathematical LO ic. Princeton, New Jersey: Princeton Uni- ver81ty Press, 1956. Chwistek, L. "A Formal Proof of GOdel's Theorem." J.S.L. 4. No. 2. (June, 1939), 61-68. Cohen, Jonathan. "Can There be Artificial Minds?" Analysis. 16. (1955), 36-41. Cohen, L. D. "Descartes and Henry More on the Beast Machine: A Translation of Their Corres- pondence Pertaining to Animal Automation." Am. Pgl. I. (1936), 48-61. Cohen, Morris R. and Nagel, E. An Introduction to Logic and Scientific Methgd. New York: Har- court, Brace, and Company, 1934, 137-141; 221-223; 286-289; 367-376. COpi, Irving M., Elgot, Calvin C., and Wright, Jesse B. "Realization of Events by Logical Nets." JACM. 5. No. 2. (1958), 181-196. Crosson, Fredrick J. and Sayre, Kenneth M. (Eds.) Philosophy and beernetics. New York: Simon and Schuster, 1967. . Culbertson, J. The Minds of Robots. Urbana: The University of Illinois Press, 1963. Davidson, Donald. "Actions, Reasons, and Causes." Jo Phil. LX. NO. 23. (NOV. ' 1963) 1 685-7000 DeLong, Howard. "Unsolved Problems in Arithmetic." Sci. Amer. 224. No. 3. (March, 1971), 50-60. Duhem, Pierre. The Aim and Structure of Ph sical Theogy. New York: Athenuem,1962. Part I, Chapter 4. Farré, G. L. "Remarks on Swanson's Theory of Models." Brit. J. Phil. Sci. 18. No. 2. (1967), 140-147. Feigenbaum, E. A. and Feldman, J. 
Computers and Thou ht. New York: McGraw-Hill Book Company 13+“ . ' 184 [FLE l] Flew, A. Mind, Body, and Death. MacMillan. [FOG l] Fogel, L. J. Biotechnolo : Conce ts and A - plications. EngIewood CIi¥fs: Prentice HaII, 1963. [FRE 1] Freudenthal, Hans. (Ed.) The Concept and the Role of the Model in Mathematics and Natural and Social Sciences. D. Reidel Publishing Company. Dordrech, Holland, 1961. [FRE 2] . "Models and Probability." Syntheses. 12. (I960). [GEO 1] George, F. H. Automation, Cybernetics, and Society. London: Leonard Hill Ltd., I960. [GOD l] GOdel, Kurt. On Formally Undecidable Prgpggi- tions. Meltzer, B. (transf) New York: Basic Books, Inc., 1962. [G00 1] Good, I. J. "GOdel's Theorem is a Red Herring." Brit. J. Phil. Sci. 19. (1969), 357-358. [G00 2] . "Human and Machine Logic." Brit. J. Phil. Sci. 18. (1967), 144-147. [G00 3] . "Logic of Man and Machine." The New [GRO l] Groenewald, H. J. "The Model in Physics." Syn- these. 12. (1960), 222-227. [GRU P] Grfinbaum, Adolf. "Are Infinity Machines Paradoxi- cal?" Science. 159, 396-406. [GUN 1] Gunderson, Keith. "The Imitation Game." In [HAL 1] Hall, Richard J. "What Do We Immediately Per- ceive?" (Unpublished manuscript, Michigan State University). [HAN 1] Hanna, J. "Explanation, Prediction, and Descrip- tion." Synthese. 20. (1969). [HARl l] Harmon, L. "Problems in Neural Modeling." In Ngupgl Theory and Modeling. (Ed.) Reiss, R. F. Stanford: Stanford University Press, 1964. [HAR1 2] [HAR2 l] [HAR3 l] [HEM 1‘ (Ll [HES [HES [HES [HES [HOL [HOO [HUT [JAC [JAK [JOR [KEM [KEM [KLE i] 11 21 31 41 1] 1] 1] 1] l] l] 1] 2] l] 185 , and Lewis, E. "Neural Modeling." Bell TeIepHone System Tech. Pub. Mono. 5228. Harré, R. Theories and Things. London: Sheed and Ward, 1961. Harth, Erich M. "Brain Models and Thought Pro- cesses." In [CAI l]. Aspects of Scientific Ex lanation. The Free Press, 1965, 433-44I, 445-447. "Analogy and Confirmation Theory." Hempel, C. New York: Hesse, Mary B. Phil. of Sci. 30. (1963), 207-221. . Models and Analo ies in Science. New ., York: Sheed and Ward, 196 . [ . "Models and Analogy in Science." Encyc. of Philosophy, Vol. 5, 354-358. "Models in Physics." Brit. J. Phil. Sci. 4. No. 15. (Nov., 1953). Holland, J. H. "Outline for a Logical Theory of Adaptive Systems." J. Assoc. Comput. Mach. .9. (1962), 297-314. Hook, S. (Ed.) Dimensions of Mind. New York: New York University Press, I965. Hutten, E. The Language of Modern Ph sics. New York: The MacMillan Company, 1956, 81-89. Homer. 46. "On Models of Reproduction." Jacobson, Amer. Sci. Jaki, S. Brain, Mind, and Computers. New York: Herder and Herder, 1969. Rev. "Determinism's Dilemma." Jordan, James N. Meta. 23. No. l. AlPhilosopher Looks at Science. Kemeny, J. G. P_ VanNostrand, 1959. Princeton: . "Man Viewed as a Machine." In [WAL l]. Kleene, Stephen C. Introduction to Metamathematics. Princeton: VanNostrand, 1952. [KLE [KRI [KUI [LAS [LET [LUC [LUC [MAC [MAC [MAC [MAC [MAL 1] 1] l] 1] 1] 1] 2] 1] 2] 3] 4] l] 2] 3] 4] 186 . "Representation of Events in Nerve Nets and Finite Automata." In [SHA 1]. Krimerman, Leonard I. (Ed.) The Nature and §c0pe of_Social Science. New York: Appleton- Century-Crofts, 1969, 334-349. Kuipers, A. 12. (1960), "Model and Insight." 249-256. Synthese. Lashley, K. S. Brain Mechanisms and Intelligence. New York: Dover Publications, Inc., I9 3. n Lettvin, Jerome Y., Chung, S., and Raymond, S. i "Multiple Meanings in Single Visual Units." (Un- published manuscript.) Lucas, J. R. "Human and Machine Logic: A Re- i joinder." Brit. J. Phil. Sci. 19. 
(1968), 155-156. . "Minds, Machines, and GOdel." Phil. 36. (I961). 112-127. Reprinted in [AND 1|. MacKay, D. M. "Brain and Will." In [VES l]. . "The Epistemological Problem for Auto- " In [SHA.l]. mata. . "From Mechanism to Mind." In Brain ana Mind. (Ed.) Smythies, J. R. . "Mindlike Behavior in Artefacts." Brit. J. Phil. Sci. 2. (May l951-Feb., 1952), 105-121. Malcolm, Norman. In Metameditations. "Dreaming and Skepticism." (Eds.) Sesonske and Fleming. . "The Conceivability of Mechanism." Phil. Rev. LXXVII. No. 1. (Jan., 1968), 45-72. Reprinted in [KRI 1]. We shall use the pagination in [KRI 1] throughout. . "Knowledge of Other Minds." 56. (1959). . "Scientific Materialism and the Identity Theory." In The Mind-Brain Identity Theo . (Ed.) Borst, C. V. London: MacMillan, 1970. J. Phil. [MAR [MAY [McC [McC [MEI [MEN [MIN [MIN [MIN [MIN [MIN [M00 [M00 [MOR 1] 1] l] l] 2] l] l] l] 2] 3] 4] 5] l] 2] l] 187 Martin, Michael. "Discussion: On the Conceiv- ability Of Mechanism." Phil. of Sci. 38. NO. 1. (March 1971), 79-86. Maxfield, Myles; Callahan, A. and Fogel, L. Biophyslcs and Cybernetic Systemg. Washington, D.C.: Spartan Books, Inc., 1965. Mays, W. "The Hypothesis of Cybernetics." Brit. J. Phil. Sci. 2. (May 1951-Feb., 1952), 249-250. Embodimepts of Mind. 1965 O McCulloch, Warren S. Cam- bridge: The M.I.T. Press, , and Pitts, Walter. "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bull. Math. Biophysics. 5. (1943), 115-133. Meiland, Jack W. The Nature of Intention. London: Methuen and Company, Ltd., 1970. Princeton: Mendelson, E. Mathematical Logic. VanNostrand Co., Inc., 1964. Minsky, Marvin L. finite Machines. Prentice Hall, Computationi:_Finite and In- Englewood Cliffs, New Jersey: Inc., 1967. . "Matter, Mind, and Models." Proc. IFIP Congress. Vol. I. Spartan Books, 45-50. . "Some Universal Elements for Finite Automata." In [SHA 1]. . "Steps Toward Artificial Intelligence." In FBI 1]. . Theory of Neural Analgg Reinforcement §ystems and its Application to the Brain-Model Problem. PH.D. Dissertation, Princeton Univer- Sity. Moore, E. F. "Machine Models of Self-Reproduction." In "Mathematical Problems in the Biological Scien- ces." Proc. §ymp. Appl. Math. 4, 17-33. . Sequential Machines. Addison-Wesley Publishing Company, Inc. Morowitz, Harold J. "A Model of Reproduction." Amer. Sci. 47. (1959), 261-263. [NAG [NAG [NEU [NEU [NEU [NEW [PAS [PEN [PEN [PEN [PEN [PEN [PER [PER 1] 2] 1] 2] 3] l] 1] 1] 2] 3] 4] 5] 1] 2] 188 Nagel, E. and Newman, J. GOdel's Proof. New York:' New York University Press, 1964. Nagel, E. The Structure of Science. New York: Harcourt, Brace, and World, Inc., 1961. von Neuman, John. The Comppper and the Brain. New Haven: Yale University Press, 1958. . "The General and Logical Theory of Automata." In "Cerebral Mechanisms in Behavior." 1 Proc. Hixon Symp, (Ed.) Jeffries, L. A. New York: Wiley, 1951, 1-31. 1 but- 11." ' .__ . The Theory of Automata: Construction, Reproduction, Homogenity. (Ed.) Burks, A. W. Urbana: University 0 Illinois Press, 1966. Newell, Allen and Simon, H. "That 'Computer Be- haviorism' can Account for Human Thinking." In The Nature and Sgope of Social Science. New York: Appleton-Century-Crofts, 1969, 289-305. Passmore, John. Philosophical Reasoning. New York: Charles Scribner's Sons, 1961. Penrose, L. S. and Penrose, R. "A Self- Reproducing Analogue." Nature. 179. No. 4571. (1957). ‘_-—_" Penrose, L. S. "Mechanics of Self-Reproduction." Ann. Human Gen. 23. (1958), 59-72. . "Automatic Mechanical Self- Reproduction." New Biology. 28. 
[PEN 4]   ______. "Self-Reproducing Machines." Sci. Amer. 200. No. 6. (1959), 105-114.
[PEN 5]   ______. "Developments in the Theory of Self-Replication." New Biology. 31.
[PER 1]   Perkel, D. H., and Moore, G. P. "A Defense of Neural Modeling." In Biophysics and Cybernetic Systems. [MAX 1].
[PER 2]   Perkel, D. H. and Bullock, T. H. "Neural Coding." Neurosci. Res. Prog. Bull. 6. No. 3. (Dec., 1968).
[PIR 1]   Pirenne, M. H. "Mind-Like Behavior in Artefacts and the Concept of Mind." Brit. J. Phil. Sci. 2. (May 1951-Feb., 1952), 315-317.
[PLA 1]   Place, U. T. "Consciousness is Just a Brain Process." In [FLE 1].
[POL 1]   Polanyi, Michael. "The Hypothesis of Cybernetics." Brit. J. Phil. Sci. 2. (May 1951-Feb., 1952), 312-315.
[POP 1]   Popper, K. R. "Indeterminism in Quantum Physics and Classical Physics. II." Brit. J. Phil. Sci. 1. (1951), 179-188.
[POP 2]   ______. The Logic of Scientific Discovery. New York: Harper & Row, 1959.
[PUT 1]   Putnam, H. "Minds and Machines." In [AND 1].
[PUT 2]   ______. "Review of [NAG 1]." Phil. Sci. 27. (1960), 205-207.
[PUT 3]   ______. "Robots: Machines or Artificially Created Life." J. Phil. 61. No. 21. (Nov., 1964), 668-691.
[QUI 1]   Quine, W. V. O. Mathematical Logic. New York: Harper, 1965. Chapter 7.
[QUI 2]   ______. Methods of Logic. 245-248.
[QUI 3]   ______. "Two Dogmas of Empiricism." In Philosophy of Mathematics. (Eds.) Benacerraf, P. and Putnam, H.
[QUI 4]   ______. Word and Object.
[RAB 1]   Rabin, M. O. and Scott, D. "Finite Automata and their Decision Problems." IBM J. Res. Devlop.
[RAB 2]   Rabin, M. O. "Probabilistic Automata." In
[RAS 1]   Rashevsky, Nicholas. Mathematical Biophysics. Chicago: 1938.
[RAS 2]   ______. Advances and Applications of Mathematical Biology. Chicago: University of Chicago Press, 1940.
[ROG 1]   Rogers, Hartley. Theory of Recursive Functions and Effective Computability. (Mimeographed, Massachusetts Institute of Technology), 152ff.
[ROO 1]   Rood, Harold. "Malcolm and 'Other Minds.'" (Mimeographed, Michigan State University.)
[ROS1 1]  Rosen, Robert. "A Relational Theory of Biological Systems." Bull. Math. Biophysics. 20. (1958), 245-260.
[ROS1 2]  ______. "On a Logical Paradox Implicit in the Notion of a Self-Reproducing Automaton." Bull. Math. Biophysics. 21. (1959), 387-395.
[ROS1 3]  ______. "The Representation of Biological Systems from the Standpoint of Categories." Bull. Math. Biophysics. 20. (1958), 317-341.
[ROS2 1]  Rosenbloom, Paul C. The Elements of Mathematical Logic. New York: Dover Publications, Inc., 1950.
[ROS3 1]  Rosenblueth, Arturo. Mind and Brain. Cambridge: The MIT Press, 1970.
[ROS3 2]  ______, and Wiener, N. "The Role of Models in Science." Phil. of Sci. 12. (1945), 316-321.
[ROS4 1]  Rosser, J. B. "An Informal Exposition of Proofs of Gödel's Theorems and Church's Theorem." J. S. L. 4. (1939), 15-24.
[RUS 1]   Russell, Bertrand. "Human Knowledge." In [VES 1], 274.
[RYL 1]   Ryle, Gilbert. The Concept of Mind. New York: Barnes and Noble, Inc., 1949.
[SCH 1]   Scheffler, I. Anatomy of Inquiry.
[SCR 1]   Scriven, M. "The Compleat Robot." In [HOO 1].
[SCR 2]   ______. "The Mechanical Concept of Mind." In [AND 1].
[SHA 1]   Shannon, C. E. and McCarthy, J. (Eds.) Automata Studies. Princeton: Princeton University Press, 1956.
[SHA 2]   ______. "vonNeumann's Contributions to Automata Theory." Bull. Amer. Math. Soc. 64. (1958), 123-129.
[SHA 3]   ______. "A Universal Turing Machine With Two Internal States." In [SHA 1].
[SMA 1]   Smart, J. J. C. Between Science and Philosophy. New York: Random House, 1968.
[SMA 2]   ______. "Gödel's Theorem, Church's Theorem, and Mechanism." Synthese. 13. No. 2. (1961), 105-110.
[SMA 3]   ______. "Sensation and Brain Processes." In
[SMY 1]   Smythies, J. R. Brain and Mind. London: Routledge and Kegan Paul, 1965.
[SPE 1]   Spector, M. "Models and Theories." Brit. J. Phil. Sci. 16. (1965).
[STA 1]   Stark, L. and Dickson, J. F. "Mathematical Concepts of Central Nervous System Function." Neurosci. Res. Prog. Bull. 3. No. 2. (1965), 24-39.
[SUP 1]   Suppes, Patrick. "A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences." Synthese. 12. (1960), 287-301.
[SUP 2]   ______. Introduction to Logic. Princeton: VanNostrand Company, Inc., 1957, Chapter 12.
[SWA 1]   Swanson, J. W. "On Models." Brit. J. Phil. Sci. 17. No. 4. (1966), 297-311.
[TAR 1]   Tarski, Alfred. "Contribution to the Theory of Models." Indagationes Mathematicae. 16. (1954), 527-588; and 17. (1955), 56-64.
[TAR 2]   ______. Logic, Semantics, and Metamathematics. Oxford: The Clarendon Press, 1956, 415-420.
[THA 1]   Thatcher, J. W. "The Construction of a Self-Describing Turing Machine." Proc. Symp. Math. Theo. Automata. (1963).
[TOU 1]   Toulmin, Stephen. "Brain and Language: A Commentary." (Mimeographed, Michigan State University.)

APPENDIX I

APP. I(A): [JOR 1]; DETERMINISM'S DILEMMA (DD)

(DD-1)  If determinism is true, recognizing a proposition's logical truth is always also a matter of what, and only what, "heredity and environment" happen to suggest.

(DD-2)  It is possible that "heredity and environment" are so beneficent as always to suggest the right things.

(DD-3)  And it is possible that, even if they do not, they should sometimes or always contrive that we fail to affirm their false suggestions.

(DD-4)  But, neither of these is in the least likely, and even if either were likely, we would not have any reason to think so beyond what "heredity and environment" themselves suggest.

(DD-5)  Therefore, if all events have sufficient causal connections, such as "heredity and environment," any warrantable acceptance of logical truths and of arguments as valid is impossible, and there can be no justifiable argument for any thesis, including determinism.

APP. I(B): [MAL 2]; MALCOLM'S RATIONAL PARADOX (MRP)

(MRP-1)  Saying or doing something for a reason (in the sense of grounds as well as purpose) implies that the saying or doing is intentional.

(MRP-2)  . . . mechanism is incompatible with the intentionality of behavior.

(MRP-3)  [Therefore], . . . my acceptance of mechanism as true for myself would imply that I am incapable of saying or doing anything for a reason.

(MRP-4)  [Therefore, if mechanism is true for me], . . . it would imply that I am incapable of having rational grounds for asserting anything, including mechanism.

APP. I(C): [TOU 5]; TOWNES' PARADOX (TP)

Let 'S' =df 'Strictly causal brain mechanisms underlie all rational thought processes.' Then we have,

(TP-1)  If S is true, then belief in S is non-rational.

(TP-2)  If belief in S is rational, then S is false.

(TP-3)  Therefore, S cannot both be true and be rationally believed.
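The propositional skeleton of (TP) can be displayed compactly. Writing 'R' as an abbreviation for 'belief in S is rational' (an abbreviation introduced here only for illustration, not one used in [TOU 5] itself), the argument has the form

\[
\begin{array}{ll}
\text{(TP-1)} & S \rightarrow \neg R \\
\text{(TP-2)} & R \rightarrow \neg S \\
\text{(TP-3)} & \therefore\; \neg (S \wedge R)
\end{array}
\]

Since (TP-2) is simply the contrapositive of (TP-1), the conclusion already follows from (TP-1) alone: if S and R both held, (TP-1) would yield not-R, contradicting R.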
APPENDIX II

APP. II: COMMENT

These arguments against Mechanism2 are included here in order to make clear some of the differences in various formulations of the Gödel Argument and to make it convenient to refer to these differences in the text above. Regarding APP. II(F), the chronology of the debate regarding the Gödel Argument, which we have presented here, is not intended to be exhaustive, but is, rather, intended to be merely representative.

APP. II(A): [ROS2 1], (1950)

(R-1)  ". . . one can probably prove rigorously that a machine can solve any problem which a brain, according to Wiener's model, can solve." (See [WIE 1], passim.)

(R-2)  "We may consider intelligence as the capacity for introspection, the faculty of thinking about one's own methods of reasoning and what they can accomplish."

(R-3)  "In mathematical terms, this means the capacity of using a syntax language for reasoning about an object language."

(R-4)  "It seems altogether feasible to incorporate this idea into a mathematical definition of a brain and to prove that a brain can solve some problems which a machine cannot."

(R-5)  "Theorem 4.3.4 shows that a certain problem cannot be solved by machines, i.e., that brains are necessary." (Theorem 4.3.4 is equivalent to Gödel's first theorem.)

APP. II(B): [NAG 1], (1958)

(N-1)  "Today's calculating machines have a fixed set of directives built into them; these directives correspond to the fixed rules of inference of formalized axiomatic procedure."

(N-2)  "Gödel showed in his incompleteness theorem, there are innumerable problems in elementary number theory that fall outside the scope of a fixed axiomatic method."

(N-3)  "Given a definite problem, a machine of this type might be built for solving it, . . ."

(N-4)  ". . . but no such machine can be built for solving every problem."

(N-5)  "The human brain may, to be sure, have built-in limitations of its own, and there may be mathematical problems it is incapable of solving."

(N-6)  "But, even so, the brain appears to embody a structure of rules of operation which is far more powerful than the structure of currently conceived artificial machines."

(N-7)  Therefore, "There is no immediate prospect of replacing the human mind by robots."

APP. II(C): [KEM 1], (1959)

(K-1)  "We can give a general definition of what a machine is."

(K-2)  "This definition is so general that it certainly includes anything we have thought of so far, and it is difficult to see how we could build a machine violating this description."

(K-3)  "Given such a machine, we can always find a problem of the kind it can understand, which it cannot solve."

(K-4)  "Of course, there will be a better machine which can solve this particular problem, but we can ask an even tougher question which it in turn cannot answer."

(K-5)  "These results prove rigorously that every conceivable thinking machine has its clearly defined limitations."

(K-6)  "And yet it is not clear that a human being, given enough time, could not solve all these problems."

(K-7)  The "philosophical conclusion" is, ". . . we are essentially different from machines."

APP. II(D)(1.): [LUC 2], (1961)

(L-0)   "Gödel's Theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines."

(L-0')  "Gödel's Theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true."

(L1-1)  "Gödel's theorem must apply to cybernetical machines . . ."

(L1-2)  ". . . it is the essence of being a machine, that it should be a concrete instantiation of a formal system."

(L1-3)  "It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true--i.e., the formula is unprovable-in-the-system--but which we can see to be true."

(L1-4)  "It follows that no machine can be a complete or adequate model of the mind . . ."

(L1-5)  Therefore, ". . . minds are essentially different from machines."
APP. II(D)(2.): [LUC 2], (1961)

(L2-1)   "We understand by a cybernetical machine an apparatus which performs a set of operations according to a definite set of rules."

(L2-2)   The outputs of such a machine are determined by the rules and axioms, and, hence, could, given enough time, be calculated.

(L2-3)   "Machines are definite: anything which was indefinite or infinite we should not count as a machine."

(L2-4)   "If there are only a definite number of types of operation and initial assumptions built into the systems, we can represent them all by suitable symbols written down on paper."

(L2-5)   "However long the machine went on operating, we could, given enough time, paper, and patience, write down an analogue of the machine's operation."

(L2-6)   "This analogue would in fact be a formal proof . . ."

(L2-7)   "We now construct a Gödelian formula in this formal system."

(L2-8)   "This formula cannot be proved-in-the-system."

(L2-9)   "Therefore the machine cannot produce the corresponding formula as being true."

(L2-10)  "But we can see that the Gödelian formula is true: any rational being could follow Gödel's argument, and convince himself that the Gödelian formula, although unprovable-in-the-system, was nonetheless--in fact, for that very reason--true."

(L2-11)  Therefore, ". . . for every machine there is a truth which it cannot produce as being true, but which a mind can."

(L2-12)  "This shows that a machine cannot be a complete and adequate model of the mind."

(L2-13)  Therefore, ". . . we can never, not even in principle, have a mechanical model of the mind."

APP. II(E)(1.): [JAK 1], (1969)

(J1-1)  ". . . machines are formal systems and need no metamathematical proofs to work efficiently."

(J1-2)  "'Formal' in this context means specified by a finite number of components to perform well-defined functions."

(J1-3)  "For a machine to be a machine, it can have only a finite number of components and it can operate on a finite number of initial assumptions."

(J1-4)  "The introduction of a randomizing device . . . will allow the machine to choose only among a definite number of possible alternatives."

(J1-5)  "And as Gödel's theorem suggests, it is a basic shortcoming of all such systems, whether existing only on paper as theorems of arithmetic or implemented mechanically, that they rely on a system extraneous to them for their proof of consistency."

(J1-6)  "The applicability of Gödel's theorem which states that no strong enough arithmetic theory can have in itself its proof of consistency to the mind-machine relationship can easily be seen if one postulates that since minds can do arithmetic, any artificial mind should be able to do the same."

(J1-7)  "Yet a machine, being a formal system, can never produce at least one truth, which the mind can without relying on other minds."

(J1-8)  "Yet, no matter how perfect the machine, it can never do everything that the human mind can, or more concretely, no machine can be built that could simulate every aspect of mindlike behavior."

APP. II(E)(2.): [JAK 1], (1969)

(J2-1)  "In order for . . . a construction to qualify as a machine, it must be in some sense finite and definite . . ."

(J2-2)  ". . . as such, it would not have its proof of consistency within itself."

(J2-3)  "It follows, therefore, that the mechanist cannot even in principle devise an individual machine that might serve as an adequate model for the mind."

(J2-4)  "And since machines are of necessity built of physical or chemical components, it also follows that the human mind cannot be fully explained in terms of physics and chemistry."
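For ease of reference, the formulations in APP. II(A) through II(E) all lean on some version of Gödel's 1931 results, which may be paraphrased roughly as follows. The statement below is a standard informal paraphrase in the spirit of [GOD 1], [NAG 1], and [ROS4 1]; it is not a quotation from any of the authors cited above.

\[
\begin{array}{l}
\text{If } T \text{ is a consistent, effectively axiomatized formal system containing elementary arithmetic, then:}\\[2pt]
\text{(I)\ \ there is a sentence } G_T \text{ such that } T \nvdash G_T, \text{ although } G_T \text{ is true;}\\[2pt]
\text{(II)\ } T \nvdash \mathrm{Con}(T), \text{ i.e., } T \text{ cannot prove its own consistency.}
\end{array}
\]

The formulations differ chiefly in which half they invoke: [ROS2 1], [NAG 1], [KEM 1], and [LUC 2] appeal to (I), while [JAK 1] appeals mainly to (II).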
"And since machines are of necessity built of physical or chemical components, it also follows that the human mind cannot be fully explained in terms of physics and chemistry." APP. II(F): 207 A REPRESENTATIVE CHRONOLOGY OF THE GODEL PROBLEM DEBATE YEARS PAPERS 1931 [GOD 1] MENTALISTSZ MECHANIST82 1950 [Ros2 11 [TUR l] 1951 [POP 11 1958 [NAG 1] 1959 [KEM 1] 1960 [GEO 1] [PUT 2] [PUT 1] [SCR 1] 1961 [LUC 2] [SMA 2] 1964 [ARB 1] 1965 [G00 3] 1967 [BEN 1] [G00 2] [MIN 1] 1968 [LUC 1] 1969 [G00 1] [JAK 1] (Table APP. II - l) M7!]!1]![I]j]]]fl[fl]fil][WHIWWWEs 47 0557