A THEORY OF NEUROMIME NETS CONTAINING
RECURRENT INHIBITION, WITH AN ANALYSIS
OF A HIPPOCAMPUS MODEL

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
DUANE G. LEET
1971
This is to certify that the
thesis entitled
A THEORY OF NEUROMIME NETS CONTAINING
RECURRENT INHIBITION, WITH AN ANALYSIS
OF A HIPPOCAMPUS MODEL.
presented by
Duane G. Leet
has been accepted towards fulfillment
of the requirements for
Ph. D. degree in Systems Science
Major professor
Date April 30, 1971
ABSTRACT
A THEORY OF NEUROMIME NETS CONTAINING
RECURRENT INHIBITION, WITH AN ANALYSIS
OF A HIPPOCAMPUS MODEL
BY
Duane G. Leet
A novel system component called the functal can be set to
realize any one of many different functions. A functal net is an
interconnected array of functals, function generators, and delays.
Some fundamental time-domain properties of these nets are developed.
A functal net model of recurrent inhibition as found in the
CA3 sector of mammalian hippocampus is presented. The model
contains a rank of functals, which are somewhat like adaptive threshold
logic units, and a rank of function generators, which are threshold
logic units. The two ranks are interconnected through delays, and the
function generators inhibit the functals. The only assumption on the
connectivity between ranks is that, for each element in the first rank,
there exists at least one direct circuit path from that element through
some element of the second rank and then back to itself.
The most important characteristic of the model's input-output
transformation is that a single input can be transformed into a
sequence of outputs. This sequence terminates, for a given input,
with the continuous repetition of either a single output or a sequence
of outputs. Some properties of the model's output sequences are
derived, and an algorithm is developed for generating, for any
particular N-functal net, all output sequences which that net could
possibly produce.
A trainable functal net is one in which the functions realized by
its functals are under the control of an external structure called the
trainer, which operates according to a specified algorithm. Both
the trainer and the trainable functal net are part of a new canonical
system called a functal system, some fundamental properties of which
are discussed.
The CA3 model is incorporated into an automata-theoretic
model of the hippocampus that is designed to take advantage of
certain of the CA3 model's properties. There does not exist a
training algorithm for this model that can always change the function
realized by one of its functals to any other arbitrarily specified
function. But an algorithm is given that can produce defined changes
as long as the parameters of the CA3 model meet certain
specifications.
The function realized by a functal system model whenever it
is placed in a new environment is called the initial function. The
selection of initial functions is discussed, and an algorithm is
derived to select them automatically.
A THEORY OF NEUROMIME NETS CONTAINING
RECURRENT INHIBITION, WITH AN ANALYSIS
OF A HIPPOCAMPUS MODEL
BY
Duane G. Leet
A THESIS
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
DOCTOR OF PHILOSOPHY
Department of Electrical Engineering and Systems Science
1971
ACKNOWLEDGEMENTS
The author gratefully acknowledges the aid and encourage-
ment given by his academic advisor, Dr. William Kilmer, not
only on matters related to the research project, but on innumerable
other matters as well.
The author's father, Gerald Leet, his grandparents, James
and Ruby Leet, his in-laws, Floyd and Irene Layman, and other
members of his wife's family have contributed financial and spiri-
tual support during his graduate studies. The author is also deeply
indebted to the late Dr. Leroy Augenstein for his support and
encouragement.
Finally, this thesis could not have been completed without
the inexhaustible patience, cheerfulness, and sustaining support
of the author's wife, Chris.
NOTATION

HW(X)                Hamming weight: HW(0101101) = 4
01x0x                {01000, 01001, 01100, 01101}
f(01x0x) = 1         f(01000) = f(01001) = f(01100) = f(01101) = 1
X ⊃ Y (X covers Y)   HW(X) ≥ HW(Y) and X·Y = ||Y||²
iff                  if and only if
s.t.                 such that
disc. fcn.           discriminant function
                     {v = (v₁, v₂, ..., vₙ) : vᵢ a non-negative
                     integer, 1 ≤ i ≤ n}
                     {v = (v₁, v₂, ..., vₘ) : vᵢ ∈ {0, 1, 2}}
                     {v = (v₁, v₂, ..., vₙ) : vᵢ ∈ {0, 1}}
||·||                the cardinality, or number of elements, of a set
||Y||                Σ_{i=1}^n yᵢ, Y = (y₁, ..., yₙ)
X·Y                  Σ_{i=1}^n xᵢyᵢ, Y as above and X of the same form
mf                   mossy fiber
TABLE OF CONTENTS

CHAPTER 1. INTRODUCTION
1.1. What is the Function of Recurrent Inhibition?
1.2. Hippocampus Morphology
1.3. The Expositional Problem
1.4. What is the Function of the Hippocampus?

CHAPTER 2. FUNCTAL SYSTEMS
2.1. An Informal Description of the Functal System
2.2. The Functal
2.3. The Functal Net
2.4. The Concepts of State and Output Foundations
2.5. Properties of Output and State Levels
2.5.1. Properties of State Levels
2.5.2. Properties of Output Levels
2.6. The Target Table Generator
2.7. The Target Table
2.8. A General Training Structure and Algorithm
2.8.1. Movement Within Foundations Under Input and Function Change
2.8.2. The Operation of the Trainer Under Input or Function Change
2.9. The Trainer, Target Table Relationship
2.10. Measures of Functal Net Performance
2.11. The Functal System Analysis Problem

CHAPTER 3. A FUNCTAL SYSTEM MODEL OF THE HIPPOCAMPUS. PART 1
3.1. Introduction
3.2. The Input and Output Sets of the Hippocampus System Model
3.3. The Input Buffer
3.4. The CA3 Sector Net
3.4.1. Introduction
3.4.2. The Pyramidal Cell Model: The Pyramidal Cell Logic Unit
3.4.3. The Basket Cell Model: The Basket Cell Logic Unit
3.4.4. The Connectivity and Delays
3.4.5. The Operational Algorithm
3.5. The Output Buffer

CHAPTER 4. PROPERTIES OF THE CA3 SECTOR NET
4.1. Introduction
4.2. The Output Sequence Set of a PCLU
4.3. The Output Sequence Set of the Hippocampus Net
4.4. Rules for Successful CA3 Sector Net Training Using Algorithm 4.1.2

CHAPTER 5. A FUNCTAL SYSTEM MODEL OF THE HIPPOCAMPUS. PART 2
5.1. The Target Table
5.2. The Trainer
5.3. The Error Correction Information for the Change of Function Controller
5.4. The Read/Write Head and Its Controller

CHAPTER 6. THE INITIAL CONDITIONS PROBLEM
6.1. The Phase Concept
6.2. Phase 1: Target Table Training

CHAPTER 7. DISCUSSION
7.1. Summary
7.2. Comments on the Neuroscientific Aspects of this Study
7.3. Comments on the Engineering Aspects of the Study

LIST OF REFERENCES

APPENDIX A. BACKGROUND ON THE DEVELOPMENT OF THE HIPPOCAMPUS NET
A.1. The Pyramidal Cell Logic Unit
A.2. The Basket Cell Logic Unit
A.3. The Connectivity

APPENDIX B. A COMPUTER PROGRAM BASED ON ALGORITHM 4.3.1
LIST OF TABLES

Table 3.3.1   The Truth Table for the Input Buffer of the Functal
              Systems Model of the Hippocampus
Table 4.2.1   The Computations, Over Several Periods, of the CA3
              Sector Net
Table 5.2.1   The Truth Table for the Change-of-State Controller
Table 6.2.1   The Function Assignment for Example 6.2.1
LIST OF FIGURES

Figure 1.1.1   A schematic of a section of the CA3 sector of the
               hippocampus.
Figure 1.2.1   Dorsal hippocampus and connections.
Figure 1.3.1   A phase diagram for the example given in Section 1.3.
Figure 2.1.1   The functal system surrounded by an environment.
Figure 2.5.1   A typical state level structure.
Figure 2.8.1   The basic trainer structure of a functal system.
Figure 3.1.1   The overall structure of the hippocampus system model.
Figure 3.4.1   The pyramidal cell logic unit.
Figure 3.4.2   The basket cell logic unit (BCLU).
Figure 3.4.3   A general form of connectivity for the CA3 sector net.
Figure 5.1.1   A PCLU and its special BCLU.
Figure 5.2.1   The training structure of the functal system model of
               the hippocampus.
Figure 5.4.1   Read-head controller, target table relationship for a
               target table sequence.
Figure A.1     The pyramidal cell firing rate equations.
Figure A.2     The basket cell firing rate equations.
Figure B.1     FORTRAN listing for TTABLE.
CHAPTER 1
INTRODUCTION
1.1. What is the Function of Recurrent Inhibition?
Recurrent inhibition can be described in terms of components
and connectivity and interneuronal relationships. The components,
which are neurons, are arranged in two ranks. The first rank re-
ceives inputs from elsewhere in the nervous system and from
neurons in the second rank; it sends outputs elsewhere and to
neurons in the second rank (Figure 1.1.1). The second rank
receives inputs only from the first rank and sends outputs only to
the first rank. The interneuronal relationship, termed interneuronal
inhibition, holds when a neuron in the second rank decrements the
impulse frequencies of those neurons in the first rank to which it is
connected.
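The two-rank arrangement above can be sketched as a discrete-time update rule. The following is a minimal illustration only: the drive values, connectivity, and inhibition gain are hypothetical and are not parameters taken from this thesis.

```python
# A minimal discrete-time sketch of two-rank recurrent inhibition.
# Rank 1 neurons receive an external drive; rank 2 neurons receive
# input only from rank 1 and feed inhibition back to rank 1.
# All numbers (drive, gain) are illustrative only.

def step(rank1, rank2, drive, conn, inhibition=0.5):
    """One synchronous update. conn[j] lists the rank-1 neurons
    that rank-2 neuron j receives input from and inhibits."""
    # Rank 2 fires in proportion to the rank-1 activity it sees.
    new_rank2 = [sum(rank1[i] for i in conn[j]) / len(conn[j])
                 for j in range(len(rank2))]
    # Each rank-1 neuron's rate is its drive, decremented by the
    # inhibition fed back from every rank-2 neuron contacting it.
    new_rank1 = []
    for i in range(len(rank1)):
        fb = sum(new_rank2[j] for j in range(len(conn)) if i in conn[j])
        new_rank1.append(max(0.0, drive[i] - inhibition * fb))
    return new_rank1, new_rank2

# Two rank-1 neurons, one rank-2 neuron inhibiting both.
r1, r2 = [0.0, 0.0], [0.0]
drive = [1.0, 0.4]
conn = [[0, 1]]        # rank-2 neuron 0 contacts rank-1 neurons 0 and 1
for _ in range(4):
    r1, r2 = step(r1, r2, drive, conn)
```

Iterating the update shows the decrement in impulse frequency that defines interneuronal inhibition: each rank-1 rate settles below its external drive.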
Recurrent inhibition is found in many regions of vertebrate
nervous systems: sensory systems [1 ], the cerebellum [2], the
hippocampus [3], and perhaps the spinal cord [4, 5]. For this
reason, understanding its function should be of interest to neuro-
scientists.
When a neuroscientist speaks of the function of a structure,
he is usually referring to its specialized actions or purposes.
Within this context, a number of functions have been attributed to
structures containing recurrent inhibition, or its close relative,
lateral inhibition:
1. enhancement of contrast [1] and the detection of edges [6],
2. blockage of low-level inputs [1],
3. amplification of time-varying signals of certain
frequencies [7],
Figure 1.1.1. A schematic of a section of the CA3 sector of the
hippocampus.
4. selective response to signal patterns flowing in one
direction in a two-dimensional space [8],
5. the generation of two periodic signals approximately 180
degrees out of phase with each other from a single input
[9],
6. production of quasi-impulse responses to step inputs [10],
and
7. preferential response to stimuli having certain orientations
[11].
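Task 1, contrast enhancement, can be illustrated with a one-dimensional lateral-inhibition sketch; the weights and the input profile below are invented for illustration and are not drawn from the cited studies.

```python
# Contrast enhancement by lateral inhibition (illustrative weights):
# each unit's output is its input minus a fraction of its neighbors'
# inputs, which exaggerates the step between dim and bright regions.

def lateral_inhibit(x, k=0.3):
    out = []
    for i, v in enumerate(x):
        left = x[i - 1] if i > 0 else v
        right = x[i + 1] if i < len(x) - 1 else v
        out.append(v - k * (left + right))
    return out

signal = [1, 1, 1, 5, 5, 5]          # a luminance step
enhanced = lateral_inhibit(signal)
# The unit just inside the bright edge is pushed above its bright
# neighbors, and the unit just outside is pushed below its dim
# neighbors, sharpening the apparent contrast at the edge.
```

This is the familiar Mach-band effect: the interior of each uniform region is suppressed evenly, while the edge units stand out.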
Mathematical readers usually interpret the function of a
structure to be the list of input-output correspondences produced by
it, where the word "list" presupposes only an algorithm (essentially
an ordered set of instructions) that can generate any input-output
pair of the list. With this interpretation, the neuroscientist's
"functions" can be regarded as a list of vaguely defined algorithms,
each of which indicates how a certain subset of the set of all
possible inputs is related to the set of outputs.
In order to avoid confusion over these two meanings of
function, the following convention will be adopted: if "function" is
meant in the neurophysiological sense, the word "task" will be used
in its place; if "function" is meant in the mathematical sense, the
word "function" will be used. This thesis represents the first
attempt known to the author to investigate the function of recurrent
inhibition.
1.2. Hippocampus Morphology
In general, in order to determine the function of any operational
unit, it is necessary to measure its inputs and outputs
simultaneously. The vertebrate central nervous system does not
lend itself to this approach because the inputs and outputs of its
various subunits are for the most part inaccessible, undecipherable,
and apparently highly variable in the frequency domain. Furthermore,
the subunits themselves change rapidly with time.
An alternative approach to the determination of the function
of the operational unit is to model it mathematically or by computer
simulation (or some blend of both). The hippocampus is well suited to
this approach. In particular, a wealth of both morphological and neuro-
physiological data exists on it (see Kilmer [12], References and
Appendix B), which makes component modeling comparatively easy.
The hippocampus also has a highly stylized connectivity and the CA3
sector clearly exhibits all of the known indicators of recurrent inhibi-
tion (see Figure 1.1.1); thus its circuit organization is easily carica-
tured. Two kinds of inputs (ignoring the commissural fibers) and their
origins, plus two kinds of outputs and their destinations are known to
exist (see Figure 1.2.1). Thus, the inputs and outputs of any model are
defined and their characteristics can be compared with the available
hippocampal electrophysiological data. In summation, the hippocampus
is the neural structure of choice for an investigation by mathematical
model and computer simulation of the function of recurrent inhibition.
1. 3. The Expositional Problem
The following example points up the difficulty of communicating
the principles of circuit actions for a neural net of the complexity
found in Figure 1.1.1 and of concisely describing the net's function.
Consider the neuron net shown in Figure 1.1.1, ignoring all of
the direct pyramid-to-pyramid connections. Assume all activity in the
net is allowed to die out, and then apply an input to the net sufficient to
cause P3 to produce a moderate number of pulses per second (fire at
a moderate rate) and to cause P5 to produce a large number of pulses
per second (fire at a high rate). If this occurs at time t0 (Figure
1.3.1), and the leading edges of both trains of pulses require the same
time to reach the basket cell rank, then both B3 and B4 will be
affected at time t1; suppose B3 responds by firing at a moderate rate
and B4 responds by firing at a high rate. Assuming these pulse trains
require the same time to travel to the pyramidal cell rank, both P3
and P5 will be affected at the time t2; suppose P3 reacts by completely
turning off and P5 reacts by decreasing its output to a moderate rate.
At some later time t3 these changes will be felt by the basket cells;
as a result suppose B3 turns off and B4 decreases its output to a
moderate rate. At time t4 these changes will be felt by the pyramids;
as a result suppose P3 begins firing at a moderate rate again and P5
Figure 1.2.1. Dorsal hippocampus and connections. A schematic of
the dorsal hippocampus and its connections to other structures. Note
that the hippocampus has been partitioned into three sheets based on
morphology; this will be made explicit in subsequent discussion. See
Kilmer [12], from whose figure this has been adapted.
Figure 1.3.1. A phase diagram for the example given in Section 1.3.
All pyramid and basket numbers refer to Figure 1.1.1. (The figure
plots firing rate against time for pyramidal cells P5 and P3 and
basket cells B4 and B3.)
remains unchanged. At time t5 these changes will be felt by the
basket cells; as a result suppose B3 returns to a moderate rate and
B4 remains unchanged. And so on ad nauseam.
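The ping-pong of rate changes in this example can be compressed into a short simulation. The code below merely transcribes the verbal account above, with rates coded 0 = off, 1 = moderate, 2 = high; it is a restatement of the example, not derived dynamics.

```python
# The example's alternating pyramid/basket updates, with firing rates
# coded as 0 = off, 1 = moderate, 2 = high. Each step applies the
# changes described in Section 1.3 to the previous phase.

history = [("t0", {"P3": 1, "P5": 2, "B3": 0, "B4": 0})]
steps = [
    ("t1", {"B3": 1, "B4": 2}),   # baskets respond to pyramid firing
    ("t2", {"P3": 0, "P5": 1}),   # pyramids decremented by baskets
    ("t3", {"B3": 0, "B4": 1}),   # baskets follow the pyramid decrease
    ("t4", {"P3": 1, "P5": 1}),   # P3 recovers, P5 unchanged
    ("t5", {"B3": 1, "B4": 1}),   # B3 returns to moderate
]
for label, change in steps:
    state = dict(history[-1][1])  # copy the previous phase
    state.update(change)
    history.append((label, state))

for label, state in history:
    print(label, state)
```

Even in this four-cell caricature, tracking who changed whom at each phase takes a table; this is exactly the expositional burden the functal-system formalism of Chapter 2 is meant to remove.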
In order to circumvent these nasty expositional problems, this
paper has developed a formal language, called functal system theory,
for systems of the kind exemplified by the hippocampus. The reader
is urged not to become discouraged by what may seem to him to be
excessive formalism in the following chapters; the formalism is
justified by the compactness with which it expresses functions involving
recurrent inhibition.
In addition to modeling the hippocampus as a functal system,
some of the operational principles of the class of neuromime nets to
which the model belongs are also given, along with characteristics of
the output sequences of such nets.
1. 4. What is the Function of the Hippocampus ?
An hypothesis of the primary task of the mammalian hippo-
campus has been proposed by W. Kilmer and T. McLardy [12].
Previously, Kilmer and W. McCulloch [13] proposed that the task of
the mammalian reticular formation is to decide the basic mode of
behavior of an animal. A mode might be to fight, take flight, groom,
mate, or eat. It is plausible to suggest that another structure exists
which takes modal and current sensory information and generates
commands for acts within modes. For instance, if the mode decision
is to fight, another structure may select the tactics or style to be
used. Kilmer and McLardy believe that the hippocampus is part of
this structure, at least during the animal's behavior-formative period.
Functal system theory is used in this paper to provide an
interpretation of the hippocampus's function which supports this
hypothesis. In a few words, the interpreted function is trainable
recurrent inhibition.
CHAPTER 2
FUNCTAL SYSTEMS
2. 1. An Informal Description of the Functal System
The hippocampus and its associated structures appear to be
related to a theoretical structure called a functal system. Informal
definitions of each of the components and of the overall operation of
the system are as follows (see Figure 2.1.1).
1. The INPUT GENERATOR interprets the present environ-
ment according to its built-in predisposition and produces an input
from a finite set of possible inputs.
2. The INPUT BUFFER, which is under the control of the
TRAINER, performs a combinational circuit transformation on the
input and produces the input to the functal net.
3. The TRAINABLE FUNCTAL NET has these
characteristics:
a. It is an array of three types of elements: functals,
function generators, and delays.
b. Each functal is capable of realizing any one of many
functions. Each function being realized is under the control
of the trainer.
c. A fixed connectivity exists between the elements of the
net (the rules defining the connectivity may involve probability
density functions).
The functal net generates a finite sequence of outputs for a given single
input.
4. The TARGET TABLE GENERATOR has observed the
environment by this time and has constructed a target table.
5. The TARGET TABLE contains the functions that each of
the functals is required to generate. All communication with the
target table is controlled by the READ/WRITE HEAD AND CONTROLLER.
Figure 2.1.1. The functal system surrounded by an environment.
6. The TRAINER compares the desired output with the output
computed by the net and corrects any functals not generating the
required output.
7. The OUTPUT BUFFER is combinational circuitry under
the control of the trainer.
8. The EFFECTOR DEVICES use the output from the output
buffer to allow the entire system to interact with the environment.
It may also be true that this output affects the environment directly.
A formal discussion of functal system theory is presented in
the remainder of this chapter. Throughout the discussion it will be
assumed that the functal net and all its associated structures and
algorithms operate synchronously in discrete time.
2. 2. The Functal
The intuitive concept of a functal is that it is a mechanism
(that is, an algorithm or physical device) which can realize any one
of a finite number (greater than 1) of different functions. If the
domains and ranges of the functions are assumed to be finite sets, and
if time is assumed discrete, then:
Definition 2.2.1
A functal can be represented over all time by
(μ, γ, ℱ, Σ)
and at any time t by
σ(t) = F_i(M(t), Z(t), t)
where
σ(t) ∈ Σ, F_i(·) ∈ ℱ, Z(t) ∈ γ, and M(t) ∈ μ.
Necessary supporting definitions are:
Definition 2.2.2
μ, a finite set, is the controlled input set of the functal. The
elements of μ are called controlled inputs. (μ is under direct
external control.)
Definition 2.2.3
γ = {Z(t) = Z₁(t) Z₂(t) ... : Z_l(t) is an element of the
internal input set}
is the internal input sequence set. The elements of γ are called
internal inputs. (γ takes into account possible inputs to the functal
that cannot be directly controlled.)
Definition 2.2.4
ℱ = {F_i : μ × γ → Σ}
is the function set. (The function set contains the functions the functal
can realize.)
Definition 2.2.5
Σ = {σ(t) = H¹(t) H²(t) ... : H^l(t) is an element of the
finite output set ℋ}
is the set of output sequences.
Definition 2.2.6
Any member of Σ, σ(t), is called an output sequence.
Definition 2.2.7
Any vector element
H^i(t) = (h_1^i(t), h_2^i(t), ..., h_n^i(t))^T
of an output sequence is called an output sequence element (or simply
an output).
Definition 2.2.8
A component h_j^i(t) of an output vector is called an output
element.
Definition 2.2.9
The sequence
σ_j(t) = h_j^1(t) h_j^2(t) ...
is called an output sequence component.
The specification of both controlled and internal inputs
emphasizes a basic property of functals: only controlled inputs to a
functal are provided by the input generator; internal inputs are
generated within the functal net. In particular, feedback is one kind
of internal input. If it is present the functal can generate a sequence,
even when there is only a single input from the input generator.
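Definition 2.2.1 can be paraphrased in code. The class below is one hypothetical reading of the quadruple (μ, γ, ℱ, Σ), with the explicit time argument dropped for brevity; the candidate functions are invented examples, not functions from the thesis.

```python
# A functal as a selectable function: it holds a finite function set
# (Definition 2.2.4) and, under external control, realizes one member
# at a time. Inputs split into a controlled input M(t) and an
# internal input Z(t); the time argument is omitted for simplicity.

class Functal:
    def __init__(self, function_set):
        self.function_set = list(function_set)  # the set F
        self.index = 0                          # which F_i is realized

    def set_function(self, i):
        """External control: select the function the functal realizes."""
        self.index = i

    def __call__(self, m, z):
        """sigma(t) = F_i(M(t), Z(t)) for one time step."""
        return self.function_set[self.index](m, z)

# Two hypothetical candidate functions over binary inputs:
f_and = lambda m, z: m & z
f_or = lambda m, z: m | z
u = Functal([f_and, f_or])
assert u(1, 0) == 0      # currently realizing AND
u.set_function(1)
assert u(1, 0) == 1      # now realizing OR
```

The `set_function` call stands in for the trainer's control channel introduced in Definition 2.3.3; feedback of the functal's own delayed output as `z` is what lets a single controlled input produce a whole output sequence.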
2.3. The Functal Net
Definition 2.3.1
A functal net consists of functals plus unit delay elements at
the output of each functal, function generators plus unit delay elements
at the output of each generator, and a connectivity scheme relating
these.
The delay elements are included for two reasons. First, the
outputs of some of the elements in a functal net will be in a feedback
configuration. The standard way to analyze such nets is to insert
unit delays. Second, all physically realizable functal nets will have
delays in lines and elements.
The concept of state plays a fundamental role in understanding
the behavior of functals:
Definition 2.3.2
The outputs of the delay elements can be ordered in a vector
called the state vector Q ∈ 𝒬, the state vector set. The ordering will
be Q = (X, Z), where X is the output of the delay elements associated
with the functals and Z is the output of the delay elements associated
with the function generators. X is called the functal state vector and
Z is called the generator state vector.
If the functal net is considered to be a single functal of an
even larger net, then the elements H of the output sequence set will,
by convention, have the form H = (H_f, H_g), where H_f are those
components of X considered outputs and H_g are those components
of Z considered outputs.
A special kind of functal net is the trainable functal net:
Definition 2. 3. 3
A trainable functal net is a functal net whose function is under
the control of a defined structure called the trainer.
The trainer will be discussed in more detail after the character
of the input-output relationship of a general functal net is revealed.
2.4. The Concepts of State and Output Foundations
There is a graphic viewpoint which can promote some initial
understanding of the design and analysis problems for functal nets.
Assume that at some initial time t0 the vector of functions currently
realized by the functals is F(t0), the controlled input vector is I(t0),
the state vector is Q(t0), and the output vector is H(t0).
Definition 2.4.1
The quadruple
L(t) = ⟨F(t), Q(t), I(t), H(t)⟩
is called the locus of the functal net at time t.
Definition 2.4.2
The locus L(t0) is called the initial locus.
Consider each and every combination of F and I. Within each
combination, place the net in every possible state in the state set.
For each state allow the net to compute for one period and record the
new state. Construct a standard state table or state diagram from
these data. The resulting representation is given a special name.
Definition 2.4.3
A state level is the state structure associated with any arbitrary
but specified combination of F and I. The notation is l(F, I).
Definition 2. 4. 4
The set of all state levels is called the state foundation of the
functal net. .
The fact that the output vector H is a subvector of the state
vector Q can be used to construct an equivalent set of definitions for
the output.
Definition 2. 4. 5
An output level lo(F, I) is the output structure associated with
a combination of any F and any I.
Definition 2. 4. 6
An output foundation is the set of all possible output levels for
a given functal net.
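The construction described above — fix F and I, place the net in every state, run one period, and record the successor — amounts to building a transition table. The two-element net below is a made-up example, not a net from the thesis.

```python
# Building a state level l(F, I): for a fixed function F and input I,
# record every state's one-period successor. The toy net has one
# functal and one delay pair, giving two-bit states Q = (x, z).

from itertools import product

def next_state(q, F, I):
    """One synchronous period of a hypothetical 2-element net."""
    x, z = q
    return (F(I, z), x)   # functal output enters its delay; z lags x

def state_level(F, I, n_bits=2):
    return {q: next_state(q, F, I)
            for q in product((0, 1), repeat=n_bits)}

F = lambda i, z: i ^ z          # an example functal function
level = state_level(F, I=1)
# e.g. level[(0, 0)] == (1, 0): from state (0,0) the net moves to (1,0)
```

For this F and I the table is a single four-state cycle; changing F or I yields a different level, and the collection over all (F, I) pairs is exactly the state foundation of Definition 2.4.4.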
2.5. Properties of Output and State Levels
2.5.1. Properties of State Levels
Kauffman [14] has demonstrated the following property for
nets of arbitrarily connected switching elements having fixed inputs:
Definition 2.5.1
A state with the property that the net remains in the state once
it is entered is called an equilibrium state.
Definition 2.5.2
A subsequence of states that is continuously repeated is
called a state cycle.
Definition 2.5.3
A subsequence of states with the property that it eventually
leads to a state cycle is called a state run-in.
Property 2.5.1
Each state of a level has one and only one of the following
properties:
1. It is in a state run-in.
2. It is an equilibrium state.
3. It is in a state cycle.
It is clear that this property is true for function generators
of arbitrary but finite domains and ranges.
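Property 2.5.1 can be verified mechanically for any finite transition table: iterate the next-state map from each state until a state repeats; the repeating tail is the cycle (an equilibrium if its length is 1) and the prefix is a run-in. The four-state table below is hypothetical.

```python
# Classify every state of a finite next-state map as a run-in state,
# an equilibrium state, or a cycle state (Property 2.5.1).
# The transition table is a made-up four-state example.

def classify(level):
    """level maps each state to its one-period successor."""
    kinds = {}
    for start in level:
        # Follow transitions until some state repeats; the portion
        # from that state onward is the cycle, the prefix a run-in.
        seen, q = [], start
        while q not in seen:
            seen.append(q)
            q = level[q]
        cycle = seen[seen.index(q):]
        for s in seen:
            if s in cycle:
                kinds[s] = "equilibrium" if len(cycle) == 1 else "cycle"
            else:
                kinds[s] = "run-in"
    return kinds

level = {"A": "B", "B": "C", "C": "B", "D": "D"}   # hypothetical level
kinds = classify(level)
# A lies on a run-in, B and C form a cycle, D is an equilibrium state.
```

Because the state set is finite, the while-loop must terminate, which is exactly why the three cases of Property 2.5.1 are exhaustive and mutually exclusive.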
A useful relation between state run-ins and state cycles is the
following.
Definition 2. 5. 4
Within a state level, all states belonging to run-ins to the
same cycle plus all the states belonging to the cycle form a set of
states called the state cycle complex.
A typical state level is shown in Figure 2. 5. 1.
One state in each state cycle complex will assume particular
importance:
Definition 2. 5. 5
Any state in a state cycle complex may be designated as a
start state for the cycle.
Property 2.5.2
There can be only one start state per state cycle complex.
Property 2.5.3
A given start state will lead to a unique state cycle.
Definition 2. 5. 6
A sequence of states (run-in plus cycle) in state level 1(F, I)
with start state qo will be denoted by C(qo, F, I), and will be
called a state sequence.
An important property of any state sequence is:
Figure 2.5.1. A typical state level structure. The state space is
three-dimensional, with each state being binary valued. (Labeled
features include start states, run-ins, a cycle, and an equilibrium
state.)
Property 2.5.4
The second occurrence of any state in the sequence C(q, F, I)
indicates the completion of the state cycle and the beginning of a
second pass through the cycle.
2.5.2. Properties of Output Levels
Since the output vector H is a subvector of the state vector Q,
for every state cycle there is a corresponding output cycle:
Definition 2.5.7
A sequence of outputs which continuously repeats itself is
called an output cycle.
There will also be sequences of outputs corresponding to the
state run-ins:
Definition 2.5.8
A sequence of outputs which eventually leads to an output cycle
is called an output run-in.
Corresponding to the equilibrium state:
Definition 2.5.9
An output which continuously repeats itself is called an
equilibrium output.
Finally, the definition corresponding to the state cycle complex
is:
Definition 2.5.10
Within an output level, all outputs belonging to run-ins to the
same cycle plus all the outputs belonging to the cycle form a set of
outputs called the output cycle complex.
Definition 2.5.11
An output cycle plus output run-in in level lo(F, I) with initial
output H¹ is an output sequence of the net and is signified by
σ(H¹, F, I).
Of course the similarity between the output sequence of the
functal definition and the above definition is no accident. Indeed,
σ(H¹, F, I) = H¹H²H³ ... = F(I, Z).
Now, consider a specific state level l(F, I) and the following
two state cycles:
(The cycles are listed in the form

q1(t1) q1(t1+d) q1(t1+2d) ...
q2(t1) q2(t1+d) q2(t1+2d) ...
...
qn(t1) qn(t1+d) qn(t1+2d) ... )

cycle number 1     cycle number 2
0 0 0 0 0          0 0 0 0
1 0 0 1 1          1 0 1 1
1 1 0 0 1          1 1 0 1
0 0 0 0 0          1 1 1 1

If q1 and q2 are defined as the outputs, then the sequences

0 0 0 0 0          0 0 0 0
1 0 0 1 1    and   1 0 1 1

are the output cycles. Note that the (0, 1) and (0, 0) outputs are in
more than one output cycle, and that the second occurrence of (0, 0)
in cycle number 1 did not signal the end of the cycle. These
observations can be generalized in the following properties:
Property 2. 5. 5
Any single output may be in more than one output cycle complex.
Property 2. 5. 6
It is not possible to determine the end of an output cycle
by comparing the current output with previous outputs.
These two properties play a significant role in determining
the complexity of the functal system trainer.
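Properties 2. 5. 5 and 2. 5. 6 can be checked directly on cycle number 1 of the example above. The following is a minimal sketch, not from the thesis (the function name and the tuple encoding of states are assumptions): a repeated full state marks the true cycle boundary, while the first repeated output appears too early.

```python
# A sketch of Properties 2.5.5 and 2.5.6: the recurrence of a full state
# marks the end of a state cycle, but the recurrence of an output (a mere
# subvector of the state) need not mark the end of an output cycle.

def first_repeat_of_start(seq):
    """Index at which the initial element of seq recurs, or None."""
    for t in range(1, len(seq)):
        if seq[t] == seq[0]:
            return t
    return None

# Cycle number 1 from the example, one state (q1, q2, q3, q4) per period.
states = [(0, 1, 1, 0), (0, 0, 1, 0), (0, 0, 0, 0), (0, 1, 0, 0),
          (0, 1, 1, 0)]                     # the start state recurs here
outputs = [(q[0], q[1]) for q in states]    # q1 and q2 are the outputs

print(first_repeat_of_start(states))   # 4: the true cycle length
print(first_repeat_of_start(outputs))  # 3: the output (0, 1) repeats early
```

The premature repeat at index 3 is exactly the situation described in Property 2. 5. 6: no comparison of the current output against previous outputs can distinguish it from a genuine cycle boundary.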
2. 6. The Target Table Generator
A target table can be generated whenever it is both advan-
tageous to do so and conditions permit. This generally requires the
target table generator to know what functions can be trained from
each function the net can realize. In other words, the target table
generator should have available as a reference the following class of
sets:
Definition 2. 6. 1
C = { Ui : Ui is a convergence set }
is the convergence class.
Definition 2. 6. 2
Ui = { Fk ∈ T : the functal realizing the
function Fi ∈ T can be trained to realize
the function Fk }
is the convergence set of the function Fi.
Implicit in these definitions is the requirement that the target
table generator must also have knowledge of the set T . This
requirement should not be taken lightly. In the real system it implies
that the target table generator and the functal net must be more than
just casually related: they must have evolved in a way that allows
each to know what it can expect from the other.
This kind of relationship could come about very naturally
in a neural system if the target table generator structure
grew the functal net to perform a delegated task.
On the other hand, in the design of the artificial system, the design
of the target table generator and the functal net will have to proceed
in parallel.
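As a toy illustration of Definitions 2. 6. 1 and 2. 6. 2, an artificial target table generator might hold the convergence class as a simple lookup table. All names and the particular sets below are hypothetical, not drawn from the thesis.

```python
# A hypothetical convergence class: for each function the net can realize,
# the convergence set of functions a functal realizing it can be trained into.
convergence_class = {
    "F1": {"F1", "F2"},   # U1: from F1 the net can be trained to F1 or F2
    "F2": {"F2", "F3"},   # U2
    "F3": {"F3"},         # U3
}

def can_generate_target(current, target):
    """The generator emits a target table for `target` only when `target`
    lies in the convergence set of the currently realized function."""
    return target in convergence_class.get(current, set())

print(can_generate_target("F1", "F2"))  # True
print(can_generate_target("F1", "F3"))  # False
```

The point of the sketch is only that both structures must agree on the set T and on the sets Ui; the table is the "reference" the text requires the generator to have available.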
2. 7. The Target Table
The target table contains a list of output sequences, one
sequence for every possible input to the functal net.
Definition 2. 7. 1
When contained in a target table, an output sequence will be
called a target sequence, with the notation σ*(t).
As with the output sequence, the target sequence is a vector
sequence. The following definitions locate the various parts of the
target sequence.
Definition 2. 7. 2
A target sequence element Hj*(t) corresponds to the output
sequence element of Definition 2. 2. 7.
Definition 2. 7. 3
A target sequence component element, or atom, hj*(t),
corresponds to the output element of Definition 2. 2. 8.
Definition 2. 7. 4
A target sequence component σj*(t) corresponds to the output
sequence component of Definition 2. 2. 9.
Definition 2. 7. 5
The set of target sequence components for a single output
terminal and over all possible input values to the net is called a
functal section of the target table.
In order to keep the target table as compact as possible, the
length of a target sequence is limited to the maximum length of any
component's run-in plus cycle. Along with this convention, a
modified regular expression notation is used when explicitly listing
a target sequence. This notation is best defined by example.
Suppose the functal net has four outputs and the components of the
target sequence for some input I are:
σ1*(I) = 0123456456456 . . .
σ2*(I) = 0000000 . . .
σ3*(I) = 234523452345 . . .
σ4*(I) = 8722222222222 . . .
Then the notation for these is:
σ1*(I) = 0123456(456)*
σ2*(I) = 00*
σ3*(I) = 2345(2345)*
σ4*(I) = 8722*
As one target sequence, the notation is:

          0123   456456456456   (456456456456)*
σ*(I) =   0000   000000000000   (000000000000)*
          2345   234523452345   (234523452345)*
          8722   222222222222   (222222222222)*
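The notation can be expanded mechanically. The sketch below is an assumption-laden reading of the example (the function name is invented, and the semantics are inferred from the listing: a parenthesized group followed by * repeats indefinitely, while a bare trailing atom followed by * repeats that single atom).

```python
# A small sketch of the modified regular-expression notation for target
# sequence components, expanded out to a fixed number of atoms.

def expand(component, length):
    """Expand a component such as '0123456(456)*' to `length` atoms."""
    if component.endswith(")*"):
        head, cycle = component[:-2].rsplit("(", 1)
    elif component.endswith("*"):
        head, cycle = component[:-2], component[-2]
    else:
        head, cycle = component, ""
    atoms = list(head)
    while cycle and len(atoms) < length:
        atoms.extend(cycle)
    return "".join(atoms[:length])

print(expand("0123456(456)*", 13))  # 0123456456456
print(expand("00*", 7))             # 0000000
print(expand("8722*", 13))          # 8722222222222
```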
2. 8. A General Training Structure and Algorithm
2. 8. 1. Movement Within Foundations Under Input and Function Change
In this section a general form for a trainer structure and
algorithm is proposed. First, though, it will be necessary to
describe what happens in the foundations when there are changes
either in the inputs to a net or in the functions of a net.
Assume that the initial locus of the net is
L(t0) = < F(t0), I(t0), Q(t0), H(t0) > .
Therefore, the net is in:
a. state level l(F(t0), I(t0) ),
b. state sequence C(Q(t0), F(t0), I(t0) ),
c. output level lo(F(t0), I(t0) ),
d. output sequence σ(H(t0), F(t0), I(t0) ).
Assuming I(t0) and F(t0) are not changed, (b) and (d) define the
future of the net.
Now suppose the input is changed and is effective at time t1.
At this time the net is in state Q(t1) and this becomes a new start
state. This means that the net is in:
a. state level l(F(t0), I(t1) ),
b. state sequence C(Q(t1), F(t0), I(t1) ),
c. output level lo(F(t0), I(t1) ),
d. output sequence σ(H(t1), F(t0), I(t1) ).
Finally, suppose the function that the net is realizing is
changed and becomes effective at time t2. At this time the net is in
state Q(t2) and this becomes a new start state. Therefore, the net
is in:
a. state level l(F(t2), I(t1) ),
b. state sequence C(Q(t2), F(t2), I(t1) ),
c. output level lo(F(t2), I(t1) ),
d. output sequence σ(H(t2), F(t2), I(t1) ).

Ai(t) · Mi(t) − Bi(t) · Zi(t) ≥ T2   iff yi(t) = 2
T1 ≤ Ai(t) · Mi(t) − Bi(t) · Zi(t) < T2   iff yi(t) = 1
Ai(t) · Mi(t) − Bi(t) · Zi(t) < T1   iff yi(t) = 0
where
mij(t) = 1 or 2 implies aij(t) mij(t) = aij(t).
The vectors and constants in the discriminant function have been
given names:
Definition 3. 4. 2
The vector Ai ∈ R^m is the vector of mossy fiber (mf) weights
for PCLU i.
Definition 3. 4. 3
The vector Bi ∈ R^n is the vector of feedback weights for
PCLU i.
Definition 3. 4. 4
The vector Wi = (Ai, Bi) is the weight vector for PCLU i.
Definition 3. 4. 5
The vector Mi is the set of mossy fiber (mf) inputs to
PCLU i. (Mi is a row of the matrix M. )
Definition 3. 4. 6
The vector Zi is the set of feedback inputs to PCLU i.
Definition 3. 4. 7
The constants T1 and T2 ∈ R are the lower and upper
thresholds, respectively.
According to the discriminant function, each mf input is
multiplied by a corresponding mf weight and each feedback input is
multiplied by a corresponding feedback weight. (From Figure 3. 4. 1
note that a two is equivalent to a one in this multiplication. ) The total
PCLU contribution is subtracted from the total mossy fiber contribu-
tion (the inhibition effect) and the result is compared with the two
thresholds.

[Figure: schematic of PCLU i, with mossy fiber input Mi(t), feedback
input Zi(t), septal input si(t), and output yi(t)]
(a) Schematic

yi(t) = F_T1,T2 [ Ai(t) · Mi(t) − Bi(t) · Zi(t) ]
where
mij(t) = 1 or 2 implies aij(t) mij(t) = aij(t).
(b) Discriminant function

si(t)   mij(t) ∈   Ai(t+d)            Bi(t+d)
0       {0, 1}     Ai(t)              Bi(t)
0       {0, 2}     Ai(t)              Bi(t)
1       {0, 1}     Ai(t)              Bi(t) + δZi(t)
1       {0, 2}     Ai(t) + ΔMi(t)     Bi(t)
(c) Weight adjustment table

Figure 3. 4. 1. The pyramidal cell logic unit.
The other major part of the PCLU is the training algorithm,
which adjusts the weight vector if necessary. The mode of this
adjustment is determined by the septal fiber input.
Definition 3. 4. 8
The scalar function si(t) ∈ {0, 1} is the septal input to PCLU i.
Figure 3. 4. 1 summarizes the algorithm. Expressed verbally:
a. If si(t) = 1 and the mf input has components from the
set {0, 2}, then every component of the mf weight vector Ai having a
nonzero mf input is increased by some fixed amount Δ.
b. If si(t) = 0, then no change is made in any weight vector.
c. If si(t) = 1 and the mf input has components from the
set {0, 1}, then every component of the feedback vector Bi having
a nonzero feedback input is increased by some fixed amount δ.
It is important to note that the connectivity of the net requires
the feedback inputs Zi to be internal inputs (see Definition 2. 2. 3).
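The discriminant function and the weight adjustment table of Figure 3. 4. 1 can be sketched as follows. This is an illustrative sketch, not the thesis's own notation: the vectors are plain lists, the function names are invented, and the test `any(m == 2 ...)` is one reading of the condition that the mf inputs are drawn from {0, 2}.

```python
# Sketch of a PCLU. Inputs of 1 or 2 pass the corresponding weight through
# unchanged (a two is equivalent to a one in the products).

def pclu_output(A, M, B, Z, T1, T2):
    """Three-valued output: the feedback (inhibition) contribution is
    subtracted from the mf contribution and compared with the lower and
    upper thresholds T1 and T2."""
    d = sum(a for a, m in zip(A, M) if m) - sum(b for b, z in zip(B, Z) if z)
    return 2 if d >= T2 else 1 if d >= T1 else 0

def pclu_train(A, B, M, Z, s, delta_mf, delta_fb):
    """Weight adjustment table: with septal input s = 1, mf weights grow by
    delta_mf on nonzero mf inputs drawn from {0, 2}; otherwise feedback
    weights grow by delta_fb on nonzero feedback inputs. s = 0 changes
    nothing."""
    if s == 1 and any(m == 2 for m in M):
        A = [a + delta_mf if m else a for a, m in zip(A, M)]
    elif s == 1:
        B = [b + delta_fb if z else b for b, z in zip(B, Z)]
    return A, B

print(pclu_output(A=[1.0, 1.0], M=[2, 0], B=[0.5], Z=[1], T1=0.4, T2=0.9))
# discriminant = 1.0 - 0.5 = 0.5, between T1 and T2, so the output is 1
```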
3. 4. 3. The Basket Cell Model: The Basket Cell Logic Unit
The well-known function generator called the threshold logic
unit is used as the basket cell model in the net. Renamed the
basket cell logic unit, or BCLU, the representation is shown in
Figure 3. 4. 2. The definitions of interest are:
Definition 3. 4. 9
[ Vi · Hi(t) ]_T = zi(t)
is the discriminant function of BCLU i, where
Vi · Hi(t) ≥ T iff zi(t) = 1
Vi · Hi(t) < T iff zi(t) = 0
and where
vij hij(t) = vij iff hij(t) = 1 or 2
vij hij(t) = 0 iff hij(t) = 0 or vij = 0.
Definition 3. 4. 10
The vector Vi is the vector of weights for BCLU i.
Definition 3. 4. 11
The positive integer T is the BCLU threshold.
[Figure: schematic of BCLU i, with input Hi(t) and output zi(t)]
(a) Schematic

zi(t) = [ Vi · Hi(t) ]_T
where
vij hij(t) = vij iff hij(t) = 1 or 2
vij hij(t) = 0 otherwise
(b) Discriminant function

Figure 3. 4. 2. The basket cell logic unit (BCLU).
Definition 3. 4. 12
The vector Hi(t), with components from the set {0, 1, 2},
is the vector of inputs to BCLU i.
3. 4. 4. The Connectivity and Delays
The pattern of connectivity in the CA3 sector net (Figure 3. 4. 3)
is an extreme simplification of the connection scheme of the natural
system. The mossy fiber input feeds a rank of PCLUs. At the
output of each PCLU there is a unit delay; the output of these delays
is used as the input to a rank of BCLUs and also as the output of the
net. The output of each of the BCLUs first passes through a unit
delay and then feeds the PCLU rank.
There are two rules that might be used when defining a
specific connectivity. The first is suggested by the CA3 sector
morphology: a PCLU should feed the BCLUs in only a limited surround
of the PCLU, and a BCLU should feed PCLUs over an area several
times as large. The second rule is suggested by the behavior of the
model (as developed in the next chapter): a direct path should exist
from each PCLU i to at least one BCLU and back to PCLU i. If it
is assumed that only one BCLU per PCLU is connected in this
fashion, then:
Definition 3. 4. 13
The BCLU in the direct PCLU i - BCLU - PCLU i path is
called the special BCLU of PCLU i.
As will be seen in subsequent chapters, the extent of both the
trainer and the target table generator's knowledge of a functal net's
connection scheme plays an important part in determining the
operating characteristics of those structures (for example, their
versatility when changing the net's function). In order to emphasize
this point, the connectivity of the CA3 sector net is specified only to
the extent of its trainer and target table generator's knowledge. That
is, it is assumed reasonable for both structures to know about the
special BCLUs; it is assumed unreasonable to suppose that they know
the first connectivity rule. Therefore, the special BCLU connectivity
rule is the only one assumed for the CA3 sector net.
36
551(t)
Y’ (t) (t- l) = 11 (hi
1v11(t) ‘ l -J.‘\\e 3,1 l e-— l11(t)
532(t)
Mz(t)
O
531(t)
ylu) yltt-l) = hlmi
Iv11(t 'T, j—L,/(' ,. ==. 111(t)
2' (t)
21“) I
zI(t) ‘ 9 HI“)
Figure 3. 4. 3. A general form of connectivity
for the CA3 sector net.
37
A more complex and seemingly more realistic connection
scheme for a CA3 sector model is presented in the Appendix. It is
suggested that part of the reason for the complexity of the connec-
tivity in the natural hippocampal system is to overcome the restraints
placed on the natural trainer's activities because of its lack of
knowledge of the hippocampal structure. This observation appears to
present a paradox, but perhaps the explanation is that, after a certain
critical level of connection complexity, more complexity tends to elim-
inate the need for detailed knowledge on the part of the trainer and
target table generator; they can deal instead with generalities.
Finally, a comment on the delays. It may be that there has
been a significant oversimplification in the placement and magnitudes
of the model's delays. Unfortunately, a more complex arrangement
would remove the behavior of the model from the realm of the author's
existing intuition.
3. 4. 5. The Operational Algorithm
In order to discuss computational properties of the net it is
necessary to be specific about the order in which the computations
occur. This order is:
1. Advance the state.
2. Compute the new outputs of the PCLUs.
3. Compute the new weight vectors of the PCLUs.
4. Compute the new outputs of the BCLUs.
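A single time period of the net might be sketched as below. This is a schematic sketch only: all-to-all connectivity is assumed for brevity, the weight adjustment of step 3 is omitted, and all names are invented.

```python
def net_step(H, Z, M, A, B, V, T1, T2, T):
    # 1. Advance the state: the delayed outputs (H, Z) computed last
    #    period serve as this period's state.
    # 2. Compute the new outputs of the PCLUs.
    Y = []
    for i in range(len(A)):
        d = sum(a for a, m in zip(A[i], M[i]) if m) \
            - sum(b for b, z in zip(B[i], Z) if z)
        Y.append(2 if d >= T2 else 1 if d >= T1 else 0)
    # 3. (The weight adjustment of the PCLUs is omitted in this sketch.)
    # 4. Compute the new outputs of the BCLUs from the delayed PCLU outputs.
    Z_new = [1 if sum(v for v, h in zip(V[j], H) if h) >= T else 0
             for j in range(len(V))]
    return Y, Z_new  # after the unit delays, the next state is (Y, Z_new)

# Two PCLUs, one BCLU: only the first PCLU receives a mossy fiber input.
print(net_step(H=[0, 0], Z=[0], M=[[2], [0]], A=[[1.0], [1.0]],
               B=[[0.6], [0.6]], V=[[1.0, 1.0]], T1=0.5, T2=1.5, T=0.5))
# ([1, 0], [0])
```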
3. 5. The Output Buffer
The computation of the output buffer obeys the following truth
table:

hi(t)   φi(t)
0       0
1       0
2       1

In addition, if the output buffer control input is 0, then all output
buffer outputs are zero.
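A direct transcription of this truth table as a sketch (the function name and the list encoding are assumptions):

```python
# Output buffer: components of 2 map to 1, components of 0 or 1 map to 0,
# and a zero control input forces all outputs to zero.

def output_buffer(h, control=1):
    if control == 0:
        return [0] * len(h)
    return [1 if x == 2 else 0 for x in h]

print(output_buffer([0, 1, 2]))     # [0, 0, 1]
print(output_buffer([0, 1, 2], 0))  # [0, 0, 0]
```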
Presumably the natural structure which performs this function
is the CA1 sector. This should not, however, be taken as the full
extent of the functional sophistication of this area.
CHAPTER 4
PROPERTIES OF THE CA3 SECTOR NET
4. 1. Introduction
The properties of the CA3 sector net presented in this chapter
are important in two ways. First, the trainer and target table
generator designs depend, to a large extent, on the computational
properties of the functal net they control. Second, the properties
constitute an analysis of the function of recurrent inhibition as it
occurs in the net.
A necessary preliminary assumption concerns the major role
played by the special BCLU in net computation.
Assumption 4. 1. 1.
The output of the special BCLU of PCLU i is assumed
to be nonzero whenever the input to the BCLU from PCLU i is
nonzero.
The most basic CA3 sector net property is the following.
Property 4. 1. 1.
Two M = 0 inputs place the hippocampus net in the zero state.
Proof: Suppose the net is in some state Q = (H, Z) and has a PCLU
output Y and a BCLU output Za. Furthermore, assume that the
mossy fiber input matrix M = 0. The operational algorithm of the
net outlined in Section 3. 4. 5 says that the next time period will see
the state of the net become Q = (Y, Za), the PCLU output become 0,
and the BCLU output become Zb. If the mf input remains zero for
the next time period, then the state of the net, the PCLU output, and
the BCLU output will become Q = (0, Zb), 0, and 0 respectively.
The state of the net for the next time period will be Q = (0, 0). QED
Therefore, any time a zero state is desired it is only necessary to
apply a zero input for at least two time periods.
The structure of the hippocampus and intuition made it difficult
to justify the existence of the start state table, the start state
table generator, and the decision components of the general functal
system.
By giving the change of state controller the capability to apply a zero
input to the net (through the input buffer) and making the following
assumption, it was possible to entirely eliminate these troublesome
components from the hippocampus system model.
Assumption 4. 1. 2.
The only start state of a target sequence will be the zero state.
Based on this assumption, the following orthodox trainer and
target table generator operating algorithm was defined.
Algorithm 4. 1. 1.
1. Assume the function realized by the net is also contained
in the target table. Change the table to a new function which is
contained in the convergence set of the original function.
2. Place the net in a zero state by applying two successive
zero inputs.
3. When a conflict between the computed and the desired output
of any one PCLU is detected, modify the net by increasing the mf
weights if the generated output is lower in magnitude than the desired
output, or the feedback weights if the generated output is higher in
magnitude than the desired output of the PCLU (using the Weight
Adjustment Table of Figure 3. 4. 1).
4. Reset the entire net to a zero state. Recompute the output
sequence and go to Step 3 if an error is detected.
5. Training is successful when this output sequence and all
others are generated correctly.
In addition, the original method of evaluating and improving the
performance of this algorithm was defined to be: Maximize the
intersection between each convergence set of the system and the
function set of the net.
As the following example illustrates, both of these definitions
proved to be unworkable because the convergence class of the net
cannot be defined.
Example 4. 1. 1.
Let σk = 0 2 2 1 1 (1 1)* be an output sequence component
of the function being realized by the PCLU of Figure 4. 1. 1. Suppose
the corresponding target sequence component is changed to
[Figure: a PCLU j and its special BCLU, with inputs M, Z, and Xj]

Figure 4. 1. 1. A PCLU and its special BCLU.
M is the mossy fiber input vector, Z is the feedback input vector
from BCLUs other than the special one, and Xj is the input to the
BCLU from PCLUs other than PCLU j.
σk* = 0 2 2 0 0 (0 0)*. According to Algorithm 4. 1. 1 the feedback
weights will be increased when the error at the h2 pair position is
detected. (The general sequence notation is σk* = 0 h1 h1 h2 h2 h3
h3 . . . ) Since the special BCLU existence is assured by Definition
3. 4. 13 and its output is assured of being nonzero whenever the input
from its PCLU is nonzero, the training will succeed at least for the
h2 pair. Successful training for the next pair, h3, cannot be
guaranteed, however, since there is no assurance that either one
of the following conditions is true:
1. The Zk feedback inputs from BCLUs other than the special
BCLU are nonzero.
2. The input to the special BCLU from other PCLUs is sufficient
to cause the special BCLU output to be nonzero, even though the input
from PCLU k is zero.
In conclusion, it is not possible to say whether or not any function
containing σk* is in the convergence set of any function containing σk.
The definition of a new algorithm and method of evaluation was
based on two properties of the net discovered while evaluating
Algorithm 4. 1. 1. The first is implied by the previous example: If
an atom of a target table function is changed, then the atoms and
elements following it in the sequence cannot be predicted. (It is
important to note that this statement does not imply anything about the
atoms preceding the altered atom. )
The second property involved a consideration of whether or not
successful training can be guaranteed if only one atom of a target table
function is allowed to change. The following example demonstrates
that in some cases it would be necessary to make multiple atom
changes in order to insure successful training as defined in Algorithm
4. 1. 1, Step 5.
Example 4. 1. 2.
Consider a net in state Q = (H, Z) = 0. The disc. fcn. of a
single PCLU k is Ak · Mk − Bk · Zk, which reduces to Ak · Mk for
h1. A well-known property of disc. fcns. of this form is: If
Mk2 ⊇ Mk1 and h1(Mk1) = a, then h1(Mk2) ≥ a. This implies that if
h1*(Mk1) = a, then h1*(Mk2) ≥ a. Therefore, changing one atom in the
first element of a target sequence will generally require changing atoms
of several other target sequences.
The problem of defining multiple atom changes is equivalent to
the function set definition problem and the latter can be solved in two
steps:
1. Develop an algorithm for generating target sequences.
2. Develop an algorithm for constructing the entire target
table from target sequences.
It has not been possible to develop the second algorithm. Even if one
was developed, however, it is unlikely that an algorithm of such
apparent complexity could be imitated by a nervous system. This is
especially true in light of the reasonableness of the following algorithm
and method of evaluation:
Algorithm 4. 1. 2.
1. Assume the function realized by the net is also contained
in the target table. Change one atom pair hk hk in the table.
2. Place the net in a zero state by applying two successive
zero inputs.
3. When a conflict between the computed and the desired output
is detected, modify the offending PCLU's disc. fcn. by either (a)
increasing the mf weights if the generated output is lower in magnitude
than the desired output or (b) increasing the feedback weights if the
generated output is greater in magnitude than the desired output.
4. Reset the net to a zero state. Recompute the output sequence
and go to step 3 if an error is detected in any target sequence element
through the element containing the original alteration. Otherwise,
training is considered successful.
The method of evaluation and improvement was to determine the atom
alteration rules which, when used during step 1, would make it possible
for this algorithm to succeed.
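The error response of Step 3, which is common to Algorithms 4. 1. 1 and 4. 1. 2, reduces to a single decision about which weight vector to increase. The sketch below is illustrative only (the function name and string labels are assumptions):

```python
# Step 3 of the training algorithms: raise the mf weights when the
# generated output is too low, raise the feedback (inhibitory) weights
# when it is too high, and do nothing when it matches the target.

def correction(generated, desired):
    """Return which weight vector the trainer increases, or None."""
    if generated < desired:
        return "mf"
    if generated > desired:
        return "feedback"
    return None

print(correction(0, 2))  # mf
print(correction(2, 1))  # feedback
print(correction(1, 1))  # None
```

Note the asymmetry that drives the rest of the chapter: both responses are increases, so an output can only be lowered indirectly, through the recurrent inhibition.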
4. 2. The Output Sequence Set of a PCLU
The output sequence set of an arbitrary PCLU in the CA3 sector
net has some important properties which will ultimately allow the set
of output sequences of an N PCLU net to be completely generated. The
properties are also interesting in their own right.
Property 4. 2. 1.
If a net is initially in state Q = 0, then the output sequence
of any PCLU i for any input Mi will be of the form
σi(Mi) = 0 h1 h1 h2 h2 h3 h3 . . .
Proof: The proof can be summarized by Table 4. 2. 1, which traces the
state of the net, the feedback input Zi, the computed output yi, and the
delayed output hi through several time periods.
Picking up the action at t1, the input Mi extracts an output
of y1 from the PCLU. Since the input to all the BCLUs (H) is still 0,
their output is collectively 0. Therefore, the new state formed for the
t2 computations will be as shown.
For t2 the input to the PCLUs in general and for the PCLU i
in particular is no different than it was for t1. Therefore, the output
will not change. The input to the BCLUs has changed, however, and a
new collective output of Z2 should be expected. The state shift leaves
(H1, Z2) as the state for the t3 computations.
During t3 the feedback input to the PCLUs can be nonzero for
the first time. This is reflected in the change in the PCLU i output.
The BCLU inputs have not changed, so their output remains the same,
and it is the output of the PCLUs that is different.
It is clear that such a pattern will continue as long as the input
to the net or the functions computed by the functals do not change. QED
Property 4. 2. 2.
The output sequence component σk(Mk) = 0 h1 h1 h2 h2 . . .
generated by PCLU k must satisfy the following set of inequalities:
(1) h1 ≥ h2, h3, h4, h5, . . .
(2) h2 ≤ h3, h4, h5, h6, . . .
(3) h3 ≥ h4, h5, h6, . . .
(4) h4 ≤ h5, h6, h7, . . .
(5) h5 ≥ h6, h7, h8, . . .
(Note that the k subscript has been suppressed. )
Proof: The predicate for h1 is [Ak · Mk]. The predicate for any
other hj, j > 1, is [Ak · Mk − Bk · Zkj]. Clearly (1) is true. Therefore,
Table 4. 2. 1. The Computations, Over Several Periods, of
the CA3 Sector Net

period   state of net   feedback input   output   delayed output
         Q = (H, Z)     Zi               yi       hi
t0       (0, 0)         0                0        0
t1       (0, 0)         0                y1       0
t2       (H1, 0)        0                y1       h1
t3       (H1, Z2)       Zi2              y2       h1
t4       (H2, Z2)       Zi2              y2       h2
t5       (H2, Z3)       Zi3              y3       h2
t6       (H3, Z3)       Zi3              y3       h3
HW(H1) ≥ HW(Hj) for all j, and Hj ⊆ H1. Since the BCLUs are
threshold functions, HW(Z2) ≥ HW(Zj) and Zj ⊆ Z2. As a result
Bk · Zj ≤ Bk · Z2 and h2 ≤ hj for all j ≠ 2. Therefore, HW(H2) ≤
HW(Hj), j ≠ 2. Any PCLUs which fire for h2 will certainly fire for
hj, so H2 ⊆ Hj. This completes the proof of (2).
Continuing, HW(Z3) ≤ HW(Zj) and Z3 ⊆ Zj, j ≥ 4. Therefore,
Bk · Z3 ≤ Bk · Zj and h3 ≥ hj. HW(H3) ≥ HW(Hj), and so on. QED
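The alternating bounds of Property 4. 2. 2 are easy to check mechanically. The sketch below (function name assumed) collapses each pair hj hj to a single value per element, so h = [h1, h2, h3, . . .]:

```python
# Property 4.2.2 as a predicate: odd-numbered elements must dominate every
# later element, even-numbered elements must be dominated by every later one.

def satisfies_4_2_2(h):
    """h = [h1, h2, h3, ...], one value per element (pairs collapsed)."""
    for i, hi in enumerate(h, start=1):
        rest = h[i:]
        if i % 2 == 1 and any(hi < hj for hj in rest):
            return False
        if i % 2 == 0 and any(hi > hj for hj in rest):
            return False
    return True

print(satisfies_4_2_2([2, 0, 2, 0]))  # True: matches 0 2 2 0 0 2 2 0 0 ...
print(satisfies_4_2_2([1, 0, 2, 0]))  # False: h1 = 1 < h3 = 2
```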
4. 3. The Output Sequence Set of the Hippocampus Net
The following property was implied in the proof of Property
4. 2. 1.
Property 4. 3. 1.
The CA3 sector net has output sequences of the form
σ(M) = 0 H1 H1 H2 H2 H3 H3 . . .
Recall that in general the repetition of a subsequence in an out-
put sequence does not imply that the subsequence is a cycle. The CA3
sector net is nearly an exception, but the argument for it being an ex-
ception is purely academic, as can be seen by the following property.
Property 4. 3. 2.
If σ(M) = 0 H1 H1 . . . Hi Hi . . . Hj Hj . . .
and Hi = Hj (j > i), then Hi Hi Hi+1 Hi+1 . . . Hj-1 Hj-1
is a cycle.
Proof: Assume a state (Hi-1, Zi) produced an output Yi. This will
become Hi during the next period and the new state will be (Hi, Zi).
The next state will be (Hi, Zi+1). Similarly, assume a state
(Hj-1, Zj) produces an output Yj. This will become Hj during the
next period and the new state will be (Hj, Zj). The next state will be
(Hj, Zj+1). If Hi = Hj, then Zi+1 = Zj+1 and (Hi, Zi+1) = (Hj, Zj+1). The
two equal states mark the boundaries of a cycle. QED
Properties 4. 2. 2, 4. 3. 1, and 4. 3. 2 can be combined in an
algorithm which exhaustively lists all possible output sequences of a
CA3 sector net containing N PCLUs.
Algorithm 4. 3. 1.
1. Generate an output H1 from the set of 3^N possible outputs.
2. Generate an output H2 from the same set.
3a. If the sequence H1 H1 H2 H2 satisfies Property 4. 2. 2, go
to 4.
3b. Otherwise, go to 2 until every possible output candidate
for H2 has been tested. Then go to 1 and repeat until every possible
output candidate for H1 has been tested.
4. If the sequence H1 H1 H2 H2 satisfies Property 4. 3. 2, add
the sequence to the list of output sequences and go to 3b. Otherwise
set K = 3 and continue.
5a. Generate an output for HK from the set of possible outputs.
If the sequence H1 H1 H2 H2 . . . HK HK satisfies Property 4. 2. 2, go to
6.
5b. Otherwise, generate another output for HK and test again
until the set of outputs for the K-th element in the sequence has been
exhausted. Then generate a new output for HK-1 and reinitialize the
set of outputs to be tested for HK. The algorithm terminates when
all possible outputs for H1 have been tested.
6. If the sequence H1 H1 H2 H2 . . . HK HK satisfies Property
4. 3. 2, add the sequence to the output sequence set and go to 5b.
A version of this algorithm with the ability to generate all
possible target sequences for a net with N PCLUs was programmed
on the CDC 6500 computer (see Appendix B). Since it would be
prohibitively expensive to allow the program to generate all possible
target sequences, a representative sample was taken for several
values of N and for target sequences with the first element containing
all twos and the second element containing all zeros (to give target
sequences of maximum length). Sixteen was the longest output run-in
length found (for N = 5), with the length increasing slowly with
increasing N. Only output cycles of length 4 and equilibria were found;
there were approximately equal numbers of each.
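A compact sketch in the spirit of Algorithm 4. 3. 1 for a single-PCLU net (N = 1): exhaustive enumeration replaces the explicit backtracking of steps 3b and 5b, Property 4. 2. 2 prunes candidates, and a repeated element closes a sequence as in Property 4. 3. 2. All names are invented, and this is not the CDC 6500 program of Appendix B.

```python
from itertools import product

def alternating_ok(h):
    # h[0] is h1, so even Python indices hold the odd-numbered elements,
    # which must dominate everything later; even-numbered elements must
    # be dominated by everything later (Property 4.2.2).
    return all(
        (h[i] >= x) if i % 2 == 0 else (h[i] <= x)
        for i in range(len(h)) for x in h[i + 1:]
    )

def generate(max_len):
    found = []
    for length in range(2, max_len + 1):
        for h in product((0, 1, 2), repeat=length):
            if alternating_ok(h) and h[-1] in h[:-1]:
                found.append(h)   # a repeated element marks a cycle
    return found

seqs = generate(3)
print((2, 0, 2) in seqs)  # True: corresponds to 0 2 2 0 0 2 2 ...
print((1, 0, 2) in seqs)  # False: violates h1 >= h3
```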
4. 4. Rules for Successful CA3 Sector Net Training Using
Algorithm 4. 1. 2
The results presented in the previous two sections, along with
those below, are sufficient to develop the rules which assure success-
ful training using Algorithm 4. 1. 2. The key word in the following
property is "guaranteed. "
Property 4. 4. 1.
Using Algorithm 4. 1. 2 and its associated success criterion,
training of the hippocampus net is guaranteed to be successful if and
only if the following rules are obeyed. (The subscripts have been
omitted for simplicity. )
Rule 1: If σ(I) = 0 h1 h1 σ'(I), then changes in h1 are made
according to the following table.

h1   h1*
0    1     provided Δ < (T2 − T1)/N, N the number of
0    2         mf inputs to PCLU j
1    2
l 2
Rule 2: If σ(I) = 0 h1 h1 h2 h2 σ'(I), then changes in h2 are
made according to the following table.

h1   h2   h2*
2    0    1     provided Δ < (T2 − T1)/N, N the number of
2    0    2         mf inputs to PCLU j
2    1    2
2    2    0
2    2    1     provided δ < (T2 − T1)/L, L the number of
2    1    0         feedback inputs to PCLU i
1    1    0

Rule 3: If σ(I) = 0 (2 2 0 0)(2 2 0 0)* hi hi σ'(I) and hi = 1, then
hi* = 0 or 2.
Rule 4: If σ(I) = 0 2 2 (0 0 2 2)* hi hi σ'(I) and hi = 1, then
hi* = 0 or 2.
Proof:
Rule 1: Assume the net is realizing the function in the target
table; change the atom h1*. Clearly, if h1* is increased to 2, then
h1 can be increased to 2 by increasing the mf weights and training
will be successful. If h1* is increased to 1, it is necessary that an
increase in the mf weights not force an output of h1 = 2. The condition
Δ < (T2 − T1)/N
will prevent this from happening, since any one increment
of the disc. fcn. will be less than the T2 − T1 gap. Note that h1 can
never be decreased, since the feedback input for h1 will always be the
zero vector.
Rule 2: The table associated with Rule 2 defines the changes
that can be made in the second pair of atoms of a target sequence com-
ponent with guaranteed success. The changes in h2* are dependent on
h1, since this output element defines the upper bound on any change.
If h2 must be increased from 0 to 1, then the same Δ limit must be
observed as was defined in the proof of Rule 1. There is, of course,
no problem if h2 is increased to 2 (assuming h1 = 2). But note that
the alternative h1 = 1 and h2 is increased from 0 to 1 has been
omitted from the table. Any attempt to increase the disc. fcn. to
produce h2 = 1 under these conditions may inadvertently produce
h1 = 2. Since h1 cannot be decreased, the training would have to be
considered a failure.
In general, H2 is the first output element associated with a
nonzero feedback input. The existence of the special BCLU guarantees
that if h1 is nonzero, the feedback input vector is nonzero. This in
turn guarantees that the disc. fcn. of the PCLU j can be decreased by
increasing the feedback weights. This is the justification for the
inclusion of the last four entries in the table under Rule 2. Note that
a change in h2 from 2 to 1 requires a condition on δ. This condition
prevents the disc. fcn. from dropping from a value above T2 to a
value T1 or lower with a single increment of the feedback weights.
Rule 3: This rule summarizes the changes that can occur with
guaranteed success when the atom altered is hi*, i ≥ 3 and odd.
Suppose hi* is increased. From Property 4. 2. 2, the bound on this
increase is determined by hi-2. The possible changes are:

     hi-2   hi   hi*
(a)  1      0    1
(b)  2      0    1
(c)  2      0    2
(d)  2      1    2

For alternatives (a), (b), and (c) hi = 0 implies hk = 0, all
even k < i (using Property 4. 2. 2). If any of these changes are made,
then, when the error in the output is detected, the reaction of the
trainer is to increase the mf weights. In doing so, it is entirely
possible that some of the disc. fcns. of the even elements will be
inadvertently increased over the T1 threshold. When the PCLU
generates the incorrect output sequence element upon reinitialization,
the response of the trainer is to increase the feedback weights. The
possibility exists that this will force the disc. fcn. of hi to fall below
the desired threshold. To correct this, the mf weights are increased
again, creating the situation where the even elements may again become
incorrect. The trend is clear and the conclusion is that success cannot
be guaranteed if any of changes (a), (b), or (c) are made.
From the information given and Property 4. 2. 2, the output
sequence component associated with alternative (d) is of the form
σ(I) = 0 (2 2 h2 h2)(2 2 hk hk)* 1 1 σ'(I)
where the trailing pair 1 1 is hi hi, k < i and even, h2 and hk ∈ {0, 1},
and Property 4. 2. 2 holds.
If hk = 1 for any even k < i, then the situation is the same as in the
other three alternatives: there is the possibility of unstable training.
Therefore, all sequence components are eliminated except those of the
form suggested by the rule itself. The crucial step in the proof is to
demonstrate that the fatal trainer instability of the other alternatives
does not occur.
Let the j-th PCLU generate σj(I) and change hi* to 2. When
the change is first detected by the trainer, the mf weights are increased
to produce the correct value of the disc. fcn. for hi: Di(t1) ≥ T2.
However, as in the previous alternatives, Dk(t1) ≥ T1 may be true for
some even k < i. In order to compensate for this error, the trainer
increases the feedback weights, thus decreasing the disc. fcns. until,
in particular, Dk(t2) < T1. So far the script is the same as in all of
the other alternatives. Note, however, that originally hk < hi, k < i
and even. Since Zi ⊆ Zk, k < i and even, hi > hk implies HW(Zi) <
HW(Zk). The important property is the strictly less than of the
Hamming weight relation. This implies that the change in the disc.
fcn. for hi is strictly less than the change in the disc. fcn. for hk:
Di(t1) − Di(t2) < Dk(t1) − Dk(t2)
If Di(t2) < T2, the trainer will attempt to compensate by increasing
the mf weights again. This time, if conditions are right, Dk(t3)
will be less than Dk(t1). If Dk(t3) is still greater than T1, the
compensation in the feedback weights need be no greater than the
compensation for Dk(t2), and it can be less. If Di(t4) is still less
than T2, the mf weights will be increased less than the increase that
occurred during the computation of Di(t3). Eventually Di(tn) ≥ T2
while at the same time Dk(tn) is not increased enough to force
hk to be incorrect.
To complete the proof, note that any changes in the k-th
component, k ≤ i, are corrected before the change can affect the other
PCLUs. Therefore, the other sequence components through target
sequence element i do not change during the training for the k-th
component.
Now suppose h^i, i ≥ 3 and odd, is decreased. The bound on
the decrease is determined by h^{i-1} and the possible changes are
(again from Property 4.2.2):

         h^{i-1}   h^i   h^i*
    (a)     0       1     0
    (b)     0       2     1
    (c)     0       2     0
    (d)     1       2     1
Alternatives (b), (c), and (d) can be eliminated in short order as
successful training candidates. In all cases h^k = 2, k < i and odd,
and it is entirely possible that Z_j^k = Z_j^i for at least one of those k's.
If this is so, then any attempt to decrease D^i by increasing the feed-
back weights decreases D^k by the same amount. Therefore, h^k
will become incorrect at the same time h^i becomes correct. The
trainer will respond by increasing the mf weights, but the effect is
felt equally by both h^k and h^i. The result is training instability.

(Footnote: Since the mf weights increase in "quantum jumps," it would, in
general, not be possible to recompute D^i(t_1) exactly; the value actually
computed may range from a quantum higher to a quantum lower. If it
is the former, then it is possible that the difference D^i(t_3) - D^i(t_2) ≥
D^i(t_1) - D^i(t_2). In this case, the feedback weights would be required
to increase the same amount as before to correct h^k. However, the
next time the mf weights are increased, the increment required for a
correct h^i will be even less than before. Eventually this extra
negative weight will be great enough that the contribution of the mf
weights will be less than the contribution of the feedback weights, no
matter what the magnitude of the quanta, and the proof will proceed
as outlined.)
Alternative (a) implies output sequence components of the form

    σ(I) = 0 (h^i h^i 0 0)(h^k h^k 0 0)* 1 1 σ'(I)

where k < i and odd, and h^i, h^k ∈ (1, 2), along with Property 4.2.2.
If any of the h^k or h^i is one, then training instability may occur.
This leaves only output sequences of the form given in the rule state-
ment. In order to reduce h^i, the feedback weights are increased, and
all of the disc. fcns. D^k, k < i and odd, are reduced. Perhaps some
will be reduced to below T_2. Consequently, the mf weights will be
increased to compensate, with the possibility that D^i is forced to a
value above T_1. Fortunately, a property of the same nature as
described in alternative (d) of the previous set exists to prevent
training instability: Since Z_j^k ⊆ Z_j^i, if h^i < h^k for all k originally, then
HW(Z_j^i) > HW(Z_j^k). Therefore, the D^k will not be decreased as much
as D^i, and eventually D^i < T_1, while D^k ≥ T_2 for all k.
Rule 4: The final rule summarizes the changes that can occur
with guaranteed success when the atom altered is h^i, i ≥ 2 and even.
If h^i is increased, then the upper bound is determined by h^{i-1} and
the possible changes are:

         h^{i-1}   h^i   h^i*
    (a)     1       0     1
    (b)     2       0     1
    (c)     2       0     2
    (d)     2       1     2
Successful training for the (a), (b), and (c) alternatives cannot
be guaranteed, since h^i = 0 implies that h^k = 0, k < i and even.
The output sequence components accompanying alternative (d)
are of the form:

    σ(I) = 0 2 2 (h^k h^k 2 2)* h^i h^i 1 1 σ'(I)

where k is even and h^k, h^i ∈ (0, 1). The subset of sequence components
where h^k = 1 for any k can be immediately eliminated, leaving
sequences of the form given in the rule. Successful training is
guaranteed for these by the same argument as was used for (d) in the
first set of alternatives in Rule 3.
If h^i is decreased, then the lower bound will be determined by
h^{i-2} and the possible changes are:

         h^{i-2}   h^i   h^i*
    (a)     0       1     0
    (b)     0       2     1
    (c)     0       2     0
    (d)     1       2     1

Successful training for alternatives (b), (c), and (d) cannot be
guaranteed, since h^i = 2 implies h^k = 2, k < i and odd.
The output sequence components accompanying alternative (a)
are of the form:

    σ(I) = 0 2 2 (h^k h^k 2 2)* h^i h^i 1 1 σ'(I)

where k is even and h^k, h^i ∈ (1, 2). Again the subset of sequence
components where h^k = 1 for any k can be eliminated, leaving
sequences of the form given in the rule. Successful training is
guaranteed for the remainder by the same argument as was used for
(a) of the second set of alternatives in Rule 3. QED
CHAPTER 5

A FUNCTAL SYSTEM MODEL OF THE HIPPOCAMPUS: PART 2

5.1. The Target Table
The previous chapter noted that if a net is realizing the
function in the target table and then one pair of atoms is changed,
the output function of the net after training could differ greatly
from the function in the table. Consequently, if orthodox functal
system training techniques were used, that is, if the entire new target
table had to be realized by the net, training would be unstable and the
net would be essentially useless. The following assumption summarizes
a target table form, different from the one originally defined in
Section 2.7, which helps to circumvent this problem.
Assumption 5.1.1.
The target table can contain a set of target sequences for
each input. The interpretation given to each set of target sequences
is: Any output sequence not contained in a set for a particular input
is considered to be harmful to the entity of which the hippocampus or
its model is a part. Those output sequences which are target
sequences are either neutral or beneficial to the entity.
The target table is a conceptual device which makes explicit
the relationship between the natural system and its environment. It
is not intended that a physical structure exist to hold the table. All
neuroscientific interpretations of target tables must comply with this
fact.
5.2. The Trainer

Algorithm 4.1.2 has been modified to be compatible with the
new target table concept. The new trainer operating algorithm for
one time period is outlined in Algorithm 5.2.1, and the trainer structure
associated with it is given in Figure 5.2.1. The following is a
description of the algorithm.
Figure 5.2.1. The trainer operating algorithm (flowchart).
{M' : HW(M') = J, M ⊂ M', M' ∈ S_J, and h_1(M') = 2}.

3. If O_K = φ, continue. Otherwise, select X_K from the set

   {M : HW(M) = K-1, h_1(M) = 0, and M ⊂ X_{K-1}}

   and go to 2.

4. For each remaining M, set h_1(M) = 0. STOP.
Example 6.2.3

Let N = 6.

Step 2. S_1 = {000001, 000010, 000100, 001000, 010000, 100000}.
RULE is inconsequential for this set. Suppose all members of S_1
are assigned a 0 output.

Step 3. K = 2. P_2 = P_1 - S_1 - R_1. O_2 is not empty.
Let X_2 = 000001.

Step 2. S_2 has 15 elements. Only the elements in the set
{100001, 010001, 001001, 000101, 000011} satisfy RULE.
Suppose h_1(000011) = 2; the remainder are assigned 0 outputs.

Step 3. K = 3. P_3 = P_2 - S_2 - R_2. O_3 is not empty. Let
X_3 = 000101.

Step 2. Only the elements in the set {100101, 010101, 001101}
satisfy RULE. Suppose h_1(001101) = 2; the remainder are assigned
0 outputs.

Step 3. K = 4. P_4 = P_3 - S_3 - R_3. O_4 is not empty. Let
X_4 = 010101.

Step 2. S_4 = {110101}. This is also the only element which
satisfies RULE. Suppose h_1(110101) = 2.

Step 3. K = 5. P_5 = P_4 - S_4 - R_4 = φ. STOP.
The ability of the system to train the net to realize an
"Algorithm 6.2.2" function depends on the following conditions being
satisfied.
Conditions 6.2.1

1. The net is initially generating the trivial function, with
all W = 0.

2. The function generated by Algorithm 6.2.2 is in the target
table.

3. The mf input vectors are presented to the net in order of
increasing Hamming weight.

4. Each input is held for as long as is required to train the
net to generate the correct output.
In addition, there is a fifth condition, consisting of two relations
between the values assigned to Δ, T_1, and T_2, that requires a more
lengthy discussion. One of these, relating Δ and T_1, is particularly
complex, and the following property is presented in an attempt to ease
the shock of the more general result. Note that the sets S_K defined
in Algorithm 6.2.2 are required, but since they must be computed
anyway, this is not an inconvenience. Also, once training is complete
for the inputs in S_J^2, no other inputs will require a training session.
Property 6.2.2

For every PCLU in the net, if

(a) Conditions 6.2.1 are satisfied,

(b) J^(N-J+1) Δ ≥ T_2, where J is the lowest K for which S_K^2 ≠ φ,

    S_K^2 = {M : M ∈ S_K and h_1(M) = 2},

    and N is the dimension of the mf input to the PCLU,

(c) (J-1) J^(N-J) Σ_{i=1}^{N-J+1} J^(1-i) < T_1/Δ,

(d) |S_J^2| = N - J + 1,

then Algorithm 5.2.1 will successfully train the net to realize the
function in the target table.
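As a quick numeric check (my code; the parameter values are those chosen in the worked example later in this section), conditions (b) and (c) hold for N = 5, J = 2, T_1 = 9.8 × 10^3, T_2 = 10^4, Δ = 650:

```python
# Check conditions (b) and (c) of Property 6.2.2 for the N=5, J=2 example.
N, J, T1, T2, DELTA = 5, 2, 9.8e3, 1e4, 650

lhs_b = J ** (N - J + 1) * DELTA                      # (b): J^(N-J+1) * DELTA
s = sum(J ** (1 - i) for i in range(1, N - J + 2))    # sum of J^(1-i), i=1..N-J+1
lhs_c = (J - 1) * J ** (N - J) * s                    # (c): left-hand side

print(lhs_b >= T2)           # -> True  (10400 >= 10000)
print(lhs_c < T1 / DELTA)    # -> True  (15.0 < 15.07...)
```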
Proof:

Let

    μ_K^2 = {M : HW(M) = K and h_1(M) = 2},
    μ_K^0 = {M : HW(M) = K and h_1(M) = 0},
    μ^0 = ∪_{i=1}^{N} μ_i^0,
    μ^2 = ∪_{i=1}^{N} μ_i^2.

A necessary and sufficient condition for a function generated by a
PCLU is:

    A · M < T_1,   M ∈ μ^0    (1)
    A · M ≥ T_2,   M ∈ μ^2    (2)

The proof consists of developing an expression for the largest
A · M, M ∈ μ^0, over all functions generated by Algorithm 6.2.2
obeying (d). It will be used to construct relations between T_1, T_2,
and Δ such that the satisfaction of relations (1) and (2) is insured.
Example

Let N = 5, J = 2, and X_2 = 00001. Then the function has

    S_2^2 = μ_2^2 = {00011, 00101, 01001, 10001}.

After training for the first vector in μ_2^2, the weight compo-
nent(s) a_x corresponding to the nonzero components of X_J will have
a value CΔ, C an integer, where

    J CΔ ≥ T_2.    (3)

C represents the number of training trials required to drive the
discriminant above T_2.
Example

After training is complete for 00011, the mf weight vector
A will be A = (0, 0, 0, 1, 1)CΔ.

Training for the second vector in μ_2^2 benefits from the
previous training, since the discriminant at the start of training will
already have a value (J-1)CΔ. Therefore, the increment required
of the appropriate weight components is (1/J)CΔ. Of course, C
must contain J as a factor.
Example

After training is complete for 00101, A = (0, 0, 1/2, 1, 3/2)CΔ.

At the completion of all training, the components a_x have
a magnitude:

    a_x = CΔ Σ_{i=1}^{N-J+1} J^(1-i).    (4)

Example

After training is complete, A = (1/8, 1/4, 1/2, 1, 15/8)CΔ.

In order to add Δ an integral number of times to a weight
component, it is necessary that

    C = J^(N-J).    (5)

Furthermore, in order for the training to be successful, it is
necessary that both X_J and the input of highest Hamming weight
assigned a zero output, which will always be 1-X_J, produce dis-
criminants less than T_1. But since the weight components a_x are
incremented every time any weight component is incremented, and
the number of components a_x is at least equal to the number of
other components incremented during any one training session, the
discriminant for X_J will be at least as large as the discriminant
for 1-X_J. Therefore, it is necessary that

    (J-1) a_x < T_1    (6)

or,

    (J-1) CΔ Σ_{i=1}^{N-J+1} J^(1-i) < T_1.    (7)

Example

With C = 2^(5-2) = 8, A = (1, 2, 4, 8, 15)Δ.
Note that A · (1-X_J) = A · X_J = 15Δ.
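The example can also be checked by simulating the training sessions directly. The sketch below (my reconstruction of the procedure just described, not thesis code) trains on the vectors of μ_2^2 in order, raising each discriminant to J·CΔ by splitting the shortfall evenly over the weights selected by the input's nonzero components; weights are in units of Δ.

```python
J, C, DELTA = 2, 8, 1           # C = J**(N-J), relation (5), with N = 5
target = J * C * DELTA          # discriminant value required by relation (3)

mu_J2 = ["00011", "00101", "01001", "10001"]   # mu_2^2 for X_2 = 00001

w = [0.0] * 5
for m in mu_J2:
    active = [i for i, bit in enumerate(m) if bit == "1"]
    shortfall = target - sum(w[i] for i in active)
    if shortfall > 0:
        for i in active:                 # split the shortfall evenly
            w[i] += shortfall / len(active)

print(w)   # -> [1.0, 2.0, 4.0, 8.0, 15.0], i.e. A = (1, 2, 4, 8, 15)*DELTA
```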
Relations (3), (5), and (7) are enough to insure that training
will be successful if the functions are of the kind discussed so far.
They can be used to compute the values to be assigned to Δ, T_1,
and T_2 of the PCLU before training begins. QED

Example

    16Δ ≥ T_2.    (3)
    15Δ < T_1.    (7)

T_2 = 10^4, T_1 = 9.8 × 10^3, and Δ = 650 satisfy these inequalities.
If an Algorithm 6.2.2 function does not obey (d), then the
expression for the largest A · M, M ∈ μ^0, can be awarded to either
A · X_J or A · (1-X_J), as the following examples demonstrate.

Example 6.2.1

Let

    S_2^2 = {(000011)},
    S_3^2 = φ,
    S_4^2 = φ,
    S_5^2 = {(111101)}.

The mf weight vector after training is: A = (1/5, 1/5, 1/5, 1/5, 1, 6/5)CΔ.
Therefore, since X_2 = (000001),

    A · (1-X_2) = 9/5 CΔ > A · X_2 = 6/5 CΔ.
Example 6.2.2

Let

    S_3^2 = {(0000111), (0001011)},
    S_4^2 = {(0110011), (1010011)}.

The mf weight vector after training is:

    A = (1, 4, 5, 16, 48, 69, 69)(C/48)Δ.

Therefore, since X_3 = (0000011),

    A · (1-X_3) = 74/48 CΔ < A · X_3 = 138/48 CΔ.
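Both examples can be verified with the same shortfall-splitting simulation used earlier (my code; with C = 5 and C = 48 respectively, matching the denominators above, every increment stays integral):

```python
def train(vectors, n, target):
    # One session per vector: raise its discriminant to `target` by splitting
    # the shortfall evenly over the weights its nonzero components select.
    w = [0] * n
    for m in vectors:
        active = [i for i, bit in enumerate(m) if bit == "1"]
        shortfall = target - sum(w[i] for i in active)
        if shortfall > 0:
            for i in active:
                w[i] += shortfall // len(active)
    return w

# Example 6.2.1: N=6, J=2, C=5, X_2 = 000001, target = J*C
w1 = train(["000011", "111101"], 6, 10)
print(w1)                           # -> [1, 1, 1, 1, 5, 6] (units of C*DELTA/5)
print(sum(w1[:5]), w1[5])           # A.(1-X_2) = 9 > A.X_2 = 6

# Example 6.2.2: N=7, J=3, C=48, X_3 = 0000011, target = J*C
w2 = train(["0000111", "0001011", "0110011", "1010011"], 7, 144)
print(w2)                           # -> [1, 4, 5, 16, 48, 69, 69]
print(sum(w2[:5]), w2[5] + w2[6])   # A.(1-X_3) = 74 < A.X_3 = 138
```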
Therefore, for the general Algorithm 6.2.2 function,

    max{(J-1) a_x, A · (1-X_J)} < T_1.

The following property includes the specific values for this expression.
Property 6.2.3

For every PCLU in the net, if

(a) Conditions 6.2.1 are satisfied,

(b) J CΔ ≥ T_2, J as defined in Property 6.2.2,

(c) max{(J-1) a_x, A · (1-X_J)} < T_1, where

    a_x = CΔ [ Σ_{i=1}^{|S_J^2|} J^(1-i)
             + J^(1-|S_J^2|) Σ_l ( Π_j j^(-|S_j^2|) ) Σ_{i=1}^{|S_l^2|} l^(-i) ]
                            (see C1)   (see C2)

    A · (1-X_J) = CΔ [ Σ_{i=1}^{|S_J^2|} J^(1-i)
                     + J^(1-|S_J^2|) Σ_{l,k} (l-k+1) ( Π_j j^(-|S_j^2|) ) Σ_{i=1}^{|S_l^2|} l^(-i) ]
                                     (see C3)          (see C2)

    C1 -- The sum is over all subscripts l of S_l^2 where
          l ≥ J+1 and S_l^2 ≠ φ.
    C2 -- The product is over all j, J < j < l, such that S_j^2 ≠ φ.
    C3 -- In addition to the l defined in C1, k is the largest
          k' for which S_{k'}^2 ≠ φ and yet k' < l.

(d) C = J^(|S_J^2|-1) Π_K K^(|S_K^2|),

    where the product is over all subscripts K of S_K^2 greater than J.
Proof:

At the completion of training for S_J^2:

    a_x(S_J^2) = CΔ Σ_{i=1}^{|S_J^2|} J^(1-i)

    A · (1-X_J) = D(S_J^2) = a_x(S_J^2).

If |S_J^2| had been one greater, say due to some input Y,
then the amount of increase required in the discriminant A · X_J
would have been

    J^(1-|S_J^2|) CΔ.

This quantity would have been divided among the J-1 a_x weight
components and one other weight component whose value had remained
zero up to that time.

The next input requiring training is in S_K^2, K > J. It differs
from Y only in having more than one other weight component which
has remained zero. Therefore, the increase required to attain
J CΔ is divided among K components, and the a_x increment is:

    CΔ (J/K) J^(-|S_J^2|).    (1)

If there is another element in S_K^2, then it will differ from the
preceding vector in only two components, in the same way two vectors
are different in S_J^2. Therefore, the discriminant of the new input
before training is short of J CΔ by (1). If this value is divided evenly
among the K weight components associated with nonzero input
components, then the increment to any one weight component is:

    CΔ (J/K^2) J^(-|S_J^2|).    (2)
In general, after the completion of training for S_K^2:

    a_x(S_K^2) = a_x(S_J^2) + CΔ J^(1-|S_J^2|) Σ_{i=1}^{|S_K^2|} K^(-i).

Each of the weight components selected by X_K - X_J (there are K-J
of these) is increased by the same amount as the a_x components, and
the sum of all other increments to all the remaining weights is equal
to the a_x increment. Therefore

    D(S_K^2) = D(S_J^2) + (K-J+1) J^(1-|S_J^2|) Σ_{i=1}^{|S_K^2|} K^(-i) CΔ.
If another S_L^2, L > K, is not empty, then, by the same
reasoning as for the S_K^2 case, the necessary total increment for
the first input of this set must equal:

    K [ K^(-|S_K^2|-1) J^(1-|S_J^2|) ] CΔ.

This quantity is divided among L components. Therefore, the a_x
increment is 1/L of this. In general, after training is complete for
this set:

    a_x(S_L^2) = a_x(S_K^2) + CΔ K^(-|S_K^2|) J^(1-|S_J^2|) Σ_{i=1}^{|S_L^2|} L^(-i)

and

    D(S_L^2) = D(S_K^2) + (L-K+1) K^(-|S_K^2|) J^(1-|S_J^2|) Σ_{i=1}^{|S_L^2|} L^(-i) CΔ.
By an extension of this argument, the expressions at the
completion of all training are those given in the statement of the
property.

The property is proved if a technicality involving the integer
C is cleared up. The smallest quantity a weight component can be
increased by is Δ. In order for C to contain all factors that might
occur during a training session, and thereby allow an increase of Δ
and no more, C should contain all of the factors given in (d). QED
Example

The computation of (c) for Example 6.2.1.

    a_x = CΔ (2^0 + 2^0 (no product term) 5^(-1)) = (1 + 1/5)CΔ = 6/5 CΔ.

Therefore,

    (J-1) a_x = (2-1) a_x = 6/5 CΔ.

    A · (1-X_J) = CΔ (1 + 2^0 (5-2+1)(1/5)) = 9/5 CΔ.

Therefore,

    max{(J-1) a_x, A · (1-X_J)} = 9/5 CΔ.
Example

The computation of (c) for Example 6.2.2.

    a_x = CΔ ( Σ_{i=1}^{2} 3^(1-i) + 3^(1-2) (no product term) Σ_{i=1}^{2} 4^(-i) )
        = 69/48 CΔ.

Therefore,

    (J-1) a_x = 2 a_x = 138/48 CΔ.

    A · (1-X_J) = CΔ ( 1 + 1/3 + 3^(-1) (4-3+1)(1/4 + 1/16) ) = 74/48 CΔ.

Therefore,

    max{(J-1) a_x, A · (1-X_J)} = 138/48 CΔ.
CHAPTER 7

DISCUSSION

7.1. Summary
An automaton model of the CA3 sector of mammalian hippo-
campus is presented. The connectivity between the PCLU (the py-
ramidal cell model) rank and the BCLU (the basket cell model) rank
is left unspecified, except that a direct PCLU-BCLU-PCLU loop is
required for each PCLU. It is assumed that whenever the output of
a PCLU's delay is nonzero, the output of its special BCLU is also
nonzero. The input to each PCLU is a vector M_i with components
having values from the set {0, 1}. The output of the model is a
time-sequence of vectors of the form

    σ(M_i) = 0 H_1 H_1 H_2 H_2 H_3 H_3 ...,

with each vector H_j having components h_j^i with values from the set
{0, 1, 2}. Assuming each nontrivial input is separated by a zero
input to clear circulating quantities left over from the previous input,
the output sequences are shown to have these properties:

1. Each sequence terminates in either an equilibrium or
a cycle.

2. h_k^3 ≥ h_k^4, h_k^5, etc.

3. If H_i = H_j, j > i, then H_i H_i ... H_{j-1} H_{j-1} is a cycle.

An algorithm is developed to generate all possible output sequences
of any model containing N PCLUs.
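Property 3 amounts to cycle detection on the stream of H vectors. The small sketch below is mine (the actual CA3 update rule appears on pages not reproduced here, so an arbitrary stand-in map is used): it records the first occurrence of each vector and reports the cycle once a repeat appears.

```python
def output_sequence(step, h0, limit=50):
    # Follow H_1, H_2, ... until a vector repeats; the stretch from its first
    # occurrence to the repeat is the cycle (an equilibrium is a 1-cycle).
    seen, seq, h = {}, [], h0
    for _ in range(limit):
        if h in seen:
            return seq, seq[seen[h]:]
        seen[h] = len(seq)
        seq.append(h)
        h = step(h)
    return seq, None

# stand-in dynamics on 2-component vectors over {0, 1, 2}
step = lambda h: ((h[0] + 1) % 3, h[1])
seq, cycle = output_sequence(step, (0, 0))
print(cycle)   # -> [(0, 0), (1, 0), (2, 0)]
```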
The characteristics of a training structure for reshaping the
output sequences of the foregoing model are also presented. This
structure is supported by a target table containing a set of allowed
output sequences for each input to the model. It is assumed that
when the system is placed in its environment for the first time, the
function realized by the model is contained in the target table. In
order to insure this, a special training session is held before the
model is placed in its environment. An algorithm (Algorithm 6.2.2)
is developed to generate the function placed in the target table for
the special session. If certain parameters (the mossy fiber and
feedback weights) are set correctly (to zero) at the beginning of this
session, the function realized by the model at the completion of
training is the function in the target table.
After the system is placed in its environment, desired changes
in the model's function are registered by changing the target table.
The trainer compares the output sequence generated in response to
a net input, M, with the sequences in the target table. If no match
can be found (which implies a change in the target table has been
detected), a marker is set. The next time M occurs as the net input,
the output sequence up to the point of the fault is generated, and then
a training session is triggered. It is proved that the training session
is guaranteed to "succeed" if and only if both the change in the target
table and the selection of some of the model's parameters (Δ, δ, T_1,
and T_2) are in accordance with certain rules (Property 4.4.1). To
"succeed," the output sequence must be the same as the target table
sequence only up to and including the element containing the change.
It is understood that the outputs following the subsequence just
described, as well as any other output sequence of the model, may
be altered by this training session. The model's new function may or
may not be the same as the function in the target table. If it is not
the same, then more training sessions are required.
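The mismatch-and-marker bookkeeping just summarized can be sketched as follows (class and method names are mine, not the thesis's):

```python
class TargetTable:
    # Bookkeeping sketch of the Chapter 5 trainer interface described above.
    def __init__(self, table):
        self.table = table           # input -> set of allowed output sequences
        self.marked = set()          # inputs whose last output matched nothing

    def observe(self, m, sequence):
        if sequence not in self.table.get(m, set()):
            self.marked.add(m)       # a change in the target table is detected

    def needs_training(self, m):
        # True on the next occurrence of m: a training session is triggered
        return m in self.marked

tt = TargetTable({"M1": {("0", "1", "1")}})
tt.observe("M1", ("0", "2", "1"))    # generated sequence matches no target
print(tt.needs_training("M1"))       # -> True
```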
A number of other ancillary results on the time-domain
behavior of CA3-like automata were also obtained, both analytically
and by computer simulation.
7.2. Comments on the Neuroscientific Aspects of this Study

Assume that the hippocampus is a memory bank containing
transformations of single inputs into output sequences, and that its
task is to make act decisions. Furthermore, assume that a trainer
is available for changing the output sequence that any input is trans-
formed into, and that it operates in the manner described in Chapter 5.
The following observations might now be of speculatory interest to
neuroscientists.

The results of Section 6.1 on training phases, together with
Property 4.4.1, suggest an increase in both the capability of the
trainer and the complexity of the hippocampus's transformations as
it matures. At birth, and during phase 1, a single input is related
to a single output; that is, the relevant output is not sequential.
At this stage, the trainer can increase the firing rate of an output
but not decrease it. As the hippocampus matures, and in particular
as the basket cell rank begins to make connection with pyramidal
cells, the outputs of the hippocampus can become sequential in nature,
involving oscillations. The trainer now has the ability to decrease
the output rate, but at the risk of forcing the output into oscillations.
The trainer cannot yet suppress these oscillations. This capability
is achieved only when the basket cells have made connections with a
sufficient number of pyramidal cells.
A second observation is related to the assumed ability of the
natural system to avoid training instabilities. Recall that in the
model, successful training can be guaranteed if and only if certain
rules are followed when altering the target table and certain relation-
ships are obeyed when specifying the CA3 sector model's parameters.
But even then, undesirable changes can occur in other output sequences.
In fact, it is possible that: (1) these changes cannot be
corrected; or (2) as each change is corrected, another mismatch
occurs. Such training instabilities might be dangerous to an animal.

A third observation involves the problem the trainer has in
selecting the output to be retrained when a mismatch occurs. As
mentioned in Section 5.3, one approach would be to select the most
"uncertain" PCLU. Another approach, involving training all PCLUs
at once, might also be used.
A related observation involves the knowledge a hypothetical
natural target table generator has of the connectivity of the natural
functal net. From computer simulations, it appears that the more
information the target table generator has about the connectivity, the
more freedom it has in making changes in the target table that are
guaranteed realizable by the net. On the other hand, the more
connectivity knowledge the target table generator has, the greater
the information that must be genetically stored and the greater the
chance for a connectivity error to occur during growth. In the
author's opinion, the weight of evidence supports only the most
general kind of connectivity knowledge on the part of the natural
target table generator, and hence supports a limited function-changing
capability with safety.
The final observation pertains to the code employed by the
natural system to convey act information. If the hippocampus is
indeed an act computer, there must be a direct relationship between
behavior and the hippocampus's output. Since the behavior of a
mammal often consists of essentially a stimulus-directed Markovian
sequence of actions, each output of the hippocampus might well be
related in a nontrivial way to its preceding output. In other words,
a hippocampus output associated with a certain behavioral act on one
occasion may be associated with a different behavioral act on another
occasion. The original function of the hippocampus would have to be
compatible with this, as would the hypothetical target table generator
when it decided on changes in the hippocampal output function.
7.3. Comments on the Engineering Aspects of the Study

The functal system theory developed in this report introduces
a new perspective for understanding interconnected arrays of variable-
function nonlinear function generators (functals). Useful applications
of this theory may arise in fields other than neurocybernetics.

It is generally accepted that the nervous system combines
memory and logic in the same location in an extremely effective way.
The Kilmer-McCulloch Retic model, the Kilmer-McLardy hypothesis
of the task of the hippocampus, and the hippocampus model presented
in this report suggest a partial organization of a robot controller
which takes advantage of this property. Consider the design of the
controller for a moon rover. The controller can be imagined as a
hierarchy of subcontrollers with the apex occupied by the Retic, which
79
commands the mode of the rover. As an example, suppose one of the
_ modes is "proceed with the search. "
The rover would receive information on its environment
through its sensory transducers. A reasonable choice of transducers
for a moon rover might be a 3-D television camera, temperature and
pressure sensors (for internal state monitoring), and tactile sensors
(on probes, shovels, and bumpers). The data from these would be
fed into processors designed to extract certain kinds of information.
Some of these may be assigned the task of processing data for input
to the hippocampus system.
The hippocampus occupies the next level of the hierarchy; it
computes the acts within a mode. For example, the acts within the
"proceed with the search" mode might define the direction and speed
of the rover and the search mode of its camera system. The acts
associated with an input configuration would have to be programmed
on earth according to the best information available. Once on the
moon, however, if either a situation occurred which was found to be
harmful to the rover or an unexpected situation occurred, then the
hippocampus would be retrained. From the hippocampus the act
command would be passed on to lower levels where the actual motor
command sequences would be generated.
There are many problems yet to be solved while pursuing the
details of any hippocampus system design for a robot. Most of these
are analogous to problems yet to be solved in the natural system.
Among these are:

(1) the definition of the code assigned to each output;

(2) a determination of whether the code is context-sensitive
or context-free;

(3) the definition of an initial function for a net which affords
the robot maximum protection and versatility;

(4) the specification of the connectivity of the net (Do usable
connectivities exist which increase the freedom of the
trainer?);

(5) the specification of the trainer rules to (a) guarantee
successful training, (b) select the PCLU to be trained,
(c) select the direction in which the PCLU is changed,
and (d) allow the new output to fit smoothly into the act
sequence.
LIST OF REFERENCES

1. Von Bekesy, G., Sensory Inhibition (Princeton University Press,
1967).

2. Eccles, J. C., Ito, M., and Szentagothai, J., The Cerebellum
as a Neuronal Machine (Springer-Verlag, New York, 1967).

3. Eccles, J. C., "Postsynaptic inhibition in the central nervous
system," The Neurosciences (Gardner C. Quarton, Theodore
Melnechuk, and Frances O. Schmitt, eds., The Rockefeller
University Press, New York, 408-426, 1967).

4. Wilson, V. J., "Inhibition in the central nervous system,"
Scientific American 214, 102-110 (1966).

5. Scheibel, M. E., and Scheibel, A. B., "Spinal motorneurons,
interneurons and Renshaw cells. A Golgi study," Arch. Ital.
Biol. 104, 328-353 (1966).

6. Maturana, H. R., Lettvin, J. Y., McCulloch, W. S., and Pitts,
W. H., "Anatomy and physiology of vision in the frog (Rana
pipiens)," J. Gen. Physiol. 43 (No. 6, Pt. 2), 129-175 (1960).

7. Ratliff, F., "On fields of inhibitory influence in a neural
network," Neural Networks (E. R. Caianiello, ed., Springer-
Verlag, New York, 6-23, 1968).

8. Barlow, H. B., and Levick, W. R., "The mechanism of
directionally selective units in the rabbit's retina," J. Physiol.
(Lond.) 178, 477-504 (1965).

9. Wilson, D. M., and Waldron, I., "Models for the generation of
the motor output pattern in flying locusts," Proceedings of the
IEEE 56, 1058-1064 (1968).

10. Ratliff, F., and Mueller, C. G., "Synthesis of "On-Off" and
"Off" responses in a visual-neural system," Science 126, 840-
841 (1957).

11. Hubel, D. H., and Wiesel, T. N., "Receptive fields, binocular
interaction and functional architecture in the cat's visual cortex,"
J. Physiol. (Lond.) 160, 106-154 (1962).

12. Kilmer, W. L., "A circuit model of the hippocampus of the brain,"
AFOSR Scientific Report, Division of Engineering Research,
Michigan State University (July 1970).

13. Kilmer, W. L., "The reticular formation: Part I, Modeling
studies of the reticular formation; Part II, The biology of the
reticular formation," AFOSR Scientific Report, Division of
Engineering Research, Michigan State University (February 1969).

14. Kauffman, S. A., "Metabolic stability and epigenesis in randomly
constructed genetic nets," J. Theoret. Biol. 22, 437-467 (1969).

15. Purpura, D. P., in Basic Mechanisms of the Epilepsies
(Jasper, H. H., Ward, A. A., and Pope, A., eds., Little, Brown
and Co., Boston, 1969).
APPENDIX A

BACKGROUND ON THE DEVELOPMENT OF THE HIPPOCAMPUS NET

A.1. The Pyramidal Cell Logic Unit
The pyramidal cell model as originally conceived was the set
of continuous firing rate equations shown in Figure A.1. In this
figure, Equation 2 says that the firing rate of model pyramidal cell j
at the axon hillock at time t, y_j(t), is a linear function of x_j(t) only
when x_j(t) is in the range from 0 to a_myj. If x_j(t) is less than
zero, then y_j(t) = 0. If x_j(t) is greater than a_myj, then y_j(t) is
equal to the maximum value of a_myj a_yj.
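In modern pseudocode (symbol names follow the text; the sample slope and saturation values are mine), Equation 2 is the piecewise-linear saturation:

```python
def firing_rate(x, a_yj=2.0, a_myj=5.0):
    # Equation 2: clamp below at 0, linear with slope a_yj in between,
    # saturate at the maximum value a_yj * a_myj.
    if x <= 0.0:
        return 0.0
    if x < a_myj:
        return a_yj * x
    return a_yj * a_myj

print(firing_rate(-1.0), firing_rate(2.0), firing_rate(9.0))   # -> 0.0 4.0 10.0
```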
The function x_j(·) as defined in Equation 1 consists of six
terms:

1. Σ_{i=1}^{I} ε_ji γ_ji a_ji(t-τ_Aji) z_i(t-τ_ji)
This term represents the effect of the firing rates of the basket cells
on the firing rate of the pyramidal cell. To explain the concepts which
were used to develop this term, assume synaptic contact is made
between basket cell i and pyramid j. At time t-τ_ji the basket cell
fired at a rate z_i(t-τ_ji). This signal traveled through various
collaterals to bouton j,i, arriving there at time t-τ_Aji, altered by
an amount γ_ji. (Note: By convention, if there is no connection
between basket cell i and pyramid j, then γ_ji = 0.) At or near the
bouton, the signal is modified by the memory process a_ji(t-τ_Aji),
which is defined in Equation 4. Finally, the signal passes through
the dendritic arbor and soma of the pyramidal cell as an inhibitory
post-synaptic potential and arrives at the axon hillock at time t,
having been altered on the way by an amount ε_ji.
2. Σ_{k=1}^{K} σ_jk s_k(t-τ_sjk)

This term represents the effect of the septal fiber firing rate on the
    X(t) = -ε γ A(t-τ_A) Z(t-τ) + σ s(t-τ_s) + θ M(t-τ_M)
           + β Y(t-τ_x) - Γ(t)                                    (1)

    Y(t) = (y_1(t), y_2(t), ..., y_J(t))^T                        (2)

where

    y_j(t) = 0,              x_j(t) ≤ 0
           = a_yj x_j(t),    0 < x_j(t) < a_myj
           = a_yj a_myj,     x_j(t) ≥ a_myj

    Γ(t) = Ψ ∫_0^t exp[-(1/T)(t-w)] X(w) dw + Γ_0                 (3)

    A(t) = 1 + (Η - 1) exp[-(1/T) ∫_0^t exp[-(1/λ)(t-w)] γ Z(w-τ_z) dw]
                                                                  (4)

Figure A.1. The pyramidal cell firing rate equations.

The expressions are for J pyramidal cells, I basket cells, K
septal fibers, and N mossy fibers. The dimensions of the vectors
are: X(t), Γ(t), Y(t), Ψ, and Γ_0 : J×1; Z(t) : I×1;
s(t) : K×1; M(t) : N×1. The dimensions of the matrices are:
A(t), Λ, Η, τ_A, τ, ε, γ : J×I; β, τ_x : J×J; τ_M, θ : J×N;
τ_s, σ : J×K.
firing rate of the pyramidal cell. The memory effects between the
septum and the cell are assumed to be constant relative to the
basket to pyramidal cell memory. This will also be true of both
the mossy fiber and other pyramidal cell inputs discussed below.

3. Σ_{n=1}^{N} θ_jn m_n(t-τ_Mjn)

This term represents the effect of the mossy fiber input on the firing
rate of the pyramidal cell.

4. Σ_{l=1}^{J} β_jl y_l(t-τ_xjl)

This term represents the input from other pyramids and the possible
feedback from pyramid j itself.
5. I‘. t
J( )
This term is the variable threshold defined by Equation 3. This
expression is an attempt at a simple linear continuous equation for
the kind of firing rate dependence on the input rate threshold above
which nerve spikes are generated: the threshold increases as the
firing rates of the inputs to the neuron increases in the recent past.
The equation is a convolution of the potential function with an
exponential decay. Thus, at some time t the threshold is made up
of a constant term plus an infinite number of terms of the form

    f(w) exp{-(t-w)/τ},   0 ≤ w ≤ t.

Therefore, the value of the potential function which occurred at time
w = 0 will have decayed the most, since it would have the value

    f(0) exp(-t/τ);

and the value of the potential function occurring at time w = t will
have decayed not at all, since it would have the value

    f(t) · 1 = f(t).
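The decay behavior described above can be checked numerically. The sketch below discretizes the Equation 3 integral for a single cell with a left Riemann sum; the step size and the values of τ, Ψ, and Γ_0 are illustrative assumptions, not parameters from the thesis:

```python
import math

def threshold(xs, dt, tau, psi, gamma0):
    """Discretized Equation 3 for one cell:
    Gamma(t) = psi * integral_0^t exp(-(t-w)/tau) x(w) dw + Gamma_0,
    evaluated at t = len(xs)*dt by a left Riemann sum over samples xs."""
    t = len(xs) * dt
    acc = 0.0
    for k, x in enumerate(xs):
        w = k * dt
        acc += math.exp(-(t - w) / tau) * x * dt
    return psi * acc + gamma0

# An input that fired only at w = 0 contributes less to the present
# threshold than the same input firing just now, because the early
# contribution has decayed by exp(-t/tau).
early = threshold([1.0] + [0.0] * 99, dt=0.01, tau=0.2, psi=1.0, gamma0=0.0)
late = threshold([0.0] * 99 + [1.0], dt=0.01, tau=0.2, psi=1.0, gamma0=0.0)
assert early < late
```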
Equation 4 is an attempt to give the pyramidal cell model a
memory, where memory can be loosely defined as a device for storing
records of events which have occurred in time previous to the present.
The a_ji-th entry expresses the concept that the memory
process becomes larger as the basket cell i's firing rate z_i(t-τ_z) in
the recent past becomes larger, and approaches 1 in the limit: that is,
as

    A = ∫₀ᵗ exp[-(t-w)/λ_ji] z_i(w-τ_zji) dw

becomes large,

    exp[-A/T_ji] → 0   and   a_ji(t) → 1.

If the basket cell i's firing rate z_i(t-τ_z) has been very small in the
recent past, then a_ji(t) approaches some minimum value η_ji: that is, as
the A expression defined above becomes small,

    exp(-A/T_ji) → 1   and   a_ji(t) → η_ji.
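Both limiting behaviors can be verified numerically. The sketch below discretizes a single a_ji entry of Equation 4, with the delays omitted for clarity; the values chosen for λ, T, and η are illustrative assumptions:

```python
import math

def memory_entry(zs, dt, lam, T, eta):
    """Discretized a_ji(t) of Equation 4 for one basket-cell input:
    a = 1 + (eta - 1) * exp(-A/T), where
    A = integral_0^t exp(-(t-w)/lam) z(w) dw (delays omitted)."""
    t = len(zs) * dt
    A = sum(math.exp(-(t - k * dt) / lam) * z * dt
            for k, z in enumerate(zs))
    return 1.0 + (eta - 1.0) * math.exp(-A / T)

# eta is the minimum memory value (hypothetical choice: 0.2).
quiet = memory_entry([0.0] * 50, dt=0.1, lam=1.0, T=0.5, eta=0.2)
active = memory_entry([50.0] * 50, dt=0.1, lam=1.0, T=0.5, eta=0.2)
# With no basket-cell activity, A = 0 and a = eta; with sustained
# high activity, A is large and a approaches 1.
assert abs(quiet - 0.2) < 1e-9
assert active > 0.99
```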
The PCLU as defined in Chapter 3 is an extreme simplification
of this continuous model. Some of the more important simplifying
assumptions are:
1. There is a constant threshold.
2. There are no inputs from other PCLUs.
3. The septal input controls the magnitude of A and is not of
primary importance in the determination of the pyramidal cell firing
rate.
4. The memory has no decay.
5. Most time lags are omitted.
A. 2. The Basket Cell Logic Unit
The terms in the basket cell firing rate equations, Figure A. 2,
are analogous to terms in the pyramidal cell firing rate equations.
Z(t), Equation 2, is the basket cell firing rate vector. It is expressed
in the same form as Y(t), with α_zi being the proportionality con-
stant and α_mzi being the maximum permissible value of d_i(t).
D(t) is the basket cell firing rate potential vector, and it is
analogous to X(t). The first term on the right hand side of Equation 1
is of the same form as Equation 1, Figure A. 1, term 1: Φ corres-
ponds to ε; Λ corresponds to γ; G(·) corresponds to A(·). The
second term in the expression, Ω(t), is the threshold for the basket
cells. Its Equation 3 is analogous to Equation 3 of Figure A. 1. The
last term in the expression, ζ, is the constant firing rate potential
vector for the basket cells, and it is analogous to C of the pyramidal
cell expression. The memory expression, Equation 4, is of the same
form as the memory expression for the pyramidal cells.
    D(t) = Φ Λ G(t-T_Q) Y(t-T_R) - Ω(t) + ζ                        (1)

    Z(t) = (z_1(t), z_2(t), ..., z_I(t))^T,                        (2)

    where

    z_i(t) = 0                  if d_i(t) ≤ 0
           = α_zi d_i(t)        if 0 < d_i(t) < α_mzi
           = α_zi α_mzi         if d_i(t) ≥ α_mzi

    Ω(t) = Ω_1 ∫₀ᵗ exp[-(t-w)/R] D(w) dw + Ω_0                     (3)

    G(t) = 1 + (ρ - 1) exp{-(1/ξ) ∫₀ᵗ exp[-(t-w)/ν] Λ Y(w-T_V) dw} (4)

Figure A. 2. The basket cell firing rate equations.
The expressions are for J pyramidal cells and I basket cells. The
dimensions of the vectors are: D(t), Z(t), Ω(t), Ω_1, Ω_0, ζ, and
ρ : I×1; Y(t) : J×1. The dimensions of the matrices are: Φ, G(t),
T_R, T_Q, T_V, ξ, ν : I×J; Λ : J×I.
It is clear from the BCLU model presented in Chapter 3
that some radical simplification of this model has been made. The
major additional assumption for the BCLU over and above those
presented in the previous section is that there is no memory process.
A. 3. The Connectivity
As originally conceived, the connectivity of the hippocampus
model was based on the concept of a card. A card was defined as
(1) all pyramidal cell (PC) models connected to one septal fiber,
plus (2) all basket cell (BC) models which receive inputs from the PCs
of the card (it was assumed that a BC did not receive inputs from two
different cards), plus (3) a cell in CA1 which received inputs from
every PC in the card. The output of the card was the output of this
last cell. The communication between cards was accomplished by
BC collaterals to the PCs of other cards.
This concept was modified to the connectivity described in
Chapter 3, with one septal fiber per PC, because it seemed possible
that a card could be modeled as a single PC.
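The card concept above can be sketched as a small data structure. Everything in the sketch below (names, counts, index values) is illustrative only, not part of the original model:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """One 'card' of the original connectivity: the PC models driven
    by a single septal fiber, the BC models fed only by this card's
    PCs, and one CA1 cell that receives input from every PC and
    serves as the card's output."""
    septal_fiber: int
    pcs: list = field(default_factory=list)
    bcs: list = field(default_factory=list)
    ca1_cell: int = 0

def card_output_inputs(card):
    # The CA1 output cell receives an input from every PC of the card.
    return [(pc, card.ca1_cell) for pc in card.pcs]

# Hypothetical card: septal fiber 0 drives PCs 1-3; BCs 10-11 receive
# inputs only from these PCs; CA1 cell 20 is the card's output.
card = Card(septal_fiber=0, pcs=[1, 2, 3], bcs=[10, 11], ca1_cell=20)
print(card_output_inputs(card))  # [(1, 20), (2, 20), (3, 20)]
```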
APPENDIX B
A COMPUTER PROGRAM BASED ON ALGORITHM 4. 3. 1
Figure B. 1 is a CDC 6500 FORTRAN EXTENDED (MSU)
listing of the program TTABLE and its subroutines. This program
generates all possible output sequences of a CA3 sector net model
containing N PCLUs. It does so according to Algorithm 4.3.1.
Note that one data card is required in order to specify the number
N; the format of this card is 10X, I5. The output sequences are
printed in rows of ten; the format for a typical sequence is demon-
strated by the following, which is an actual output sequence generated
by TTABLE with N = 5:
PCLU 1 - 1 0 1 0 1 0 1   (The output sequence component for PCLU 1.)
PCLU 2 - 2 0 1 0 0 0 0
PCLU 3 - 2 0 2 1 2 2 2
PCLU 4 - 2 0 1 0 1 0 1
PCLU 5 - 2 0 0 0 …
         cycle
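The "cycle" label above marks the point at which the output sequence begins to repeat. As a language-neutral illustration of following a trajectory until it cycles, the sketch below uses a hypothetical next-state function (`rotate`); it is a stand-in, not the CA3 net model or Algorithm 4.3.1:

```python
def output_sequence(step, initial):
    """Follow a state trajectory until a previously seen state recurs;
    return the sequence of distinct states and the index at which the
    cycle begins. 'step' is the net's next-state function."""
    seen = {}          # state -> position of first occurrence
    seq = []
    state = initial
    while state not in seen:
        seen[state] = len(seq)
        seq.append(state)
        state = step(state)
    return seq, seen[state]

# Hypothetical 3-PCLU stand-in dynamics: rotate the state tuple.
def rotate(state):
    return state[1:] + state[:1]

seq, start = output_sequence(rotate, (1, 0, 2))
print(seq, start)  # [(1, 0, 2), (0, 2, 1), (2, 1, 0)] 0
```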
Figure B. 1. A CDC 6500 FORTRAN EXTENDED (MSU) listing of the
program TTABLE and its subroutines.