DESIGN AND DEVELOPMENT OF HARDWARE AND SOFTWARE ENVIRONMENT FOR TESTING CMOS VLSI ARTIFICIAL NEURAL NETWORK CHIPS

By

Anuj Soni

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Electrical Engineering

1992

ABSTRACT

DESIGN AND DEVELOPMENT OF HARDWARE AND SOFTWARE ENVIRONMENT FOR TESTING CMOS VLSI ARTIFICIAL NEURAL NETWORK CHIPS

By Anuj Soni

This work involves the design and development of a testing environment for artificial neural network chips. It is of utmost importance to build appropriate hardware and develop interactive software to test neural chips efficiently. A good testing environment not only protects the chip, but also delivers high resolution for measurements. An analog interface circuit was designed that can apply 32 different analog voltages and can also measure 32 different analog voltages. Different kinds of digital and analog neural chips can be tested using this interface. Some features and capabilities of a 50 neuron dendro-dendritic architecture chip were explored. Various experiments measuring the transient response, used to predict the convergence speed and other characteristics of the 50 neuron chip, were performed. For efficient VLSI layout, the need for a new network architecture called double square is discussed. The hardware and software development for testing the 81 neuron chip based on this architecture is also emphasized.

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my advisor, Dr. Fathi M. A. Salam, for his inspiration, encouragement and constructive criticism throughout the course of this research. I appreciate and thank Dr. Timothy Grotjohn and Dr. Hassan Khalil for serving on my graduate committee. I wish to express my sincere thanks and appreciation to my family and friends for their prayers and encouragement. Last but not least, I would like to acknowledge the efforts of my labmates; without them, this would have been a difficult task to finish.

Contents

1 Introduction and Background
  1.1 The Neural Systems - A Biological Neural Network
  1.2 Artificial Neural Networks - Some Basic Structures
    1.2.1 A Binary Valued Neuron
  1.3 A Simple Model
  1.4 Models of Artificial Neural Networks
    1.4.1 Feedforward Model
  1.5 Implementation of Neural Networks

2 Feedback Neural Model
  2.1 Electronic Implementation of The Hopfield Feedback ANN Model
  2.2 Synaptic Weights
  2.3 Testing Environment for The Three Neuron Hopfield-Type Chip
    2.3.1 32x32 Analog I/O Channel Interface
  2.4 Experimental Results of The Three Neuron Hopfield-Type Chip
3 Dendro-Dendritic Neural Network Architecture
  3.1 50 Neuron CMOS VLSI Chip With Digital Learning
  3.2 Transient Analysis of The 50 Neuron Chip
  3.3 A Real-Time Application Using The 50 Neuron Chip as a Pattern Classifier
    3.3.1 Storing Two Characters in each Category
    3.3.2 Storing Three Characters in each Category

4 The Double Square Architecture
  4.1 The Need for a New Architecture
  4.2 Interface Circuit and Software Environment
  4.3 Testing of Double Square Chip

5 Summary and Conclusions
  5.1 Summary
  5.2 Conclusions

List of Tables

2.1 Results of the 3 neuron Hopfield-type chip
4.1 Test results for a single hexagonal network configuration
4.2 Test results for a single square network configuration
4.3 Test results for a double square network configuration

List of Figures

1.1 A typical neuron in the human brain
1.2 A processing element in the neural network
1.3 Two layer feedforward artificial neural network
2.1 Schematic of a Hopfield-type 3 neuron circuit
2.2 A neuron with the self feedback
2.3 Schematic of the 32x32 analog I/O channel interface
3.1 Layout of PCB for the 50 neuron interface circuit
3.2 Transients for the 50 neuron chip
3.3 Results for 2 patterns stored in each class
3.4 Results for 3 patterns stored in each class
4.1 Square and Hexagonal Connections
4.2 Schematic of the DSQ interface circuit
4.3 Port selection circuitry for the interface

Chapter 1

Introduction and Background

Neural networks provide a unique computational architecture to address problems that are intractable or cumbersome with traditional methods. These new computing architectures are inspired by the structure of the brain, which is radically different from the computers that are widely used today. Neural networks use massive parallelism that relies on dense arrangements of interconnections [14].

The term artificial neural networks is coined from the network of nerve cells in the brain. Although the biological details are eliminated in these computing models, artificial neural networks retain enough of the structure to provide insight into biological neural processing. Neural networks provide an effective and efficient approach for a spectrum of applications involving pattern recognition, pattern mapping, pattern classification and many others.

Artificial neural network architectures are different from traditional processor computers. Traditional machines are capable of performing a hundred or more basic commands such as addition, subtraction, multiplication and many others. These commands are executed sequentially within a certain clock period. In contrast, a neural network processing unit may accomplish its task in only one step or a few steps [30]. A summation is performed at the input, and incremental changes are made to the parameters associated with the interconnections.
The processing capability of a neural network is judged by the number of interconnection updates performed in some reference time, whereas for Von Neumann machines, standard benchmarks are set up and the number of instructions executed per second is recorded.

This thesis is organized as follows. Chapter 1 covers background and fundamental material on neural networks. It includes a biological overview of neural networks, a discussion of the feedforward neural architecture, the back propagation algorithm, and implementation aspects of neural networks. Chapter 2 starts with an introduction to the Hopfield feedback neural architecture, followed by a brief discussion of the electronic implementation of the Hopfield neural model done by Wang. An overview of the hardware testing environment comes later in the chapter, and finally the results of the Hopfield 3 neuron chip are discussed. Chapter 3 covers the introduction of the dendro-dendritic architecture and a brief discussion of the 50 neuron chip based on this architecture. Some definitions and experimental results on the transient behavior of the neural chip are discussed in the subsequent sections. The capability of the 50 neuron chip as a pattern classifier is also elaborated in this chapter. Chapter 4 describes the new chip with 81 neurons and a different architecture, called the double square architecture. A comparative study of different architectures such as hexagonal, square and double square for implementation in VLSI is also presented. A brief discussion of the design of an interface circuit to perform the communication between the 81 neuron double square architecture chip and a PC follows afterward. Chapter 5 summarizes the research and the need for a testing environment for neural chips. Future perspectives on the testing of chips are also briefly discussed in the conclusions.

1.1 The Neural Systems - A Biological Neural Network

Neural network architectures are motivated by the different structures of brain nerve cells. A neuron is the basic anatomical unit of the nervous system [9]. A human brain has approximately $10^{11}$ neurons of different kinds and on the order of $10^{14}$ synaptic interconnections between them [31]. A schematic of a typical nerve cell in the human brain is shown in figure 1.1.

Figure 1.1: A typical neuron in the human brain. [The figure labels the pre-synaptic axon, dendrites, cell body, and post-synaptic axon.]

A tree-like network of fibers called dendrites is connected to the cell body, or soma. The nucleus is located inside the cell body, and various metabolic activities take place inside the soma. An axon extends from the cell body, which eventually branches into strands and substrands [31]. The ends of these strands and substrands are transmitting ends, called synapses, to other neurons. The axon of a typical nerve cell is connected to a few thousand neurons via synapses.

Generally, the transmission signals come from the dendrites to the cell, and the cell body performs a summation on all the incoming signals. The effect of this summation raises or lowers the potential inside the soma. The neuron fires if the aggregated potential becomes greater than some threshold potential. The generated pulse-like signal is called the action potential. The action potential depends upon the intensity and the duration of the aggregated signals that excite the neuron [30].

1.2 Artificial Neural Networks - Some Basic Structures

1.2.1 A Binary Valued Neuron

McCulloch and Pitts [1943] [17] proposed a simple model of a neuron as a binary threshold unit. The output of this neuron model was restricted to two logical values, 0 and 1. The model was simply based on the summation of the inputs to a neuron from other neurons; the neuron outputs a one or a zero according to whether the summation is greater or less than a certain threshold. McCulloch and Pitts proved that this model neuron was capable of performing any computation that an ordinary digital computer can, though not necessarily as fast or as conveniently. Although this model realized the threshold functionality predicted of biological neuron behavior, it did not capture the complexity or the known non-linearity of the actual neuron.
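As a minimal illustration of the binary threshold unit just described, the following C++ sketch computes the output of one such unit. The weights and threshold, chosen here to realize a two-input AND gate, are arbitrary example values, not parameters taken from this work.

```cpp
#include <iostream>
#include <vector>

// A McCulloch-Pitts unit: output is 1 if the weighted input sum
// exceeds the threshold, and 0 otherwise.
int mcculloch_pitts(const std::vector<double>& w,
                    const std::vector<int>& x, double threshold) {
    double sum = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i)
        sum += w[i] * x[i];            // summation of weighted inputs
    return sum > threshold ? 1 : 0;    // binary threshold decision
}

int main() {
    // Example: a unit that behaves as a two-input AND gate.
    std::vector<double> w = {1.0, 1.0};
    double threshold = 1.5;
    std::cout << mcculloch_pitts(w, {1, 1}, threshold) << "\n"; // prints 1
    std::cout << mcculloch_pitts(w, {1, 0}, threshold) << "\n"; // prints 0
    return 0;
}
```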
1.3 A Simple Model

A typical processing element in an artificial neural network is depicted in figure 1.2.

Figure 1.2: A processing element in the neural network. [The figure shows multiple inputs entering processing unit j through connection strengths, with the unit's outputs fanning out to other units.]

Multiple inputs from other processing elements/neurons come to the processing element as shown in figure 1.2, and these inputs are connected to the processing element via variable connection strengths. In figure 1.2 the weights $w_1, w_2, \ldots, w_n$ are associated with the strengths of the interconnections. The processing element performs a weighted sum on the inputs and uses a nonlinear threshold function $f$ to compute its output. The output result is sent further along the output connections to the target processing elements.

1.4 Models of Artificial Neural Networks

Presently many researchers are trying to realize the current knowledge of the biological neural system through artificial neural networks, and they have come up with models having certain topologies, learning algorithms and other characteristics [1, 2, 6, 10]. In recent years, many different architectures have been proposed and implemented [1, 8, 25].

Figure 1.3: Two layer feedforward artificial neural network.

These artificial neural network models can be classified broadly into two categories, viz., feedback and feedforward. The feedback model is discussed in detail in the subsequent chapter, and the feedforward model is briefly reviewed in the following subsection.

1.4.1 Feedforward Model

Single or multiple layers are used between the input and the output nodes for computation [6]. The basic feedforward multilayer ANN (artificial neural network) architecture is shown in figure 1.3. Figure 1.3 depicts a two layer feedforward architecture. The input is applied to the first layer, called the input layer, and is fed forward to the last layer, called the output layer. Any layer between the input and the output layer is called a hidden layer, and there can be more than one hidden layer. These layers, viz., input, hidden and output, are cascaded to form a feedforward artificial neural network. The governing static equation for each neuron/processing element can be written as

$$y_j = S\Big(\sum_i w_{ji}\, x_i + \theta_j\Big),$$

where $y_j$ is the output of the $j$-th neuron, $S(\cdot)$ is a nonlinear monotone non-decreasing sigmoidal function, $w_{ji}$ is the interconnecting synaptic weight from the $i$-th neuron output of the previous layer to the $j$-th neuron input, $x_i$ is the $i$-th output of the neuron unit from the previous layer, and $\theta_j$ is the threshold bias at the input of the $j$-th neuron.

The feedforward neural net can be programmed through the back error propagation rule presented by Rumelhart et al. [12].
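Before turning to the learning rule, the static equation above can be made concrete with a short C++ sketch of one feedforward layer. The logistic sigmoid used for $S(\cdot)$ and all names here are illustrative choices for the example, not the particular functions used in this work.

```cpp
#include <cmath>
#include <vector>

// One feedforward layer: y_j = S( sum_i w[j][i] * x[i] + theta[j] ).
// S is taken here to be the logistic sigmoid, one common choice of
// monotone non-decreasing nonlinearity.
double sigmoid(double u) { return 1.0 / (1.0 + std::exp(-u)); }

std::vector<double> forward_layer(const std::vector<std::vector<double>>& w,
                                  const std::vector<double>& theta,
                                  const std::vector<double>& x) {
    std::vector<double> y(w.size());
    for (std::size_t j = 0; j < w.size(); ++j) {
        double s = theta[j];                 // threshold bias of unit j
        for (std::size_t i = 0; i < x.size(); ++i)
            s += w[j][i] * x[i];             // weighted sum of the inputs
        y[j] = sigmoid(s);                   // nonlinear activation
    }
    return y;
}
```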
The back propagation learning algorithm, implemented in software, involves a forward propagating step followed by a backward propagating step. Both the forward and the backward steps are performed for each pattern presented during training. The forward propagating step begins with the presentation of an input pattern to the input layer of the network, and continues as activation level calculations propagate forward through the hidden layers. In each successive layer, every processing element sums its inputs and then applies a sigmoidal function to compute its output. The output layer then produces the output of the network.

The backward propagation step begins with the comparison of the network's output pattern to the target vector, and the difference is calculated. The backward propagation step computes the error for the hidden units and changes the weights, starting with the output layer and moving backward through the successive hidden layers [12]. In this step, the network corrects its weights in such a way as to decrease the observed error.

In the backward propagation algorithm [12], the error value $\delta$ is computed for all the processing units, and weight changes are calculated for the interconnections. For unit $j$ in the output layer, the error value is

$$\delta_j = (t_j - o_j)\, g'(s_j),$$

where
$t_j$ = target value for unit $j$,
$o_j$ = output value for unit $j$,
$g'(x)$ = derivative of the sigmoid function $g$,
$s_j$ = weighted sum of the inputs to $j$.

For a hidden layer, the error value of unit $j$ is computed as

$$\delta_j = \Big[\sum_k \delta_k\, w_{kj}\Big]\, g'(s_j).$$

In this case, a weighted sum is taken of the $\delta$ values of all the units $k$ that receive output from unit $j$. The adjustment of the interconnecting weights is done by using the $\delta$ values of the processing units. The connection weight adjustment is done as follows:

$$\Delta w_{ji} = \eta\, \delta_j\, a_i,$$

where $\eta$ is the learning rate and $a_i$ is the output of unit $i$. For more details, see [12].

1.5 Implementation of Neural Networks

Implementation of neural networks is usually done with software simulators and hardware VLSI chips [9, 11, 25]. Computer simulation programs can be run on Von Neumann machines. With software simulators, one can implement any arbitrary model of a neural network. Various choices of the input, output and other capabilities can be flexibly simulated and tested using these simulators. On the other hand, hardware implementations can accelerate the speed of response. Hardware can be designed using analog computing technologies or a combination of analog and digital. Various custom logic chips [26] and enhanced logic memory chips have been fabricated for neural network implementation. The main constraint in neural network chips is the perceived need for a large number of interconnections. The space taken by the synaptic interconnections is usually the bottleneck in the design of the chip. Recently, new architectures and designs have produced efficient neural network chips.

Chapter 2

Feedback Neural Model

Many researchers have proposed different models for the neural network system. The Hopfield neural network model is one of the predominant and most well known feedback circuits [1-5]. This model has been extensively used in associative memory and optimization problems [3].

Hopfield presented a circuit consisting of nonlinear graded-response model [2] neurons, organized into networks with effective symmetric connections.
In early studies, biological McCulloch-Pitts neurons [17] were modeled as logic decision elements described by two-valued states. In general, the McCulloch-Pitts neural model did not capture the important aspects of high interconnectivity between the neurons and of analog processing.

The Hopfield feedback neural model has a simple topological configuration and is represented with binary or analog inputs and outputs. Each neuron has the characteristic of a continuously differentiable sigmoid nonlinearity [2]. The outputs of the neurons are fed back to the inputs via synaptic weights. The synaptic weight between the $i$-th and $j$-th neurons is denoted by $T_{ij}$.

The following set of coupled nonlinear differential equations describes the dynamics of an interacting system of $N$ neurons. These equations determine the change in the neuronal state variables with time under synaptic circuit influences [32]:

$$C_i \frac{du_i}{dt} = \sum_j T_{ij} V_j - \frac{u_i}{R_i} + I_i, \qquad (2.1)$$

where $V_j = g_j(u_j)$ and $R_i$ is the parallel combination of the input resistance and the resistance used to model synaptic connectivity.

The Hopfield model employs Hebb's rule [13] as a learning algorithm. The Hebb rule is simply stated by Hopfield [1] as

$$T_{ij} = \sum_{s=1}^{M} V_i^s V_j^s,$$

where $T_{ij}$ is the synaptic weight from the output of neuron $j$ to the input of neuron $i$, $V_i^s$ is the $i$-th bit of stored memory $s$, and $M$ is the number of stored memories. The above equation can also be written in matrix form as [34]

$$T = \sum_{s=1}^{M} V^s (V^s)^T.$$

If further learning is required with an additional desired pattern $V^a$, then the above equation can be written recursively as [34]

$$T_{\text{new}} = T_{\text{old}} + V^a (V^a)^T,$$

where $T_{\text{new}}$ and $T_{\text{old}}$ are the new and old weight matrices.

The dynamical behavior of the feedback neural net can be captured by using the energy function associated with equation (2.1) [34]. A first integral of the system can be written as [34, 33]

$$E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j + \sum_i R_i^{-1} \int_0^{V_i} g_i^{-1}(V)\, dV - \sum_i I_i V_i, \qquad (2.2)$$

and thus the dynamical equation of the feedback ANN model can be rewritten as

$$C_i \frac{du_i}{dt} = -\frac{\partial E}{\partial V_i}. \qquad (2.3)$$

The derivative of the energy function with respect to time is

$$\frac{dE}{dt} = \sum_i \frac{\partial E}{\partial V_i} \frac{dV_i}{dt} = \sum_i \frac{\partial E}{\partial V_i} \frac{dV_i}{du_i} \frac{du_i}{dt} = -\sum_i C_i \frac{dV_i}{du_i} \left(\frac{du_i}{dt}\right)^2 = -\sum_i C_i\, g_i'(u_i) \left(\frac{du_i}{dt}\right)^2 \le 0.$$

As $C_i \ge 0$ and $g_i(u_i)$ is a monotone increasing function, $\frac{dg_i(u_i)}{du_i} \ge 0$. This system hence works like a gradient system. The energy function decreases along the trajectories, and its time derivative equals zero at $\frac{du_i}{dt} = 0$, which is an equilibrium point of the system [34, 33].

2.1 Electronic Implementation of The Hopfield Feedback ANN Model

An electrical circuit for the Hopfield model can be implemented by using an operational amplifier as the neuron body, resistances as synaptic weights, and transmission wires in place of axons and dendrites [32]. The amplifiers used to model neurons are characterized by sigmoid, monotonic input-output relations. An excitatory synapse connection is given the output from the positive terminal, and an inhibitory synapse takes the output from the negative terminal of the amplifier [5].

Simple nMOS transistors can be used as programmable synaptic weights in the VLSI implementation of a certain class of ANN [35]. One cannot theoretically rule out the possibility of oscillations or chaotic dynamics when gate voltages are not constrained by symmetry, but one can ensure the presence of stable equilibria [35].
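The gradient behavior derived above is easy to check numerically. The following C++ sketch, a minimal illustration rather than a model of the chip itself, integrates equation (2.1) by forward Euler for a three neuron network whose weights are set by the Hebb rule. The tanh nonlinearity, the normalized parameters ($C_i = R_i = 1$, $I_i = 0$) and the step size are assumptions made only for this example.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Forward-Euler integration of C_i du_i/dt = sum_j T_ij V_j - u_i/R_i + I_i,
// with V_j = g(u_j) = tanh(u_j) and normalized parameters C = R = 1, I = 0.
int main() {
    const int N = 3;
    std::vector<double> pattern = {1.0, -1.0, 1.0};   // one stored memory
    double T[3][3];
    for (int i = 0; i < N; ++i)                       // Hebb rule:
        for (int j = 0; j < N; ++j)                   // T_ij = V_i^s V_j^s
            T[i][j] = pattern[i] * pattern[j];

    std::vector<double> u = {0.1, -0.2, 0.1};         // initial condition
    const double dt = 0.01;
    for (int step = 0; step < 1000; ++step) {
        std::vector<double> V(N), du(N);
        for (int j = 0; j < N; ++j) V[j] = std::tanh(u[j]);
        for (int i = 0; i < N; ++i) {
            double sum = 0.0;
            for (int j = 0; j < N; ++j) sum += T[i][j] * V[j];
            du[i] = sum - u[i];                       // synaptic drive - leak
        }
        for (int i = 0; i < N; ++i) u[i] += dt * du[i];
    }
    // Typically settles near the stored pattern (or its complement for an
    // anti-correlated start), consistent with E decreasing along trajectories.
    for (int i = 0; i < N; ++i)
        std::printf("V[%d] = %+.3f\n", i, std::tanh(u[i]));
    return 0;
}
```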
The VLSI implementation of a three neuron Hopfield-type model was done by using CMOS double inverters as the neuron body [24] and nMOS transistors as the conductance elements. The network has $2N^2$ fully connected nonlinear synaptic weights; a model employing three neurons thus has 18 synaptic weights. This layout was fabricated by MOSIS using an n-well 2 um CMOS process, with a 2.2 x 2.2 mm^2 chip area in a 40 pin DIP package. The schematic diagram for the 3 neuron Hopfield-type feedback network is shown in figure 2.1.

Figure 2.1: Schematic of a Hopfield-type 3 neuron circuit. [The figure shows the array of synaptic transistors T11 through T33 and their complements feeding the three double-inverter neurons.]

The well behaved monotonic nondecreasing sigmoid function can be implemented by using two CMOS inverters in series. The advantage of using a CMOS double inverter is its simplicity in design and the smaller number of transistors, which can save significant area in VLSI implementation when building large networks. A simple neuron with feedback realized by two CMOS inverters in series possesses two stable equilibria [24]. Figure 2.2 shows a simple neuron with self feedback, where $u_i$ and $v_i$ are the respective input and output of the $i$-th neuron.

Figure 2.2: A neuron with the self feedback.

In a biological neural system, the input $u_i$ lags behind the instantaneous output $V_i$ of the other cell due to the input capacitance $C_i$ of the cell membrane, the transmembrane resistance $R_i$ and the conductance $T_{ij}$. This natural phenomenon of input capacitance and transmembrane resistance of a biological neuron is compensated via the parasitic capacitance and resistance of the MOS device.

2.2 Synaptic Weights

It is of utmost importance that the synaptic strength of the interconnecting weights amongst the neurons be variable. This changeability in the strength of synapses can be implemented via nMOS transistors [15, 9]. The conductance can be varied by changing the gate voltage of the nMOS transistor. Although the nMOS transistor has nonlinear transfer characteristics, the dynamic behavior of the system can be supported with theoretical analysis analogous to [20, 21].

2.3 Testing Environment for The Three Neuron Hopfield-Type Chip

Testing involves careful application and measurement of different voltages to and from the chip. To control the different voltages on the chip, an analog interface circuit was designed [29]. This interface can be operated using any IBM compatible PC. The analog interface has the capability of applying 32 different analog voltages in the range of -5 to +5 volts, and it can also measure 32 different analog voltages sequentially. The interface itself is compact and versatile, as it can be used to test different types of analog and digital neural chips. The design and operation of this analog interface circuitry are discussed in the following subsection.

2.3.1 32x32 Analog I/O Channel Interface

A computer performs all its operations at digital logic levels. All the I/O operations of the computer bus and other control signals are in binary form. Thus, a DAC is required to perform the digital to analog conversion. The schematic of this interface is shown in figure 2.3. The working of this interface is elaborated as follows. The digital data from the computer data bus is downloaded to a 128x8-bit static RAM by keeping the sRAM in the WRITE mode.
Data values ranging from 00000000 to 11111111 are stored in 32 different memory locations of the static RAM. The data bus of the static RAM is also connected to the digital to analog converter. After loading the data, the mode of operation of the static RAM is changed to READ, and the 8-bit synchronous counter is initialized, which feeds the data to the 32 sample and hold circuits sequentially.

Figure 2.3: Schematic of the 32x32 analog I/O channel interface. [The figure shows the synchronous counter, decoders, latches, static RAM, DAC, analog multiplexers, sample and hold banks, ADC, and the neural chip under test.]

The function of the counter is threefold, viz.:

1. To select the memory address locations starting from 0x00,
2. To select the sample and hold circuits using 4x16 decoders,
3. To select the analog data path from the DAC (digital to analog converter) to the sample and hold circuits via the analog multiplexer.

These three operations are done simultaneously by the counter at a frequency of 500 Hz. The refreshing of the capacitors connected to the sample and hold circuits becomes easier and independent of the computer by using the synchronous counter and the static RAM. A separate circuit using a crystal oscillator is employed to generate the clock for the synchronous counter. Similar to the input side of the circuit, there are 32 sample and hold circuits at the output side of the interface to read the analog signals from the chip. An ADC (analog to digital converter) at the output read side of the circuit is used to convert the sampled analog signals to digital data, which is then read by the PC.

Following is a brief description of the latches used in the circuit. The various latches serve different purposes. The numbering assigned to the latches is in accordance with the port addresses assigned to them from the computer.

1. Latch 0: This latch is used to enable and disable the initial condition from the interface circuit to the Hopfield three neuron chip.

2. Latch 2: Latch 2 is used to send the counter initialization control word. The synchronous counter (SN74LS590N) pins connected to this latch are G (output enable), CCKEN (counter enable) and CCLR (counter clear).

3. Latch 3: Seven signals, viz., CS1, CS2, CS3, CS4, CS5, MEMR and MEMW of the static RAM are controlled through this latch.

4. Latch 4: This latch addresses the sRAM when the data is loaded initially into the memory. After loading the data, the OE (output enable) of this latch is disabled and the counter is initialized.

5. Latch 5: The purpose of this latch is to load the data from the PC into the static RAM.

6. Latch 6: This latch enables the sample and hold circuits at the output side of the interface circuit to sample the analog signals.

7. Latch 7: Latch 7 addresses the analog multiplexer at the output side of the circuit to select the analog data path from the sample and hold circuits to the ADC.

8. Latch 8: Latch 8 controls the various control pins of the ADC. The signals controlled are START, OUTPUT ENABLE, ALE (Address Latch Enable), ADD C, ADD B and ADD A (address pins of the built-in multiplexer).

9. Latch 9: This latch enables and disables the OE (output enable) pins of latches 4 and 5 respectively.

10. Latch 10: This latch serves as the buffer between the computer data bus and the output of the ADC. The binary data from the output of the ADC is latched into this latch and is then read by the computer.
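To show how this latch arrangement might be exercised from the PC, the following Borland C++ fragment sketches one read of an output channel: latch 6 enables the output sample and hold circuits, latch 7 selects the channel routed to the ADC, latch 8 starts a conversion, and latch 10 returns the result. The base address BASE, the control words, and the settling delay are hypothetical placeholders, not the interface's actual port map.

```cpp
#include <dos.h>    // Borland C++ port I/O: outportb(), inportb(), delay()

const unsigned BASE = 0x300;   // hypothetical base port address of the card

// Read the analog voltage sampled on output channel 'ch' (0..31).
unsigned char read_channel(int ch) {
    outportb(BASE + 6, 0x01);     // latch 6: enable output sample & holds
    outportb(BASE + 7, ch);       // latch 7: route channel 'ch' to the ADC
    outportb(BASE + 8, 0x01);     // latch 8: pulse START to begin conversion
    outportb(BASE + 8, 0x00);
    delay(1);                     // wait for the conversion to complete
    return inportb(BASE + 10);    // latch 10: read back the converted byte
}
```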
2.4 Experimental Results of The Three Neuron Hopfield-Type Chip

Numerous experiments were performed to explore the different stable equilibria possessed by the three neuron Hopfield chip. The results are compiled in table 2.1. Different weight matrices (voltages to the gates of the MOS transistors) for excitatory and inhibitory connection strengths were applied to obtain different numbers of stable equilibria. For each set of weight matrices, the steady state convergence was observed for eight different initial conditions ranging from logical 000 to logical 111. The voltages applied to the chip for the weight matrices and the initial conditions were in the range of 0 to 5 volts. As there are only three neurons in the chip, there can be a maximum of eight stable equilibria. The results show that for different sets of weight voltages (to the gates of the MOS transistors) it is possible to obtain a desired number of stable equilibria.

Table 2.1: Results of the 3 neuron Hopfield-type chip. [The table lists, for eight sets of excitatory and inhibitory weight-matrix gate voltages, the steady state reached from each of the eight initial conditions 000 through 111.]

Chapter 3

Dendro-Dendritic Neural Network Architecture

The dendro-dendritic feedback neural architecture proposed in [15] has the same dynamic properties as gradient continuous-time feedback neural nets. This model is motivated by biological neural nets, which have dense interconnectivity between the neurons. The maximum number of interconnections, including the self-feedback of the neurons, is $n(n+1)/2$, where $n$ represents the number of neurons. The symmetry in the synaptic weights is derived naturally, as there is a single element connecting unit $i$ to unit $j$; thus every neuron is connected to every other neuron via a synaptic weight. Each neuron, which is the processing element, is implemented simply by a CMOS double inverter with an nMOS self-feedback transistor [15, 24]. The CMOS double inverter has a well behaved sigmoidal characteristic, which represents the typical predicted behavior of a biological neuron. The connecting weights were implemented simply by using nMOS transistors. The strength of the interconnections can be adjusted by varying the gate voltages of the nMOS transistors.

3.1 50 Neuron CMOS VLSI Chip With Digital Learning

A 50 neuron chip based on the dendro-dendritic architecture with digital learning has been successfully implemented [8, 23]. The interconnecting weights can either be programmed by hardware learning or be assigned externally from the computer to the chip. A synaptic connection can have a strength of either logical high or low, and these logical levels can be assigned any analog voltage values from the external pins provided on the chip. This neuron chip has 1225 programmable synaptic weights, which can be stored in on-chip flip-flops. An interface circuit on a PCB (printed circuit board) has been designed to realize the communication between the chip and the computer.
The layout of the board is shown in figure 3.1. This PCB can be inserted into any IBM compatible PC. It has a separate port selection circuit to avoid any conflict with the other processes performed by the computer. Industrial standards were followed while designing the PCB.

Figure 3.1: Layout of PCB for the 50 neuron interface circuit.

3.2 Transient Analysis of The 50 Neuron Chip

Various experiments were conducted to analyze the transient response of the 50 neuron dendro-dendritic architecture neural chip. Different quantities such as rise time, decay time, settling time, delay time and percent maximum overshoot were extracted from the transient response of the chip. These transients were observed on an HP 1631A logic analyzer monitor. The logic analyzer was operated in the analog mode with the external trigger ON for initializing the period of the transient response to be measured.

The experimental setup simply consists of the logic analyzer operated in the above mentioned mode and a few changes in the software to apply different kinds of initial conditions to the chip. Initially a pattern was learned, and then the initial condition for testing the steady state convergence was applied, such that the initial condition differed from the desired state by a Hamming distance of one. The transient response of the neuron starting from this testing initial condition was observed on the monitor of the HP 1631A logic analyzer. The plots for the different transient analysis experiments are shown in figure 3.2. The various timings and their definitions [28] are discussed below.

Delay time: The delay time is the time required for the step response to reach 50 percent of its final value. The delay time from the transient response is 23.5 ns.

Rise time: The rise time is the time required for the step response to rise from 10 percent to 90 percent of its final value. The rise time from figure 3.2 is 69.5 ns.

Decay time: The decay time is the time required for the step response to decay from 90 to 10 percent of its final value. The decay time from the transient plots is 51.5 ns.

Settling time: The settling time is the time required for the step response to decrease to and stay within a specified percentage of its final value. The figure used in these experiments is 5 percent. The settling time from the transient analysis is 194.5 ns.

Percent maximum overshoot: The maximum overshoot is the largest deviation of the output over the step input during the transient state. The maximum overshoot is often represented as a percentage of the final value of the step response. It can be analytically represented as

$$\text{percent maximum overshoot} = \frac{\text{maximum overshoot}}{\text{final value}} \times 100\%.$$

The percent maximum overshoot for the 50 neuron chip is 13.89%.
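The timing figures above were read from the analyzer screen, but the same definitions apply directly to sampled data. The following C++ sketch extracts the delay time and rise time from a uniformly sampled step response; the sample values and the 10 ns sample period are illustrative stand-ins, not captured chip data.

```cpp
#include <cstdio>
#include <vector>

// Extract delay time (time to 50% of final value) and rise time
// (time from 10% to 90% of final value) from a sampled step response.
int main() {
    // Hypothetical samples of a rising step response, in volts.
    std::vector<double> v = {0.0, 0.3, 0.9, 1.9, 3.1, 4.0, 4.5, 4.8, 4.9, 5.0};
    const double dt = 10.0;                 // assumed sample period in ns
    const double final_value = v.back();

    double t10 = -1, t50 = -1, t90 = -1;
    for (std::size_t k = 0; k < v.size(); ++k) {
        double t = k * dt;
        if (t10 < 0 && v[k] >= 0.1 * final_value) t10 = t;
        if (t50 < 0 && v[k] >= 0.5 * final_value) t50 = t;
        if (t90 < 0 && v[k] >= 0.9 * final_value) t90 = t;
    }
    std::printf("delay time = %.1f ns\n", t50);        // first 50% crossing
    std::printf("rise time  = %.1f ns\n", t90 - t10);  // 10% to 90% interval
    return 0;
}
```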
Figure 3.2: Transients for the 50 neuron chip. [Three logic analyzer traces captured at 50.00 ns/div.]

3.3 A Real-Time Application Using The 50 Neuron Chip as a Pattern Classifier

A real time application using the 50 neuron chip as a pattern/character associator has already been demonstrated [7]. Pattern classification deals with class/category recognition of patterns, with numerous patterns belonging to different classes; i.e., patterns with the same features are assigned to one class, and there are a number of such classes. This kind of situation arises when a number of correlated patterns (patterns having the same kind of features) are stored in a particular category of a system having a number of different categories. It is not required that the steady state response for some perturbed initial condition be a particular pattern; rather, it is desirable that the output of the system respond by matching the pattern to its particular category/class. This application of the chip becomes more apparent when it is viewed as a mapping from a large number of patterns at the input of the system to a particular class/category (having the same features) at the output of the system.

Different experiments with two and three patterns in each group, with different categories and with varied Hamming distances, were performed. The interface circuitry and the software controlling the chip were adjusted such that, while learning, the patterns were specified with particular categories assigned to them, and while testing, the initial condition was applied only to the neurons to which the patterns were specified, and the output was read from the neurons assigned for classification. In the experiments performed, 40 out of the 50 neurons were used to specify the patterns and the remaining 10 neurons were used to assign the categories for those patterns.

3.3.1 Storing Two Characters in each Category

Two patterns with resembling features were stored in each class. A total of three categories were formed. The characters C and O, V and Y, and I and T were stored in three different categories; thus a total of 6 different characters belonging to three different categories were stored in the neuron chip. The testing of the chip was done by applying patterns with noise of different Hamming distances. The result of this experiment for the character Y is shown in figure 3.3. The neural chip exhibits a good property of character classification. Similar to this experiment, patterns other than characters can also be tested.

Figure 3.3: Results for 2 patterns stored in each class. [Panels show the desired state, initial state, and steady state for a series of test trials.]

3.3.2 Storing Three Characters in each Category

One more pattern with resembling features was added to each class to explore the capability and capacity of the neural chip. The categories and patterns for this experiment were as follows:

Class 1: Characters C, D and O
Class 2: Characters U, V and Y
Class 3: Characters I, J and T

Thus, a total of 9 patterns were stored in the network. The testing of the network for different Hamming distances was done. The results for patterns belonging to all three classes are shown in figure 3.4. In figure 3.4, pattern T belonging to class 3 was distorted up to a Hamming distance of 10. It can be seen from the results that the chip was able to classify the distorted
pattern with 80.49% success. Some of the patterns that were not well classified by the neural chip can hardly be categorized by the human eye. Thus the neuron chip shows an excellent pattern classification property.

Figure 3.4: Results for 3 patterns stored in each class. [Panels show the desired state, initial state, and steady state for distorted test patterns.]

Chapter 4

The Double Square Architecture

4.1 The Need for a New Architecture

As the number of neurons increases, the constraints on the VLSI implementation of the circuitry become more rigid, due to limiting factors such as chip area, routing, number of pins and many others. For the dendro-dendritic architecture with digital learning, the number of interconnections required is $n(n+1)/2$ [23], where $n$ is the number of neurons. Thus if the number of neurons on the chip is increased, the number of interconnections for the dendro-dendritic architecture increases significantly and limits the VLSI implementation of the neural network. Hence a new architecture is required for an efficient VLSI layout with minimal implementation constraints.

As discussed in chapter 3, in the dendro-dendritic architecture every neuron is connected to every other neuron, so all the connections are global. To reduce the number of synaptic connections between the neurons, different kinds of locally symmetric connections can be employed. Local interconnections refer to connections between neurons in the adjacent neighborhood [36].
The 81 neuron chip is enclosed in the standard 108 pin PGA frame. 33 .. V V VAYAYAV ‘1 ~ 141414141414 mummy IIIIIIIII Hamming distance (Complement) l 7 A 37 o 43 o B 28 0 4o 1 c 29 o 42 o D 23 o 43 o a 30 o 44 2 F 33 o 44 2 G 32 o 42 o a 33. 0 4o 0 I 35 1 45 o J 32 o 41 o x 29 o 43 o L 23 o 42 o M 30 o 42 1 N 30 o 42 1 o 24 o 39 o p 29 o 43 0 Q 28 o 38 o a 30 o 42 o s 25 o 41 o r 30 o 45 0 U 25 o 41 o v 32 o 37 o w 28 o 44 . o x 31 o 44 0 Y 37 o 40 o z 31 o 43 2 o 25 o 38 o 1 - 38 o 40 o 2 23 o 44 o 3 25 o - 4o 0 4 33 o 43 0 5 24 o 42 0 6 25 o 42 o 7 4 35 o 41 o 8 25 o 40 o 9 25 o 41 0 Table 4.1: Test results for a single hexagonal network configuration No. 0! Succule- with Hamming distance (Complement) 7 A 40 1 50 0 B 33 O 47 3 C 35 0 46 0 D 34 0 48 3 B 35 0 50 4 F 39 0 ‘ 50 3 G 33 0 47 1 H 39 0 50 0 I 43 3 50 0 1 40 O 49 O K 39 0 50 ’ 0 L 39 O 50 0 M 37 0 48 1 N 37 0 48 1 O 30 0 46 o P 37 0 49 3 Q 35 O 45 o R 38 0 49 3 S 31 0 45 o r 41 0 50 0 U 33 0 48 0 V 37 0 37 0 W 33 0 48 . 0 X 38 0 48 0 ‘ Y 41 0 50 0 Z 37 0 49 4 O 33 O 46 0 1 , 44 3 47 0 2 34 0 50 l 3 31 0 45 0 .4 45 4 47 o 5 31 0 47 3 6 31 0 46 o 7 40 0 49 2 8 29 0 44 0 9 31 0‘ 46 0 Table 4.2: Test results for a single square network configuration 36 No. of Successes with No. of SW with ' Hamming distance (Complement) l 7 A 50 50 50 50 50 B 50 50 3 50 50 C 48 2 2 50 50 D 50 49 O 50 50 B 47 44 3 50 50 F 47 44 3 50 50 G 49 49 2 49 49 H 46 46 1 50 50 I 48 2 2 5O 50 J 48 2 2 50 50 K 47 46 1 50 50 L 48 48 l 50 50 M 48 48 3 5O 50 N 48 48 3 50 50 O 50 50 2 50 50 ~ P 49 48 3 50 50 Q 50 50 2 50 50 R 48 48 3 50 50 S 50 50 2 50 50 1' 47 45 1 50 50 U 48 48 l 50 ‘ 50 V 48 48 1 50 50 W 48 48 1 50 50 X 46 46 1 ' ' 50 50 Y 47 47 1 50 50 Z 48 48 1 50 50 0 50 50 3 50 50 1 - 50 48 48 50 50 2 48 48 0 50 50 3 50 50 0 50 50 4 50 50 50 50 50 5 49 46 3 50 50 6 50 50 46 50 50 7 48 46 0 50 50 8 50 50 3 50 50 9 ‘50 50 3 50 50 Table 4.3: Test results for a double square network configuration 37 4.2 Interface Circuit and Software Environment An interface circuitry and software environment was designed and developed for test- ing the functionality and performance of the fabricated chip. The schematic. of this interface circuitry is depicted in figure 4.2 The interface designed is used to control the input and output of all the 81 neurons. The interconnecting weights and some other logical control signals use to change the mode of operation of the chip are also controlled by this interface circuitry. A port selection circuitry is also introduced in this interface to avoid any bus contention with the computer data or address bus. This port selection circuitry is shown in figure 4.3. The interface circuit is designed by using several 74LS373 octal latches to control the input, output of neurons and also some other control signals for the double square ANN chip. Two 4x16 (74L8154) decoders are used to enable and disable the write and read latches and another 4x16 decoder is used to select the control signal latches, which controls various signals such as feedback, high, low, data, assign, ff—enable, PB and learn/direct. An interactive software in Borland C++ demonstrates the capability of specifying a pattern to be stored or tested via computer key board. After assigning a user specified pattern, this pattern is converted into binary values of 0 or 1. These binary values are then assigned to the latches sequentially by using the I/O subroutines of Borland C++. 
After the data is fed to the input latches, the output enables of these latches are asserted simultaneously, and thus the pattern specified from the keyboard can be learned by the chip. Similar to the learning procedure, testing of the chip with perturbed patterns can be done by changing the mode of the chip from learning to testing.

Figure 4.2: Schematic of the DSQ interface circuit. [The figure shows the port selection circuit, address decoders, control-signal latches, data-write latches, and data-read latches between the PC bus and the neural chip.]

Figure 4.3: Port selection circuitry for the interface. [The figure shows a 74LS688 8-bit magnitude comparator matching address lines A1-A9 against DIP switch settings to enable the data-write, data-read, and control-signal decoders.]

4.3 Testing of Double Square Chip

The double square chip was fabricated by MOSIS. The results from this chip were not comparable to the desired ones, and it was realized later that there were some errors in the MAGIC layout of the chip. Various experiments were conducted before finally concluding that the chip had failed. It was found that only 25 out of the 81 neurons were responding to a given initial condition, and the rest of the neurons gave no response. The connections between the bonding pads and the pad frame were also tested, as these could have been a possible fabrication problem, but it was found that all the connections were well fabricated. Some other troubleshooting experiments were also performed on the chip. Finally, the MAGIC file was examined carefully by Dr. Y. Wang (PhD, MSU 1991), and it was found that the layout of the chip had some connection problems. The modified design file has been sent to MOSIS for fabrication, and the fabricated chip is expected to be at MSU by September 1992.

Chapter 5

Summary and Conclusions

To make neural computation more powerful in terms of speed, efficient hardware design and chip layout are necessary. The extensive parallel processing capabilities of neural chips can play an important role in the success of future high performance, low cost computing machines.

5.1 Summary

In this work, the emphasis is laid on the design and development of a testing environment for CMOS VLSI artificial neural chips. An efficiently designed software and hardware environment adds a positive perspective for testing any chip. Only a well designed testing environment can establish the validity and correctness of the results. The hardware design should be such that every hazard, such as noise in the signals or any kind of loading on the chip under test, is strictly avoided. An optimally designed testing environment can significantly improve the testing speed for the chip.

While designing the 32x32 analog interface, the near-future needs for the testing of analog and digital neural chips were well considered. The sample and hold circuits were used to avoid any loading on the three neuron Hopfield chip, as these circuits have a high input impedance on the order of a teraohm. The use of the synchronous counter and the static RAM helps refresh the capacitors connected to the sample and hold circuits, so there is hardly any fluctuation or decay in the analog signals. This interface also demonstrates the feasibility of refreshing on-chip capacitors. A PCB for the 50 neuron chip interface circuit was designed [33].
This PCB has industrial standard dimensions and can be inserted into any IBM compatible computer. The board serves as an added advantage to the computer, as it enhances the capability of the computer in the area of pattern recognition and other advanced neural applications. The modifications made to the software by Mr. Hwa Joon Oh have increased the versatility of the neural chip's applications.

For the double square architecture, a new circuit with 112 digital channels for read and write was designed. This interface can be used with different digital chips having larger numbers of neurons. A separate port selection circuit, which acts as a buffer between the computer bus and the interface circuit, avoids any damage to the computer.

5.2 Conclusions

As a whole, this work lays the basic groundwork for the design and development of testing environments for CMOS VLSI neural chips. Further advancements in the design of the hardware, such as improvements in the speed of testing, can be employed. Various features can also be added to the hardware and software to test and measure different characteristics of the neural chips. In the future, even small testing circuits can be introduced inside the chip so as to minimize the response time of the neural chip, allowing more efficient interfacing.

Bibliography

[1] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences U.S.A., vol. 79, pp. 2554-2558, April 1982.

[2] J. J. Hopfield, "Neurons with Graded Response Have Collective Computational Properties like those of Two-state Neurons," Proceedings of the National Academy of Sciences U.S.A., vol. 81, pp. 3088-3092, May 1984.

[3] J. J. Hopfield and D. W. Tank, "Neural Computation of Decisions in Optimization Problems," Biological Cybernetics, vol. 52, pp. 141-152, 1985.

[4] D. W. Tank and J. J. Hopfield, "Collective Computation in Neuronlike Circuits," Scientific American, pp. 104-114, December 1987.

[5] J. J. Hopfield, "Artificial Neural Networks," IEEE Circuits and Devices Magazine, pp. 3-10, September 1988.

[6] F. Rosenblatt, Principles of Neurodynamics, New York, Spartan Books (1959).

[7] Y. Wang and F. M. A. Salam, "Experiments Using CMOS Neural Network Chips As Pattern/Character Recognizers," 1991 IEEE International Symposium on Circuits and Systems (ISCAS), Singapore, June 1991.

[8] F. M. A. Salam and Y. Wang, "A Real-Time Experiment Using a 50-Neuron CMOS Analog Silicon Chip with On-Chip Digital Learning," IEEE Transactions on Neural Networks, vol. 2, no. 4, pp. 461-464, July 1991.

[9] C. Mead, Analog VLSI and Neural Systems, Addison-Wesley, 1989.

[10] F. M. A. Salam and Y. Wang, "A Learning Algorithm for Feedback Neural Network Chips," 1991 IEEE International Symposium on Circuits and Systems (ISCAS), Singapore, June 1991.

[11] F. M. A. Salam and M. R. Choi, "An All-MOS Analog Feedforward Neural Circuit With Learning," 1990 IEEE International Symposium on Circuits and Systems (ISCAS), May 1990, pp. 2508-2511.

[12] D. Rumelhart, G. Hinton, and R. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing, vol. 1, eds. D. Rumelhart and J. McClelland, MIT Press.

[13] D. O. Hebb, The Organization of Behavior, Wiley, New York, 1949.

[14] R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, pp. 4-22, April 1987.
[15] F. M. A. Salam, "A Model of Neural Circuits for Programmable VLSI Implementation of the Synaptic Weights for Feedback Neural Nets," 1989 IEEE International Symposium on Circuits and Systems (ISCAS), Portland, Oregon, May 1989, pp. 849-851.

[16] F. M. A. Salam, "New Artificial Neural Models: Basic Theory and Characteristics," 1990 IEEE International Symposium on Circuits and Systems (ISCAS), New Orleans, Louisiana, May 1990, pp. 200-203.

[17] W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, 5, 115-133, 1943.

[18] F. M. A. Salam, N. Khachab, M. Ismail, and Y. Wang, "An Analog MOS Implementation of the Synaptic Weights for Artificial Neural Nets," Analog Integrated Circuits and Signal Processing, an international journal, Kluwer Academic Publishers, vol. 2, 1991.

[19] Y. Wang and F. M. A. Salam, "Design of Neural Network Systems from Custom Analog VLSI Chips," 1990 IEEE International Symposium on Circuits and Systems (ISCAS), New Orleans, Louisiana, May 1990, pp. 240-243.

[20] F. M. A. Salam, Y. Wang, and R. Y. Choi, "On The Analysis and Design of Neural Nets," IEEE Transactions on Circuits and Systems, vol. CAS-38, no. 2, pp. 196-201, February 1991.

[21] F. M. A. Salam and Y. Wang, "Some Properties of Dynamic Feedback Neural Nets," in the session on Neural Networks and Control Systems, the 27th IEEE Conference on Decision and Control, December 1988, pp. 337-342.

[22] F. M. A. Salam, R. Y. Choi, and Y. Wang, "An Analog MOS Implementation of the Synaptic Weights for Feedback/Feedforward Neural Nets," Proc. of the 32nd Midwest Symposium on Circuits and Systems, Champaign, Illinois, August 1989.

[23] Y. Wang and F. M. A. Salam, "VLSI-design and Testing of an Analog Programmable Feedback Neural Circuit," Memorandum No. MSU/EE/S 89/09, Department of Electrical Engineering, Michigan State University, East Lansing, MI 48824-1226, 29 November 1989.

[24] F. M. A. Salam and Y. Wang, "Simulation, Experiment, and VLSI-design of an Analog Programmable Feedback Neural Circuit," Memorandum No. MSU/EE/S 89/02, Department of Electrical Engineering, Michigan State University, East Lansing, MI 48824-1226, 2 February 1989.

[25] M. A. Sivilotti et al., "VLSI Architectures for Implementation of Neural Networks," AIP Conf. Proc. 151, 408, 1986.

[26] A. Agranat, C. Neugebauer, and A. Yariv, "A CCD Based Neural Network Integrated Circuit with 64K Analog Programmable Synapses," the Proceedings of IJCNN, 1990, pp. II-551-555.

[27] F. M. A. Salam and Y. Wang, "Neural Circuits for Programmable Analog MOS VLSI Implementation," Proc. of the 32nd Midwest Symposium on Circuits and Systems, Champaign, Illinois, August 1989.

[28] B. C. Kuo, Automatic Control Systems, Prentice-Hall, 1988.

[29] Anuj Soni, "32x32 Analog I/O Channel Interface," Project Report for EE495, Department of Electrical Engineering, Michigan State University, East Lansing, MI 48824-1226, March 1991.

[30] Judith E. Dayhoff, Neural Network Architectures: An Introduction, Van Nostrand Reinhold, New York, NY, 1990.

[31] John Hertz, Anders Krogh, and Richard G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley.

[32] Tarun Khanna, Foundations of Neural Networks, Addison-Wesley, 1990.

[33] Roger Y. Wang, CMOS VLSI Implementations of a New Feedback Neural Network Architecture, PhD Dissertation, Michigan State University, 1991.

[34] Myung-Ryul Choi, Implementation of Feedforward Artificial Neural Networks with Learning Using Standard CMOS Technology, PhD Dissertation, 1991.
[35] I. K. Sethi and A. K. Jain, Artificial Neural Networks and Statistical Pattern Recognition: Old and New Connections, Elsevier Science Publishers, 1991.

[36] Y. Wang and F. Salam, "Custom Analog VLSI Neural Chips with On-Chip Digital Learning for Pattern/Character Recognition," The Proceedings of the 2nd International Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), Iizuka, Fukuoka, Japan, July 17-22, 1992, pp. 501-504.

[37] Xou, "Experimental Study of Different Network Configurations," A report for the Artificial Neural Network Laboratories, MSU, East Lansing, MI 48824, 1991.