STATE-SPACE DESIGN AND OPTIMIZATION OF LINEAR TIME-INVARIANT SYSTEMS

Thesis for the Degree of Ph. D.
MICHIGAN STATE UNIVERSITY
Charles M. Bacon
1964

This is to certify that the thesis entitled STATE-SPACE DESIGN AND OPTIMIZATION OF LINEAR TIME-INVARIANT SYSTEMS presented by Charles M. Bacon has been accepted towards fulfillment of the requirements for the Ph. D. degree in Electrical Engineering.

Major professor

Date

ABSTRACT

STATE-SPACE DESIGN AND OPTIMIZATION OF LINEAR TIME-INVARIANT SYSTEMS

by Charles M. Bacon

State-space concepts, traditionally applied to the analysis of nonlinear systems, are currently being extended to linear systems as well. For many years, the analysis and design of linear time-invariant systems was accomplished by Laplace transform and Fourier transform methods. However, the increased complexity of modern systems has emphasized the need for more effective design techniques, particularly ones which can be easily implemented on the digital computer. This thesis presents a design technique which is applied directly to the state model

    X'(t) = AX(t) + BE(t),   X(0) = X0
    Y(t)  = CX(t) + DE(t)

Matrix equations, called fundamental design equations, are established which provide necessary and sufficient conditions for the state model to have a specified solution. These equations are written directly from the specified solution and furnish mathematical constraints on the matrices A, B, C, and D. From these equations, the designer can generate a state model having a specified solution. The technique is applicable to vector input-vector output systems under either forced or unforced conditions. Any of the excitation functions traditionally used in s-domain design may be employed.

The relationship between the state model and the s-domain model is used to extend the fundamental design equations to the case where the design specifications are given in terms of a desired transfer function matrix rather than a desired time solution. This extension yields matrix equations in A, B, C and D which provide necessary and sufficient conditions for a state model to be equivalent to a specified transfer function matrix.

The fundamental design equations can be programmed directly on the digital computer. This leads immediately to a computer technique for parameter optimization. The least squared-error criterion is used together with the method of steepest descent to achieve optimization. A computer program implementing this technique is included. Several examples are included which illustrate the design and optimization methods.

STATE-SPACE DESIGN AND OPTIMIZATION OF LINEAR TIME-INVARIANT SYSTEMS

By Charles M. Bacon

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical Engineering

1964

ACKNOWLEDGEMENTS

The author is pleased to acknowledge the guidance and encouragement of his advisor Dr. H. E. Koenig during his entire program. He also wishes to thank Dr. Y. Tokad for his many suggestions during the research leading to this thesis. He is also grateful for the financial support rendered by the Bendix Corporation and the National Science Foundation. Finally, the author acknowledges that this thesis, indeed his entire graduate program, would not have been possible without the patience and understanding of his wife, Jeanne.

TABLE OF CONTENTS

                                                           Page
LIST OF APPENDICES . . . . . . . . . . . . . . . . . . . .  iv

Chapter
   I. INTRODUCTION . . . . . . . . . . . . . . . . . . . .   1
  II. FUNDAMENTAL DESIGN EQUATIONS FROM TIME-DOMAIN
      SPECIFICATIONS . . . . . . . . . . . . . . . . . . .   7
 III. FUNDAMENTAL DESIGN EQUATIONS FROM S-DOMAIN AND
      FREQUENCY-DOMAIN MODELS . . . . . . . . . . . . . . .  33
  IV. PARAMETER OPTIMIZATION . . . . . . . . . . . . . . .  59
   V. CONCLUSION . . . . . . . . . . . . . . . . . . . . .  73

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . 101

LIST OF APPENDICES

Appendix                                                   Page
A. ALTERNATE PROOF OF SUFFICIENT CONDITIONS . . . . . . . .  76
B. TRANSFORM MODELS OF LINEAR TIME-INVARIANT SYSTEMS . . . .  84
C. PROGRAM GRADNN . . . . . . . . . . . . . . . . . . . . .  93

I. INTRODUCTION

For over twenty years, the analysis and design of linear systems have been based on techniques derived from operational mathematics and associated transform methods. See, for example, [1], [2], [3], [4], [5]. During this period, the frequency response characteristic, the root locus plot and the analog computer have evolved as standard tools of the systems designer. Much to the disappointment of the engineer, the sophistication and capability of these tools have not kept pace with the increased complexity of space-age systems. He has found that often the only acceptable system is the optimum system and that the "cut and try" procedures of ten years ago simply do not provide optimum designs. Faced with these problems, the systems designer has turned to the digital computer and the time-domain model in an attempt to develop more effective design techniques.

This thesis is concerned with the application of time-domain models to the design of linear time-invariant systems. The class of physical systems under consideration includes those having performance characteristics (voltages, currents, forces, pressures, displacements, etc.) which can be approximated by the equations

    X'(t) = AX(t) + BE(t),   X(0) = X0          (1.1)
    Y(t)  = CX(t) + DE(t)                        (1.2)

The vector time functions X(t), E(t) and Y(t) have dimension n, r and k, respectively, and the constant real matrices A, B, C and D are appropriately dimensioned.* The time derivative of X(t) is denoted by X'(t) and the initial condition X0 is a constant n-vector.

*Throughout this thesis, upper case letters will denote matrices or vectors and lower case letters will denote scalars. All matrices and vectors are real (or real-valued functions) unless otherwise noted.

Notationally, the equations (1.1) are called the state equations and denote the n linear constant-coefficient ordinary differential equations in first-derivative explicit form with initial condition vector X0. The k linear algebraic equations (1.2) are called the output equations. The composite set (1.1-2) is referred to as the state model of the system and X(t), Y(t) and E(t) as the state vector, output vector and excitation vector, respectively.

By comparison, the most frequently used s-domain model takes the form

    Y(s) = M(s)E(s)                              (1.3)

where Y(s) and E(s) are the Laplace transforms of Y(t) and E(t), respectively, and M(s) is called the transfer function matrix. The transforms Y(s), E(s) and M(s) are complex-valued functions of the complex variable s. Provided the transforms exist, the s-domain model (1.3) is derivable directly from (1.1-2).

The total design problem embraces several separate tasks in which the mathematical model plays a central role. The model and its solution represent, respectively, the physical system and its response. In design, the specification of a desired system response equivalently specifies part or all of the solution of the model.
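Before taking up the design direction, it is useful to fix the analysis direction concretely: given A, B, C, D, an initial state and an excitation, the response of the state model (1.1-2) can be computed from the variation-of-constants formula developed in Section II. The sketch below illustrates this for a hypothetical second-order model with a unit-step excitation; the matrices and numerical values are illustrative only and do not correspond to any system treated in this thesis.

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical second-order state model (illustrative values only).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])
    X0 = np.array([[1.0], [0.0]])

    def output(t, e=lambda u: 1.0, steps=400):
        # X(t) = exp(At) X0 + integral from 0 to t of exp(A(t-u)) B E(u) du,
        # approximated here by a midpoint Riemann sum; Y(t) = C X(t) + D E(t).
        X = expm(A * t) @ X0
        if t > 0.0:
            du = t / steps
            for k in range(steps):
                u = (k + 0.5) * du
                X = X + expm(A * (t - u)) @ B * e(u) * du
        return C @ X + D * e(t)

    for t in (0.0, 0.5, 1.0, 2.0):
        print(t, float(output(t)[0, 0]))

The design problem runs in the opposite direction: the response is given and the matrices of the model are sought.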
To achieve the design, one must select the physical system possessing a model which has the specified solution. The usual order of events is to first derive the mathematical model from the specified solution and then attempt to realize this model in terms of a physical system.

The widespread use of the s-domain model in the design of scalar input-output systems stems from the fact that the form of the time response is directly related to the poles of the transfer function. Thus, given a desired system response, the s-domain model is quickly established. This property has been extended to multivariable (vector input-output) systems and methods are available for deriving the transfer function matrix from design specifications [6], [7], [8]. However, the realization of the s-domain model in terms of a physical system is, in general, quite difficult. Even for scalar input-output systems, only one or two physical parameters can be analytically determined by the root-locus technique to yield a set of specified poles of the transfer function. Except in the network synthesis area, there have been no general analytical techniques advanced for the realization of vector input-output systems.

To offset the lack of general analytical methods, the designer has relied on the analog computer for the design of complex systems. However, the system parameters are found by a purely "cut and try" procedure and thus, it is not possible to achieve true optimal design with respect to many system parameters. In fact, since the word optimization implies an analytical method, it is doubtful if the analog computer will ever become effective in the optimal design of complex systems.

There is a need for an effective analytical method for linear system design which utilizes the capabilities of the digital computer. This thesis provides the first step in satisfying this need. The choice of the state model (1.1-2) over the s-domain model (1.3) as a basis for such a method is easily defended. The state model provides a more precise description of the system properties than does the s-domain model. Also, efficient techniques have been advanced for formulating the state model directly from the physical system without deriving the s-domain model [9], [10], [11], [12], [13]. Additionally, the state model is routinely used in the important areas of Liapunov stability theory and optimal control [14], [15].

In Section II, a set of algebraic equations called fundamental design equations (FDE) are introduced. These equations provide necessary and sufficient constraints on the state model for the model to have a specified solution. The FDE allow the designer to derive the state model from the specified solution. These conditions are generally applicable to both forced and unforced systems with the excitation functions assuming any of the usual forms: step, ramp, sinusoid or any linear combination thereof. An additional set of necessary conditions are given which are easier to apply than the FDE. These can be used to quickly eliminate some state models which do not have the specified solution.

Kalman, Gilbert and others recently listed procedures for deriving the state model from a specified transfer function matrix M(s) [16], [17], [18]. The procedures are limited in two ways: (a) the poles of the entries of M(s) are assumed to be distinct and (b) the matrices A, B, C and D take only restricted forms, e.g., A is always diagonal and may have complex entries; a brief sketch of a realization of this restricted kind is given below.
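To make the restricted form concrete, the following sketch constructs a diagonal (modal) realization of a hypothetical scalar transfer function with distinct real poles by partial-fraction expansion; with complex poles the same construction produces complex entries in A. The numbers are illustrative only and are not drawn from the procedures of [16], [17], [18] themselves.

    import numpy as np

    # Hypothetical strictly proper transfer function with distinct poles:
    #   M(s) = (3s + 5) / ((s + 1)(s + 2)) = 2/(s + 1) + 1/(s + 2)
    poles = np.array([-1.0, -2.0])
    residues = np.array([2.0, 1.0])

    # Modal realization: A diagonal, B a column of ones, C the row of residues.
    A = np.diag(poles)
    B = np.ones((2, 1))
    C = residues.reshape(1, 2)
    D = np.zeros((1, 1))

    # Check C (sU - A)^(-1) B + D against M(s) at a test point.
    s = 1.7
    M_state = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
    M_given = (3.0 * s + 5.0) / ((s + 1.0) * (s + 2.0))
    print(float(M_state), M_given)   # the two values agree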
Such forms do not normally occur when the state model is formulated directly from the phys- ical System. Thus, some unknown transformation must be applied to the state variables before these procedures can be used in design. In Section III, an extended set of fundamental de— sign equations are shown to be necessary and sufficient for a state model to yield a Specified transfer function matrix. These equations are not restricted by the multiplicity of the poles of the entries in M(s). Also, completely general forms of the matrices A, B, C and D in the state model can be derived. An analytic technique for parameter optimization is proposed in Section IV. The method uses the digital comput- er and the numerical method of ”steepest descent” to find the set of parameters yielding an ”optimum” solution to the fundamental design equations. The ”optimum” solution is defined by the least squared-error criterion. This tech- nique represents the first step in applying the digital com— puter to optimal design via the state model. Three appendices provide supplementary material in- cluding the computer program used in Section IV. II. FUNDAMENTAL DESIGN EQUATIONS FROM ' TIME-DOMAIN SPECIFICATIONS The theory regarding the existence and uniqueness of the solution to (1.1) is well established [19], [20]. If the r—vector E(t) is continuous for all teiT, T = {thfitftl}, t1 a non—zero constant, and the initial state X0 is finite, then (1.1) has the unique state solution Atx + eA(t'“)BE(u)du (2.1) X(t) = e o 0 At . . . . . where e = (t) is the n x n matrix function satisfying the homogeneous matrix system (130:) = Adet) _, (13(0) = Un (2.2) and Un is the n-dimensional unit matrix. Using (2.1) in (1.2), the output solution of the state model for t' >8 2 V >2 8 V II >8 0 (3.16) C712/,(t)=cc)((t) provided such an equation set exists, are that AGi = GiS (3.17) GiFo = Xoi i=l,2,...,r (3.18) CGi = Ni (3.19) where X . is the ith column of iX’ . The existence of 01 0 (3.16) depends upon the existence of the matrices A and C satisfying (3.17) and (3.19). Proof The matrix differential equation system (3.16) is equivalent to the set of r vector differential equation systems Xi(t) AXi(t)7 Xi(0) : XOi (3.20) Yi(t) CXi(t) 39 i=l,2,...,r, where Xi(t) and Yi(t) are, reSpectively, the ith columns of4§<(t) and<%L(t). From (3.14) and (3.15), we have >< /'\ (.1. v H GiF(t) (3 21) Y (t) = NiF(t) (3.22) Applying Lemma 3.1 successively to each of the 3 vector equation systems (3.20) and their corresponding proposed solutions (3.21) and (3.22), the conclusion follows. Let the s—domain model (3.1) be given and assume the system to be found is characterized by a state model of the form (1.1—2) with XO=O. The structure of the physical sys— tem is implicit in the matrices A, B, C and D and it is proposed that constraints on these matrices be found which are necessary and sufficient for the state model to be "equivalent” to the Specified s—domain model. The term "equivalent” is defined as follows: Definition 3.1 The state model 5((t) Axm + BE(t), X(0) = o CX(t) + DE(t) Y(t) is said to be equivalent to the s—domain model 7(5) = M(5)E(S) if and only if 4O M(s) = C(sU—A)_lB + D It can be shown that there exists at least one equivalent state model of finite order if M(s) satisfies the following condition [16]: Every entry of M(s) is a rational function in 5 having the degree of the denominator finite and not less than the numerator. 
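Definition 3.1 also suggests a direct numerical check of equivalence: evaluate C(sU - A)^(-1)B + D at several test values of s away from the poles and compare it entry by entry with the specified M(s). Since both sides are rational functions of bounded degree, agreement at sufficiently many points implies agreement everywhere. The sketch below carries this out for a hypothetical single-input, single-output model; the matrices are illustrative only and are not taken from the examples of this thesis.

    import numpy as np

    # Hypothetical state model in companion form.
    A = np.array([[0.0, 1.0], [-6.0, -5.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 2.0]])
    D = np.array([[0.0]])

    def M_specified(s):
        # Candidate transfer function matrix, every entry rational and proper:
        #   M(s) = (2s + 1) / (s**2 + 5s + 6)
        return np.array([[(2.0 * s + 1.0) / (s ** 2 + 5.0 * s + 6.0)]])

    def M_state_model(s):
        # Definition 3.1:  M(s) = C (sU - A)^(-1) B + D
        n = A.shape[0]
        return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

    for s in (0.5, 1.0 + 1.0j, 3.0, 10.0):
        print(s, np.allclose(M_state_model(s), M_specified(s)))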
Theorem 3.1 Hypothesis: 1) T = {t‘tzO} 2) The s—domain model Y(s) = M(S)E(s) (3.23) is given with M(s) = F(s) + R (3.24) of order k>} [N1F(t), N2F(t),...,NrF(t)] (3.25) for all téET where Ni is a k x q matrix, i=l,2,...,r, and F(t) satisfies, for all tETF F(t) = SF(t), F(O) = F (3.28) 41 for some q x q matrix S. Conclusion: Sufficient conditions for the n-th order state model )((t) AX(t) + BE(t), x = o (3.27) Y(t) CX(t) + DE(t) (3.28) to be equivalent to the S—domain model (3.23) are that there exist n x q matrices Gi, i=1,2,...,r, such that AGi = GiS (3.29) ’ GiFO = Bi (3.30) CGi 2 Ni (3.31) D = R (3.32) where Bi is the ith column of B. Conditions (3.29—32) are also necessary for the stated equivalence if the qnvector F(t) is a basis vector which Spans the function Space of the At matrix function e B, i.e., if there exist n x q matrices Gi, i=1,2,...,r, such that eAtB = [G1F(t), G2F(t),...,GrF(t)] Sufficiency: Assume that there exist matrices Gi’ i=1,2,...,r, such that (3.29-32) are satisfied. Solving the state model (3.27—28) by Laplace transforms we obtain the transfer function matrix — _ . —l MO(S) — C(SU—A) B + D (3.33) By Definition 3.1, it must be shown that MO(S) is identical to E(s) defined by (3.24-25). Since D=R by (3.32),it only remains to be shown that N(s) is identical to C(sU-A)_1B. The inverse Laplace transform of N(s) is defined by (3.25) . ~l . and the inverse Laplace transform of C(sU-A) B is .C‘l{C(sU—A)“113} -—- CeAtB (3.34) as shown in Appendix B. Define you = CeAtB (3.35) and note that this is the output solution of the matrix system (3.36) ll >> >3 2 V >§ O v H DU 0y... ya) By Lemma 3.2, it follows that (3.29—31) are sufficient for (3.37) H 0 >3 A (.1. V 09L(t) to have the form ‘@;(t) = [N1F(t), N2F(t) ...NrF(t)] (3.38) 43 when Bi is identified with Xoi° Comparing (3.35) and (3.38) and utilizing the uniqueness of the Laplace transform, we have N(s) = C(sU-A)‘1B and sufficiency is Shown. Necessity: Assume the state model (3.27—28) is equivalent to the s—domain model (3 23). Then by Definition 3.1, M(s) N(s) + R (3.39) C(sU—A)—1B + D (3.40) Equating corresponding powers of s on the right hand sides of (3.39) and (3.40) gives ll C(sU~A)_lB N(s) (3.41) D = R (3.42) and thus (3.32) is necessary. Taking the inverse Laplace transform of both sides of (3 41) gives, by virtue of (3.25), At Ce B = [N1F(t), N2F(t),...,NrF(t)] (3.43) . . At . The k x r matrix function Ce B can be written as CeAtB = COXU) gym.) (3.44) 44 wherec><(t) and.i%(t) are, reSpectively, the state and output solutions to the matrix system <>)((t) AC><(t), (7((0) = B yd) co)<, v2(t)]T, respectively. 50 The relation (3.58), together with“M(s) given by (3.57) defines a desired S-domain model of the network. To determine the element values which yield this desired model, a state model of the network is to be found which is equiv— alent to the s-domain model in the sense of Definition 3.1. The state model of the network with excitation vector V(t) and output vector I(t) is easily derived. The result is T ‘ I‘ ‘ ”’ ‘“ _ ‘ T‘ “ v3(t) 0 1/c3 1/c3 0 v3(t) 0 0 v1(t) d i4(t) —1/14 0 0 0 i4(t) 1/L4 0 v2(t) —_. : + dt . . 
15(t) ~1/L5 0 0 0 15(t) 1/L5 -l/L5 ' (t 0 0 0 0 7 (t 1 L —1 L (3.59) il(t) FD l l l Fy3(t) 12(t) 0 O —1 —1J i4(t) 15(t) i (t) _ 6 3 Utilizing the notation of Theorem 3.1 as it applies to the S—domain model (3.57-58) it follows that R=O and therefore, 51 S '1' 7O _2 4/7 S _4 38/7 Taking the inverse Laplace transform of N(S) yields where F _ sin V70 t F(t) = cos V70 t (3.60) l J and 0 7 4 0 —2 —4 N = N = . 1 2 (3 81) 0 —2 —4 , 0 4/7 38/7 Taking the derivative of F(t) defines the system _ —7 sin ‘V70 t 0 \470 0 rsin \W70 t -——— cos V70 t = —V.O O 0 C05 70 t (3.62) 52 F(O) = F = l (3.63) Thus, by Definition 2.2, F(t) is a basis vector with basis system (3.62-63). The state model (3.59) is equivalent to the Spec- ified s—domain model (3.58) if the fundamental design equa- tions of Theorem 3.1 are satisfied for some realizable values of C3, L4, L5 and L6° Note that since F(t) is a At and 3-vector, it may not Span the function Space of e thus, we cannot conclude that the FDE provide necessary con- ditions for the equivalence of the state model and the 5— domain model. However, the reduced necessary conditions can be applied. Thus, from (3.48-49), the state model is equiv— alent to the s—domain model only if the matrices A, B, C and D satisfy CAB. = NiSF i=l,2 (3.64) D = R (3.85) Condition (3.65) is satisfied. Also from (3 61—63) 53 Similarly, Therefore, if (3.64) is to be satisfied, the matrix product CAB must vanish. From (3.59) we have r— ._ ‘— a—q CAB = 0 1 1 1 0 1/03 1/c3 0 0 0 0 0 -1 —1 -1/L4 . 4 —l O l L -l L ”‘50 O /5 /5 .—. 0 Thus, the state model satisfies the reduced necessary con- ditions (3.64-65). Since these conditions are satisfied regardless of the choice of C3, L4, L5 and L6, a general necessary condition on M(s) is established. That is, every L—C network having the topology of Figure 3-1 will have an admittance matrix M(s) only if M(s) yields matrices R, N1, N , S and F such that R=0 and N.SF = 0, i=1,2. 2 O 1 o Proceeding now to the formulation of the FDE, the augmented form (3.56), written for i=2, gives 54 (3.66) O F F 0 R2 N1 N2 R1 in detail or, _ no 15 ----.8 ..... 4-8. 1 1 _ n O l _ O . O l _l O llllllllll LIvIIll]| _ lllllllllllllllllllll _ IJIIIII] r0 r0. 7 l 4 _ _ 0 O 0 _ 4 // g g _ _ _ _ 8 . _ . _ 3 5 % . _ m 0 no _ 2 7 l _ 8 8 _ O O _ _ _ // _ _ _ 4 4 4. _ O l 4_ O 7 O _ O O Ilgw ]]]]] g _ _ — _ 3 ‘I'.3.|_II ||||||||||||||| _ ''''''''''' L'-nl|u-|l' l 4 _ O O _ _ 4 4 g g _ _ _ _ _ _O . 2 2. W O _ O _ 7 A/m l 4.. O _ g g. .0 _ _ l l. W _ _ l 4 _ . _ _ no no _ 0 _ O 0 _ F _ . _ E ,3 ,0 n A _ O M: L L / /" l O O O l l _ O O uuuuuuuuuuuuuuuu _ _ _ r0 r0 _ l 4 A. 5. L% _ no no L L 0 // // // _ no 0 15 5 l l l . l 4 O ................. 4111111- 00 g 0 0 O O _ l l _ _ 4 4 l 4 3 . no C _ III lll.||.l.a.l|z I‘lllll ,/, 0 0 0 _ l l 3 3 l _ _ l 4 _ g g 3 _ C _ 2 2 / O O O _ l O l 4 O l _ ab 00 L4 L5 ” l M l / / . _g g 0 l l O _ O O _ _ _ (3.67) 55 The nonlinear algebraic system (3.67) is solved, subject to the constraint that the network parameters C3, L4, L5 and L6 be positive. Although the system represents 44 nontrivial equations in 28 unknowns, the solution is quite easy to achieve by elimination. This is due to two factors: First, the equations representing CGi = Ni, i=l.2, are independent of the network parameters and thus allow an immediate elimination of 12 unknown entries in G1 and G2. Second, the degree of nonlinearity exhibited by the system is low since only second order cross-products of unknowns appear. 
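A convenient way to organize such a computation is to solve the linear conditions for the unknown entries of each G_i in the least-squares sense and then inspect the residuals of the full set of conditions; a residual of zero signals that the trial parameter values satisfy the fundamental design equations. The sketch below states the conditions (3.29)-(3.31) of Theorem 3.1 in this form for a small hypothetical data set; it is an organizational aid only, not a transcription of the network of Figure 3-1.

    import numpy as np

    def fde_residual(A, B, C, S, F0, N):
        # For each column B_i of B, solve the linear conditions on vec(G_i),
        #   (U kron A - S' kron U) vec(G_i) = 0          ...  A G_i  = G_i S
        #   (F0' kron U) vec(G_i)           = B_i        ...  G_i F0 = B_i
        #   (U kron C) vec(G_i)             = vec(N_i)   ...  C G_i  = N_i
        # in the least-squares sense and report the worst residual.
        n, q = A.shape[0], S.shape[0]
        worst = 0.0
        for i in range(B.shape[1]):
            M1 = np.kron(np.eye(q), A) - np.kron(S.T, np.eye(n))
            M2 = np.kron(F0.reshape(1, q), np.eye(n))
            M3 = np.kron(np.eye(q), C)
            M = np.vstack([M1, M2, M3])
            rhs = np.concatenate([np.zeros(n * q), B[:, i], N[i].flatten(order="F")])
            g, *_ = np.linalg.lstsq(M, rhs, rcond=None)
            worst = max(worst, np.linalg.norm(M @ g - rhs, ord=np.inf))
        return worst

    # Small hypothetical data set (2 states, 1 input, 1 output, q = 2).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    S = np.array([[-1.0, 0.0], [0.0, -2.0]])
    F0 = np.array([1.0, 1.0])
    N = [np.array([[0.5, -0.5]])]
    print(fde_residual(A, B, C, S, F0, N))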
The details of the solution are not included but it can be verified that (3.67) is satisfied with c = 0.1 3 L4 = 0.2 (3.88) L = 0.5 5 L6 = 0.25 and I 7 0 5 0i 0 -10/7 10/7 [@1502]: : (3.89) 0 2 0; 0 -4/7 -10/7 I 0 0 4: 0 0 -4 .3 Thus, the network having the Specified admittance matrix (3.57) is shown in Figure 3.1 with element values (3.68). The significance of the matrices G and G2 given in (3.69) l is apparent from the conditions i=1,2 (3.70) F :B. which are satisfied by matrices A and B of the state equa— tions. However, it follows from Lemma 3.2 that the condi- tions (3.70) are necessary and sufficient for the matrix system CX(t) = 430:), 3((0) = B to have the solution 3((t) = [G1F(t),G2F(t)] Therefore, G1 and G2 define the state reSponses of the net- work when the initial states are B and B l 2, respectively, with V(t) = O. 57 Although this example is based on a Simple L-C net- work, the technique used can be extrapolated to the more difficult problem of realizing a network of unknown topology. The only existing technique for establishing equations which relate a general R-L-C network to a Specified transfer function matrix E(s) is to actually derive M(s) from the network while holding the parameters in literal form. Equat- ing the derived form and the specified M(S) establishes non— linear algebraic equations in the network parameters. This system has fewer equations than a corresponding set of FDE but the degree of nonlinearity is greater. One cannot conclude that the FDE are easier to solve but there is no doubt that the FDE are established with considerably less effort for a variety of candidate networks. The specified transfer function matrix M(s) is inverted only once to yield the time-domain quantities F(t) and Ni’ i=1,2,...,r. Then, for each trial network, the state model is formulated with all network parameters in literal form. For an arbitrary network, the state model can always be established with less manipulation than can the corresponding s—domain model. The reduced necessary conditions can be applied to eliminate those networks which cannot yield the given transfer function matrix. This test may also provide necessary values of some network parameters or necessary relationships between the parameters. 58 The parameter values can only be determined by solv— ing the FDE, provided a solution exists. Even when no exact solution exists, there is practical significance in finding the set of parameter values which yield the “best” solution in some Sense. This phase of the design problem is con~ sidered in Section IV. IV. PARAMETER OPTIMIZATION The fundamental design equations in Section II and III define constraints on the State model (1.1-2) which, if satisfied, assure the designer that the state model will have a Specified solution. The desired solution can be ex~ pressed in the time domain or in terms of an equivalent transform model. To be of practical use, the fundamental design equa— tions must help the designer select a physical system having a desired response. This means finding a System which sat- isfies the FDE or, more exactly, finding a system having a state model which satisfies the FDE. Often, the physical realizability of a system will not allow the FDE to be exactly satisfied. Thus, a frequent- ly posed problem in system design is the following: Let a physical system have fixed topology but several unSpecified scalar parameters which are available for variation. 
It is desired to find the values of these parameters which provide the "best” design in terms of some desired system reSponse. To be more precise, let a physical system be com— pletely determined except for m real scalar parameters pl,p2,...,pm which are available for variation in the design 59 6O problem. Let every parameter pi satisfy pi‘ : pi 5 pi“ (4.1) for some choice of finite constants pi‘2 and pi”. This choice is dictated by the physical realizability of the pa- rameter. T Define the m-vector’? = [p1,p2,...,pm] as the parameter vector which ranges over a subset 77 of the real m—dimensional Euclidean vector Space. The set if is defined as the set of all m-tuples (pl,p2,...,pm) such that pi sat- isfies (4.1), i=1,2,...,m. The set if is called the realiz— able parameter set and is a compact set by virtue of being closed and bounded [22]. Any mathematical model of the system will depend on the parameter vector 7?. Assume the System has a state model for all 77 6.77 of the form 5((t) A(’F)X(t) + B(’P)E(t), X(0) = XOW) (4.2) Y(t) C(T’)X(t) + D(7’)E(t) (4 3) where the matrices A, B, C, D and X0 are functions of the vector 19 . For example, A(7>) denotes a matrix function with typical entry aij(’P) : a.ij(pl,p2’ooo,pm) 1"]:1’27°~'~7n where aij('P) is assumed to be a nonlinear continuous function 61 of its arguments. In general, we assume that all entries of A(7>), B(77), C(77), D(T’) and XO(T’) are continuous func- tions of 7’ for all?>6 77. In fact, the realizable parame— ter set can always be suitably restricted to assure this con- dition. Let the design Specifications on the system reSponse be given either in the time domain or in terms of an equiv- alent s—domain or frequency domain model. From the results of the previous sections, these specifications lead to a set of fundamental design equations which can be written in the augmented matrix form AGl = 325’ (4.4) Note that all matrix functions of 79 appearing in the state model (4.2—3) are contained in the augmented matrix 0%(73). Define the h—vector C; which contains all unknown entries of 631 and (32. Let 6; range over a compact subset [7 of the real h—dimensional Euclidean Space. There is no loss of generality in requiring the compactness of [9 since the phys_ ical properties of the system also require matrices C91 and (3-2 with bounded entries. The parameter design problem now takes the following form: Find the vectors 7706-Z7 and (3665]1 such that (4.4) is satisfied; if not exactly, then in some ”optimum” sense. Let the matrix products 0R(77)(31 and (32.§ have dimension r x q and define the r x q matrix 62 05 .= gamma, - @2 5 (4.5) Consider the scalar function r 9 i=1 j=l where zij is the typical entry ofiZZ The domain of u is €9='firx [1, the cartesian product of the two compact sets TT and [7 and is therefore compact [23]. The range of u is the non-negative real axis. Definition 4.1 The optimum parameter vector 730 is defined as that vector 736 77 which, together with some vector G06 P, minimizes the function u(’P,G—) over all vectors 7D :3 7T and G 6!". That is, M790, 5-0) = min u(7‘3,@) ‘PeTT @eaF (4.7) The state model (4.2—3) with. F): 790 is called the optimum state model and the corresponding physical system the optimum System. The compactness of the set €9assures the designer that there always exists at least one pair of vectors 736;”, 63 and G 6 F for which u('l° ,G) takes a minimum value. This assertion, not proved here, follows from the continuity of u(7’ ,G) on the compact set 6 [23]. 
It follows from Definition 4.1, together with the properties of the fundamental design equations, that if u(790,C30) = 0, then the optimum parameter vector yields an exact design. If, on the other hand, the optimum parameter vector 770 yields u(‘PO,(3O) = K > 0, then, exact design is not achieved and an important question arises. How “close” is the response of the Optimum system to the Specified re- Sponse? Under these conditions, the r x q matrix defined by (4.5) can be considered as an error matrix but the relation- ship between this matrix and the corresponding timeadomain error in the resulting solution has not been established. However, this need not be considered as a great disadvantage Since the widely-used meansquaredwerror criteria is not di- rectly related to a time-domain error expression. That is, a given value of the meansquared~error.for a particular sys- tem response does not disclose the distribution of the error over the time interval. As a practical alternative, the solution to the optimum state model can always be obtained and compared with the desired solution. If the difference in the two solutions is unacceptable, the designer is at least assured that the proposed physical System must be altered topologically in order to achieve the desired response by this method. This 64 information, although negative in character, is still useful in practical design problems. Also, such conclusions cannot be drawn immediately from parameter design techniques based on the analog computer. Except in simple cases, the most efficient procedure for accomplishing the minimization of the function u(79,é;) is by numerical techniques. The FDE (4.4) can be programmed directly on the digital computer which then forms the func— tion u(7D,C?) and performs the minimization. Appendix C lists the FORTRAN program GRADNN which accepts equations of the form (4.4) and uses the method of steepest descent to find vectors 730 and Go minimizing u(’P ,(3). Example 4.1 illustrates the parameter optimization technique and the use of the program GRADNN. Example 4.1 A physical system, with fixed structure except for three real parameters pl, p2 and p3, has the state model 2 , _- [i ._ xl(t) 30p3 + p2 - 2 13.75 ~40pl xl(t) d -- 58 5 2 5 30 (t) —— x2(t) - ' pl 7 p2 X2 ‘ x3(t) 75p2 - pl(10p3+.5) 36.25 ~80 m X3(t)] —p3—6 ~50 el(t) + —18.3 70 82(t) (4.8) —71.4pl —3O 65 _ _ ,_ _ xl(o) 6 X0 = X2(0) = 2 (4.9) 53“”) -6 1 The system has two inputs with excitation "81(t) 10 E(t) H II fl iv o (4.10) e2(t) .SEJ 0, otherwise The parameters pl, p2 and p3 are to be determined so that the state variables xl(t) and x2(t) have solutions specified by F 7 7' 7 7‘ 7' '7 7 7 xl(t) -1 2 3 [.e"20t 4 _2 1 x2(t) = 3 —10 -l e_20tsin5t + 0 2 t (4.11) -20t X3(t) g31 g32 g33_J e COSSfJ g34 g35 _ __ _ L _. 4 Except as to form, the time variation of x3(t) is unspec~ ified. The real constants g3j. j=l,2,...,5, can assume any finite values consistent with the given Specifications on xl(t) and x2(t). The vector functions Fs(t) and Fe(t) are identified by where Fs(t) Fem Fs(t) F (t) e e«.201: e 201:sin5t = e 20tcosSt 1 It. 0 Fs(t) S F (t) , ei._.e .1 0 0 i i I —20 5 : i l —5 -20 I .....____.__..____.__:_ _______ I :0 i O l [1 I FS(O)- Fe(0) _ _ Fs(t) Fe(t) (4.12) (4.13) 67 [1 P- O F 50 = 1 (4.14) F ____ GO 1 O._J By Definition 2.2, (4.12) is a basis vector for t i 0 with basis system (4.13-14). 
In addition, the excitation vector (4.10) has the representation E(t) = HFe(t) (4.15) 10 0 1 — 0 .5 t By Theorem 2.1, the state model (4.8—9) has the solu— tion (4.11) for all t 3 0 if and only if the fundamental de- sign equations are satisfied. From (2.43), these are — — —- —._1 — — _ LA B X9] GS 08 0 = L05 Gel SS 0 F50 0 H 0 0 s F _ e eo 0 0 1 or in detail, 68 l 6 3 p _ 4 3 oo .. p l l . _ 7 _ 2 l p p O O 3 O 4. . oo . 5 . 5 2 5 7 l. 2 . p . 3 <3 /0 l _ a3 ) .3 + 3 p 2 O _ l 2 (x 2 l p p O + 5 _ 3 2 p p O 5 3 7 I(fl O m O . . . 2 5. _ 2 3. O 5 g_ _ _ 4. 4 O 3. O O g_l . llllllllll rlslllltll _ _ 3 l 3_ _ 3. g. _ _ O 2. .2 l 3. O _ g. _ _ _ _ _ _ unfil'filnglfinIUUIuDlH \‘hL-a-du‘.->-vul-l_h‘ —20 O O ] 2 2 g35 3 —1 g33 —10 The FDE of (4.16) exhibit the general form (4.17) 0MP) G1 = 625‘ where the parameter vector 73 is a 3-vector defined as (4.18) 69 Let the realizable parameter bounds be .1 5 pi': 10 , i=1,2,3 (4.19) and let the unknown entries in 631 and ng satisfy |g3j| : 100 , j=l,2,...,5 (4.20) In the language of Definition 4.1, the realizable parameter set'7r is compact and the 5~vector 63-, containing g3j, j=l,2,...5, ranges over a compact set [3 . The design prob— lem reduces to finding the optimum parameter vector IF; é Zr which, together with some vector C§b16[3, minimizes the func- tion z. (4.21) where zij is the typical entry of the 3 x 6 difference matrix “’2 = (Anne, .— 825 (4.22) The minimization of u(7D,C§) is accomplished by using program GRADNN listed in Appendix C. This program requires that initial values 73’ and (3‘ of the vectors‘FDand (3 be assumed. Let 70‘ = l (4.23) 70 and take (33 such that g3j=l, j=l,2,...,5. With these values, the initial value of u(77,C;), as calculated by program GRADNN, is approximately 476,000. Using the initial parame- ter values (4.23), the solution to the state model (4.8-9) is approximately xl(t) 8.00 13 50 _22.71 e‘4906t x2(t) = -1 51 —9.79 -40.00 e-3°2tsin 18.1t x3(t) 15 21 —2.80 -40.40 e—3“2tcos 18.1t L. _ ..... __ __ NJ 21.51 -2.19 l + 49.78 -2.37 _t (4.24) 31.02 --3.03__J L_. The solution xl(t) and x2(t) differ significantly from the Specified solutions given in (4.11). An optimization of the parameters by program GRADNN yields the optimum parameter vector '1 F'" 0.5008 7% = 1 5003 (4.25) 0.7002 with -0.0010 -1.9979 G; = 5.0007 (4 20) 0.9930 -2.0003 71 and with u(‘72,<3b) = 0.1370; all values being corrected to four decimal places. The optimum state model is obtained by using the optimum parameter vector (4.25) in the state model (4.8—9). The solution of this model is approximately T __ _ _.___ . l xl(t) -l.08 1.53 3.03 e 19°4t x2(t) = 2.93 -9 72 -0.08 e"20°3tsin5.94t _20.3t, H px3(t)J _:0.18 -2.58 5 35.i_f cosS.94tH 4.00 —1.98 1 + _0.04 2.02 _t (4 27) 0.96 -1 97- The solutions for xl(t) and x2(t) correSpond closely to those Specified in (4.11). Using the CDC 3600 computer, the total computation time for the optimization was three minutes and twenty~one seconds with 900 iterations being executed. The solutions (4.24) and (4.27) were also generated on the computer by using a modified version of program GRADNN. This solution technique is discussed in Appendix C. It is difficult to compare the present parameter optimization technique with existing methods because there is essentially only one alternative approach which exhibits comparable generality and this method is not widely used. 72 This approach first requires the derivation of the S-domain model from the system while holding the parameters in literal form. 
Next, the time domain Specifications on the output re— Sponse are transformed into the S—domain to yield a Specified S—domain model. Equating the two s-domain models defines a system of nonlinear algebraic equations which must be solved to yield the optimum parameter set. In general, this system contains fewer equations and unknowns than the correSponding FDE but the degree of nonlinearity is considerably higher. Thus, in general, it cannot be concluded that the S~domain method generates equations which enjoy a more efficient com- puter solution. Also, as pointed out in Section III, the formulation of the s—domain model from a physical system, while holding parameters in literal form, requires consider- ably more algebraic manipulation than the formulation of the corresponding state model. Moreover, the computer cannot be used to generate the S—domain model in literal form so that this additional manipulation represents an inefficient use of the designer’s time. V. CONCLUSION The expanding complexity of modern-day Systems has exposed a basic weakness in the traditional design tech— niques. The transform model and the analog computer, once thought to be adequate design tools, now leave much to be desired. In the search for more effective techniques, many engineers have turned to the digital computer. In retrOSpect, it is clear that too much effort has been devoted to implementing existing design procedures on the computer. Consequently, no new design techniques have evolved which are truly general in application, fundamental- ly different from existing methods and which offer promise in extending the role of the digital computer in System design. This thesis establishes an approach to the design of linear time-invariant systems which is fundamentally different from any technique proposed thus far. Design Specifications, expressed either in the time domain as a de— sired time solution or in the S~domain as a desired transfer function matrix, are used to define fundamental design equa— tions. These equations are algebraic and provide necessary and sufficient conditions for the state model (l.l~2) to satisfy the given design criteria. Both forced and unforced 73 74 Systems are treated with excitation functions assuming any form obtained as the solution to a homogeneous constant— coefficient differential equation of finite order. This includes all non_impulsive inputs normally used in Swdomain design. In addition to providing necessary and sufficient conditions, the fundamental design equations can be solved to yield a state model satisfying the design criteria. When no requirements are placed on the form of the State model, the fundamental design equations yield arbitrary forms of the model including those recently Shown by Kalman and Gilbert. If restrictions are placed on certain entries of the state model, the fundamental design equations incorpo— rate these restrictions directly as algebraic constraints on the unknown entries. The restrictions on the form of the state model can be changed at will and without additional manipulation. Reduced necessary conditions are given which are useful in eliminating those state models which will not sat— isfy the design Specifications. These conditions, together with the fundamental design equations provide a new and interesting approach to the classical problem of network Synthesis. For some time, Optimal design has been merely a con~ cept rather than a practical reality. 
The analog computer yields only ”cut and try“ designs and no acceptable analyt- ical technique for parameter optimization, using the 75 S-domain model, has been proposed. Parameter optimization techniques based on the S-domain model are restricted be— cause of the difficulty in relating the system parameters to the model. On the other hand, the state model iS inher- ently more precise and many system parameters can be carried directly into the model. In Section IV, a parameter optimization technique is proposed which exploits both the descriptive properties of the state model and the iterative capabilities of the digital computer. The technique allows the fundamental design equations to be programmed directly on the computer without further manipulation and the standard numerical method of ”steepest descent” is used to achieve the opti_ mization. A computer program is given which implements this optimization technique. APPENDIX A ALTERNATE PROOF OF SUFFICIENT CONDITIONS This Section contains an alternate Sufficiency proof of Theorem 2.2. The proof rests on the theory of functions of matrices and its application to the solution of linear constant coefficient differential equations [20], [24], [25]. The following lemma is required in the main theorem. Lemma A.l Let the matrix relationship AG = GS (A.1) hold between an m x q matrix G and the Square matrices A and S. Then I! f(A)G Gf(S) (A 2) also holds for any function of the matrices A and S that is representable aS the sum of a convergent matrix power series. Proof Let f(x) be any Scalar function representable as the sum of a convergent power series 00 f(x) = ; knxn (A.3) I120 76 ,..., are Scalar constants. where kn, n=0,l Then 00 HA) = E knAn D:O and 00 f(S) = ann .__l I130 o o . . . where A and S are defined as unit matrices of order m and q,respectively. Since AG = GS, it follows that A(AG) = A(GS) 2 (AG)S = (GS)S or ‘ AZG = GS2 Similarly, A3G : G83 449...; for all n. Thus, knAnG = knGSn for all n and therefore (A.4) (A.5) (A.0) 78 M8 .— :5 :9 :3 C) H 7. :3 Q m :3 or The conclusion follows from (A.4-5). Theorem A.l Hypothesis: 1) T : {t1 0 j t :.t1:}7 t1 a non—zero constant. 2) Fs(t) and Fe(t) are vector functions of order qszunl qe, reSpectively,SatiSfying, l ’1'] Fs(t) Sst(t), Fs(0) — so (A 7) H "rl Fe(t) sepe(t>, Pe(0) .0 (A.8) on T, where SS and Se are Square matrices of order qS and qe, reSpectively and F50 and Feo are constant qs— and qe—vectors, respectively. 3) The r—vector function E(t) is defined on T by E(t) : HFe(t) (A 9) where H is an r x qe matrix. 4) The k—vector function V(t) is defined on T by Y(t) = Nst(t)+NeFe(t) (A.10) 79 where NS and Ne are matrices of order k x qS and k x qe, respectively. Conclusion: Sufficient solution, on T, of i Y(t) are that there exist conditions for Y(t) to be the output AX(t)+BE(t), X(0) = x O CX(t)+DE(t) matrices Gs and Ge such that AG = G S S S S AGe + BH = Gese GSFSO + GGFGO 2 X0 CGS = NS CGe + DH : Ne Proof (A. (A. (A (A. (A. (A. (A. 11) 12) .13) 14) 15) 10) 17) Assume that there exist matrices GS and Ge such that A, B, c and D satisfy (A.13—l7). put solutions of (A.ll—12) are, reSpectively X(t) = e At A(t-u) e t X0 + BE(u)du The general state and out— (A.18) 80 and t Y(t) = CeAtXO+ c eA(t_u)BE(u)du+DE(t) (A 19) It must be Shown that (A.lQ) reduces to (A.lO) by virtue of conditions (A.l3-l7) and the hypothesis. 
It is first Shown that X(t) given by (A.l8) reduces to X(t) = Gst(t)+GeFe(t) (A.20) Substituting (A.9) into (A.l8) and using (A.l4), X(t) takes the form t X(t) = eAtXO + eA(t_u)BHPe(u)du O or t X(t) = eAtxO + eA(t'u)[GeSe — AGe]Fe(u)du 0 Since eA(t—u) = eAte'Au, it follows that t _ X(t) = eAt x0 + e'Aumese _ AGe]Fe(u)du (A.21) ._ O — Applying (A08) and the property that the matrices A and e-Au commute, the integrand in (A.21), reduces to 81 e‘AuIGese - AGeJFe(U) = e"AuGeSeFe(u) - e‘AuAGeFe(u) (A.22) -Au ' _ -Au e GeFe(u) Ae GeFe(u) The commutative property follows directly from the power . . -Au Series representation of e as 2 2 3 3 e.Au ‘ U - Au + A u — A u + ... (A.23) 2: 3' where U is the n_dimenSional unit matrix. This series con- verges uniformly in any finite interval and is continuous in that interval. Also, the differentiated series converges uniformly and it iS seen from (A.23) that d -Au _ d3. This occurs if and only if the eigen— values of A have negative real parts. In this case, the first integral in (B.28) exists and is independent of time. The second integral represents the transient part of the Solution and clearly approaches zero as t-—e-d>. Thus, the steady-state solution Yss(t) is _ _ (I) YSSCt) = CeAXBe'jWde-tD Eweth (B.29) 92 Comparing (B.29) and (B.24), it follows that —— — a) Y = CeAXBe_JWde-+D E w w O __ and therefore from (B.25) M*(w) = N*(w)'+ D where 00 _ Ax ~ij N*(w) — Ce Be dx o 78%) We note that because of their complex representation, the excitation vector and steady—state output vector defined by (B.23-24) do not have physical significance. In prac- tice, we consider only their real parts, which Specify the steady—state oscillations (of radian frequency w) at the input and output of the system. It is precisely this fact, coupled with the definition (B.25), which allows laboratory measurement of the frequency~response matrix M*(w) of a strictly stable system. APPENDIX C PROGRAM GRADNN The computer program GRADNN, described in this appen~ dix, utilizes the method of steepest descent to find the vectors (7% and (36 minimizing the Scalar function r q 1 1 2 u(73,€-) = _>_ Z zij (C.l) i=1 j=l where 2,. is the typical entry of the r x q matrix lJ OZ: (HRS—1 - (:25 (0.2) and the vector 6; contains the unknown entries of (g1 and @2. The method of steepest descent is widely used in the solution of Simultaneous nonlinear algebraic equations [29], [30]. The method may require a large number of iterations but it has a distinct advantage over other methods (Newtonls Method, for example) in that it always converges. Essential— ly, the method finds the vector X = [xl,x2,...,xn]T which yields a minimum value of the function Xl (k) x2 = x200 )1. 2960): > ((3.3) X (k+1) : X (k) _)(k (560619) n n (fixn where the notation i —£Eb£—l evaluated at X = X(k). dXi denotes the partial derivative &@(X(k)) @xg The Scalar constant )\k is obtained as a root of the equation in A : _.)_ _ a (k) 902mm”) &)\ R(A)_d>\@xl _>\ X1 , O O D (k) _ % AQOCU‘O ., n ax. = 0 (04) Geometrically, the recursion formulae (C.32 transfer a given (k) [Xl(k) X2(k) . (k)]T point X = ..,X on the hypersurface n located on the vector defined by the gradient of (E(X) at the point X(k). The root ;\k of (C.4) minimizes (E(X) along this gradient . . k+1 and thus the condition (@(X( )) < (i(X/ L\) l The value ;\2 is obtained by setting to zero the first two terms of the Taylor expansion of (fi(x(k)-+;\x). 
Minimizing the quadratic approximation with respect to )\ yields )(2[R( )(2) - 4R< Al) + 3R( ADM = (0.6) AK 4[R( A2) - 2R( Al) + R( AOH This value is taken as the approximation to the root of (C.4) and used in the recursion formulae (C.3), The major disadvantage in the method of steepest descent is that the method may converge to a relative min— imum of @(X) rather than its absolute minimum. If this occurs, and it is detected, then the procedure must be re_ peated with initial values which are closer to those yielding 96 the absolute minimum. From practical experience with program GRADNN, it appears that some difficulties with relative minima can be avoided by selecting initial values of 73, Cil’ <32 and S5 for which u(7p,(3) = 0. This is always possible Since a given value of '7Destablishes a particular state model with a particular solution. This solution determines correspond- ing values of 61, @2 and § . Starting with these values, (31, GE and. S are Slowly incremented toward the desired values while Simultaneously minimizing u(77,é;). The pro- gram listed below can be easily modified to implement this scheme. Program GRADNN can also be used to generate an approx— imate analytic Solution to the State model (l.l~2). The vec- tor function Fs(t), together with SS and F50, can be written directly from a knowledge of the eigenvalues of the matrix A in the state model. The values Se’ Fe0 and H are determined by the excitation vector. Referring to the FDE of (2.43), all entries are known except for those in G5 and Ge' Letting the vector (g-contain these unknown entries, the function u(73,6;) in (C.l) becomes a function of 6% alone. With only minor alteration, program GRADNN can be used to determine the vector 630 Such that u((§b) = 0. This defines the State solu- tion X(t) of the state model from which the output solution Y(t) can be obtained immediately. 97 To use the program GRADNN listed below, one must pro- vide a set of statements defining the matrix function J¥(7>) in (C.2). The 2—dimensional array A denotes aij(79) while the 3-dimensional array AP corresponds to ‘5 a .(79). These "5‘5; l-J statements must immediately follow statement number 42. In addition, a data deck must be provided as follows: The first data card contains five integers which define re- Spectively, the row and column dimensions of dR('P), the col— umn dimensions of €31 and (32 and the dimension of the pa— rameter vector 77. The format of this card is given by statement 10 while statement 20 gives the format of all re» maining data cards. The next group of cards, KN in number, defines the initial (or known) values of the matrices 631 and 532. The 2-dimensional array G stores these values. The particular program listed below requires that (92 be the leading Sub- matrix of 691 or vice versa. All sets of FDE, with the ex- ception of (2.44), satisfy this requirement and the program is easily modified to accommodate this set also. The next group of N data cards determines the 2m dimensional array GT. This array defines each entry of ($1 and 632 as either a variable in the iteration or a fixed con- stant as follows: GT(i,j) = 1 implies that the gij is to be iterated; GT(i,j) = 0 implies that gij is known and fixed. The known matrix:f§, represented by the 2~dimensional 98 array S is defined by the next Set of NQ cards. The last data card contains the initial values of the parameters which are stored in the l—dimensional array P. 
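For readers who want the iteration itself rather than the card layout, the following compact sketch restates the minimization GRADNN performs: steepest-descent reduction of u(P,G), the sum of squared entries of A(P)G1 - G2 S, over the parameter vector and the free entries of G1. It is written in Python rather than FORTRAN and, for brevity, uses a central-difference gradient and a simple step-halving rule in place of GRADNN's analytic partial derivatives and quadratic-interpolation step length; it is a simplified stand-in, not a transcription of the listing that follows.

    import numpy as np

    def u(P, G1, G2, S, A_of_P):
        # Least squared-error criterion (C.1)-(C.2): sum of squares of A(P) G1 - G2 S.
        Z = A_of_P(P) @ G1 - G2 @ S
        return float(np.sum(Z * Z))

    def steepest_descent(P, G1, G2, S, A_of_P, free_G, iters=500, step=1.0e-2):
        # free_G plays the role of the GT array: True marks entries of G1 to iterate.
        h = 1.0e-6
        for _ in range(iters):
            gP = np.zeros_like(P)
            for k in range(P.size):
                dP = np.zeros_like(P); dP[k] = h
                gP[k] = (u(P + dP, G1, G2, S, A_of_P) - u(P - dP, G1, G2, S, A_of_P)) / (2 * h)
            gG = np.zeros_like(G1)
            for i, j in np.argwhere(free_G):
                dG = np.zeros_like(G1); dG[i, j] = h
                gG[i, j] = (u(P, G1 + dG, G2, S, A_of_P) - u(P, G1 - dG, G2, S, A_of_P)) / (2 * h)
            # Step opposite the gradient; halve the step if u fails to decrease.
            P_new, G_new = P - step * gP, G1 - step * gG
            if u(P_new, G_new, G2, S, A_of_P) < u(P, G1, G2, S, A_of_P):
                P, G1 = P_new, G_new
            else:
                step *= 0.5
            P = np.clip(P, 1.0e-8, None)   # keep parameters positive, as GRADNN does
        return P, G1

    # Tiny hypothetical illustration with one parameter and one free entry of G1.
    A_of_P = lambda P: np.array([[-P[0], 1.0], [0.0, -2.0]])
    S = np.array([[-3.0]])
    G2 = np.array([[1.0], [0.0]])
    G1 = np.array([[1.0], [1.0]])
    free = np.array([[False], [True]])
    P_opt, G_opt = steepest_descent(np.array([1.0]), G1, G2, S, A_of_P, free)
    print(P_opt, G_opt, u(P_opt, G_opt, G2, S, A_of_P))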
The output from the computer includes the initial value Of Q which represents the function u(79,éi) in the pro- gram. Also, the current value of Q, together with the arrays A, P and G are printed after each 100 iterations. The total number of iterations is determined, in hundreds, by the index Of the variable MMM. The program automatically replaces any negative parameter value with the value 10_8. Other bounds, both upper and lower, are easily included just above state— ment 144. The program accepts matrices with dimensions up to 20 x 20 and 20 parameters can be included. For problems Similar to Example 4.1, the program executes approximately 1000 iterations/minute on the CDC 3600 computer. Program GRADNN DIMENSION A(20,20), AP(20,20,20), QP(20), G(20,20), S(20,20), 1 Y(20,20), F(20), QG(20,20), GT(20,20), PS(20), GS(20,20) READ 10, N,KN,KQ,NQ,NP 10 FORMAT (512) D0 15 I=l,KN 15 READ 20, (G(I,J), J=l,KQ) 20 FORMAT (8F10.3) D0 25 I=l,N 25 READ 20, (GT(I,J), J=l,NQ) DO 30 I=1,NQ 30 READ 20, (S(I,J), J=1, KQ) READ 20, (P(J), J=l,NP) 33 FORMAT (1H ,7(814.8,2X) 34 FORMAT (1H ,Ei4.8) 42 45 48 50 51 52 53 54 58 59 55 56 57 60 65 75 80 99 D8 D0 08 JJ 1 MM 0 CONTINUE LIST ARRAYS A(I,J) AND AP(K,I,J) Q = 0. D0 50 I DO 50 J K + 1 40 MMM = 1, 10 10.4820. II II N H F‘H ,N ,KQ 4 8H uuwcoha01o l,KN A(I,K)*G(K,J) O 4 K=l, Q + G(I,K)*S(K,J) Y(I,J) = T - s Q = Q + (T—S)*(T-S) IF(JJ) 52,53,51 IF(Q-QS) 55,59,59 mt307H 2 Q2 = Q GR = .5*GR JJ = 0 GO To 135 GR = DS*(Q2—4.*Q+3.*QS) GR = GR/4.*(Q2-2.*Q+QS) IF(GR) 54,54,58 GR 2 .0001 JJ = 1 GO TO 135 GR = .5*GR GO TO 135 QS=Q JJ 3 —1 D0 56 J=l,NP PS(J) = P(J) D0 57 I=l,KN D0 57 J=1,KQ GS(I,J) = G(I,J) IF(MM) 60,60,75 PRINT 65, Q FORMAT (SHOQ = ,EI4.8) MM = MM + 1 IF(MM—lOO) 80,80,145 CONTINUE QD = 0. Do 105 K=l, NP 8 = 0. D0 92 I=l,N D0 92 J=1,KQ T = 0. 91 92 105 110 115 120 130 135 140 143 144 145 200 205 210 215 220 230 235 240 100 D0 91 KK=1,KN T = T + AP(K,I,KK)*G(KK,J) s = S + T*Y(I,J) QP(K) = 2.*S QD 2 CD + 4.*S*S CONTINUE DO 130 KR = 1,N D0 130 KC = 1,NQ IF (GT(KR,KC)) 110,130,110 S - 0. Do 115 I=1,N S = s + Y(I,KC)*A(I,KR) D0 120 J=l,KQ S = s — Y(KR,J)*S(KC,J) QG(KR,KC) = 2.*s QD = QD + 4.*S*S CONTINUE GR = DS*Q/QD CONTINUE D0 140 I=1,N D0 140 J=l,NQ G(I,J) = GS(I,J) — GR*QG(I,J) D0 144 K=1,NP P(K) = PS(K) — GR*QP(K) IF(P(K)) 143,143,144 F(K) = (10.)**(—8.) CONTINUE GO TO 42 CONTINUE PRINT 200, Q FORMAT (5HOQ = ,EI4.8) PRINT 205 FORMAT (16HOP IS THE VECTOR) D0 210 J=l,NP PRINT 34, P(J) PRINT 215 FORMAT (16HOG IS THE MATRIX) D0 220 I=1,KN PRINT 33, (G(I,J), J=1,KQ) PRINT 230 FORMAT (16 HOA IS THE MATRIX) D0 235 I=1,N PRINT 33, (A(I,J), J=l,KN) CONTINUE END END ADD 1 BLANK CARD PLUS DATA CARDS 10. 11. 12. REFERENCES Gardner, M. F., and Barnes, J. L. Transienfisin Linear Systems. John Wiley and Sons, New York, 1942. Guillemin, E. A. Mathematics of Circuit Analysis. John Wiley and Sons, Inc., New York, 1949. Truxal, J. G. Automatic Feedback Control System Synthesis. McGraw~Hill Book CO., Inc., New York, 1955. Koenig, H. E., and Blackwell, W. A° Electromechanical System Theory. McGraw-Hill Book CO., Inc., 1961. Kuo, B. C. Automatic Control System. PrenticewHall, Inc., 1962. Freeman, H. ”A Synthesis Method for Multipole Control Systems,” AIEE Transactions, Vol. 76, Part II, 1957, pp. 28-31. Kavanaugh, J. J. “Multivariable Control Systems Synthesis,” AIEE Transactions, Vol. 77, Part II, 1958, pp. 425—429. Shipley, P. P. ”A Unified Approach to Synthesis of Linear Systems,” IEEE Transactions on Automatic Control, Vol. AC~8, April, 1963, pp. lI4~120. Baskow, T. 
"The A Matrix, New Network Description," IRE Transactions on Circuit Theory, Vol. CT-4, No. 3, September 1957, pp. 117-119.

10. Desoer, C. A. "Modes in Linear Circuits," IRE Transactions on Circuit Theory, Vol. CT-7, September 1960, pp. 211-223.

11. Koenig, H. E., Tokad, Y., and Kesavan, H. K. Analysis of Discrete Physical Systems. McGraw-Hill Book Co., Inc., New York, in publication.

12. Koenig, H. E., and Tokad, Y. "State Models of Systems of Multi-Terminal Linear Components," IRE International Convention Record, 1964.

13. Wirth, J. L. "Time-Domain Models of Physical Systems and Existence of Solutions," Ph.D. Thesis, Michigan State University, 1962.

14. Kalman, R. E., and Bertram, J. B. "General Synthesis Procedure for Computer Control and Single- and Multi-Loop Linear Systems," AIEE Transactions, Vol. 77, Part II, 1958, pp. 602-9.

15. Kalman, R. E., and Bertram, J. B. "Control System Analysis and Design via the 'Second Method' of Liapunov," Journal of Basic Engineering, ASME Transactions, June 1960, pp. 371-400.

16. Kalman, R. E. "Mathematical Description of Linear Dynamical Systems," Journal of the Society of Industrial Applied Mathematics, Series A: Control, Vol. 1, No. 2, 1963, pp. 152-192.

17. Gilbert, E. G. "Controllability and Observability in Multivariable Control Systems," Journal of the Society of Industrial Applied Mathematics, Series A: Control, Vol. 1, No. 2, 1963, pp. 128-151.

18. Zadeh, L. A., and Desoer, C. A. Linear System Theory. McGraw-Hill Book Company, Inc., New York, 1963.

19. Coddington, E. A., and Levinson, N. Theory of Ordinary Differential Equations. McGraw-Hill Book Company, Inc., New York, 1953.

20. Bellman, R. Stability Theory of Differential Equations. McGraw-Hill Book Company, Inc., New York, 1953.

21. Koenig, H. E., and Tokad, Y. "A Homogeneous Equivalent of Nonhomogeneous Linear State Models and Its Application to the Analysis of Continuous and Discrete-State Systems," (unpublished paper), Michigan State University, 1963.

22. Olmsted, J. M. H. Advanced Calculus. Appleton-Century-Crofts, Inc., New York, 1961.

23. Kuratowski, K. Introduction to Set Theory and Topology. Pergamon Press, Ltd., London, 1962.

24. Gantmacher, F. R. The Theory of Matrices. Vol. 1. Chelsea Publishing Company, New York, 1960.

25. Frame, J. S. "Matrix Functions and Applications," IEEE Spectrum, March-April, 1964.

26. Goldman, S. Transformation Calculus and Electrical Transients. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1949.

27. Aseltine, J. A. Transform Method in Linear System Analysis. McGraw-Hill Book Company, Inc., New York, 1958.

28. Lepage, W. R. Complex Variables and the Laplace Transform for Engineers. McGraw-Hill Book Company, Inc., New York, 1961.

29. Zaguskin, V. L. Handbook of Numerical Methods for the Solution of Algebraic and Transcendental Equations. Pergamon Press, New York, 1961.

30. Booth, A. D. Numerical Methods. Butterworths Scientific Publications, London, 1957.