ABSTRACT

ON THE OPTIMAL SAMPLED-DATA TRACKING PROBLEM

By

Richard Kuang-Tzan Ma

The optimal sampled-data tracking problem is formulated and solved using an efficient computational algorithm. The optimization is performed on the number of samples, on the sampling instants sequence, and on the order of the polynomial approximation to the control law over each sampling interval. This sampled-data control is parameterized by specifying the parameters and order of the polynomial approximation over each sampling interval, the number of samples, and the length of each sampling interval. Comparisons are made of both control performance and sampling efficiency for control laws with different order approximations and with both periodic and optimal aperiodic sampling criteria. These results form a basis for analyzing the performance advantages and costs of using higher order control approximations and an optimal aperiodic sampling criterion.

Sampled-data controllability and observability are defined for the case where both the number of sampling times and the lengths of the sampling intervals are free and considered control variables. The sampled-data system is proved to be observable (controllable) if and only if the continuous-time system is observable (controllable). A sufficient condition on the sampling time sequence is stated which guarantees the preservation of controllability and observability when the continuous measurements and controls are replaced by sampled ones.

The infinite-time sampled-data regulator problem is formulated for the case where both the number of sampling times and the lengths of the sampling intervals are considered control variables. The existence of an optimal closed-loop sampled-data control law is proved for the cases where the number of samples is finite and where it is infinite. Computational algorithms for calculating the optimal control are also proposed.

ON THE OPTIMAL SAMPLED-DATA TRACKING PROBLEM

By

Richard Kuang-Tzan Ma

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical Engineering and Systems Science

1975

To my parents I-Ching and Su-Yuen Ma and my wife Linda Chung-Fan Ma

ACKNOWLEDGEMENT

The author would like to express his sincere and deepest appreciation to his major advisor, Dr. Robert A. Schlueter, not only for his constant guidance and encouragement during the preparation of this thesis but also for the invaluable personality training he gave the author through the years. Gratitude is also expressed to Dr. James S. Frame for his suggestions and help during the period devoted to this work. Thanks are also due to Drs. Gerald Park, Robert Barr and K. Y. Lee for providing the author a profound background in system science through their teaching.

Mrs. Lee Burkhardt of Statistics deserves special acknowledgement for her excellent typing of this thesis and her cooperation. The financial support from the National Science Foundation and the Department of Engineering Research of M.S.U. is also acknowledged.

Finally, but by no means least, the author thanks his parents I-Ching and Su-Yuen Ma and his wife Linda Chung-Fan Ma, whose everlasting love and encouragement made this work possible.

TABLE OF CONTENTS

Chapter
I. INTRODUCTION
II. PROBLEM FORMULATION
III. PROBLEM SOLUTION
IV. COMPUTATIONAL ALGORITHM
V. COMPUTATIONAL RESULTS
   5.1 Introduction
   5.2 Comparison of Control Approximations
   5.3 Comparison of Optimal Aperiodic and Periodic Sampling Criteria
   5.4 Comparisons of OAS and Adaptive Sampling Rules
   5.5 Performance of Different Systems with Different Inputs
VI. SAMPLED-DATA CONTROLLABILITY AND OBSERVABILITY
   6.1 Observability
   6.2 Controllability
   6.3 Sufficient Condition for the Singularity of X_o and X_c
VII. THE INFINITE TIME REGULATOR PROBLEM
   7.1 Problem Formulation
   7.2 Computational Algorithm
VIII. CONCLUSIONS AND FURTHER INVESTIGATION
BIBLIOGRAPHY
APPENDIX

LIST OF TABLES

4-1.  Convergence of Powell and Fletcher-Powell Algorithms
4-2.  Computational Results for the Cases with and without COI Using a Powell Algorithm
5-1.  Control Performance Ratio for Periodic Sampling
5-2.  Information Ratio for Periodic Sampling
5-3.  System Performance Ratio for Periodic Sampling
5-4.  Information Ratio for Periodic Sampling
5-5.  Control Performance Improvement for Aperiodic Sampling
5-6.  System Performance Ratio for Optimal Aperiodic Sampling
5-7.  Control Performance Ratio between OAS and PS
5-8.  System Performance Ratio between OAS and PS
5-9.  Performance of Different Sampling Criteria
5-10. The Optimal Sampling Intervals Sequence in Tracking a Ramp Trajectory
5-11. The Optimal Sampling Intervals Sequence in Tracking a Parabolic Trajectory
5-12. System Performance Ratio for Periodic Sampling
5-13. System Performance Ratio for Optimal Aperiodic Sampling
5-14. Optimal Sampling Intervals Sequence in Tracking a Ramp Trajectory
5-15. System Performance Ratio for Periodic Sampling
5-16. System Performance Ratio for Optimal Aperiodic Sampling
5-17. Optimal Sampling Intervals Sequence in Tracking a Parabolic Trajectory

LIST OF FIGURES

1.  Control Performance for Periodic Sampling
2.  Control Performance for Periodic Sampling
3.  System Performance for Periodic Sampling
4.  System and Control Performance for Optimal Aperiodic Sampling
5.  System and Control Performance for PS and OAS with the Step Control Approximation
6.  System and Control Performance for PS and OAS with the Ramp Control Approximation
7.  System and Control Performance for PS and OAS with the Parabolic Control Approximation
8.  System Performance of PS for the System of Example 2 in Tracking a Ramp Trajectory
9.  System Performance of OAS for the System of Example 2 in Tracking a Ramp Trajectory
10. System Performance of PS for the System of Example 2 in Tracking a Parabolic Trajectory
11. System Performance of OAS for the System of Example 2 in Tracking a Parabolic Trajectory
12. System Performance of PS for the System of Example 3 in Tracking a Ramp Trajectory
13. System Performance of OAS for the System of Example 3 in Tracking a Ramp Trajectory
14. System Performance of PS for the System of Example 3 in Tracking a Parabolic Trajectory
15. System Performance of OAS for the System of Example 3 in Tracking a Parabolic Trajectory

CHAPTER I

INTRODUCTION

Periodic sampling criteria have often been used in industrial control to simplify design and analysis. Aperiodic sampling criteria have become quite practical in both design and control with the introduction of computers. Therefore, numerous aperiodic sampling criteria [13-20] have been studied in an effort to improve the system performance and sampling efficiency relative to a periodic sampling criterion.
Improved control performance with reduced computer memory and communication requirements makes aperiodic sampling criteria particularly useful for numerical control applications.

The optimal sampled-data tracking problem originated from research on the development of optimal programmed control for machine tools [8]. In a computer-aided-manufacturing (CAM) system of the future, a large central computer system will compute and store the programmed control for each part. The programmed control would be stored and then transmitted at the proper time to the mini-computer or controller that monitors and controls a particular machine tool. Immense data storage and communication facilities are required to accurately specify the cutter path for each part and each machine tool. Since a major commitment in computer and communication hardware is required to handle machine tool control, and since the computer and communication system must also handle material handling, scheduling and inventory control, the programmed control for each part should be specified with as little information as possible.

Therefore, the optimal programmed control for a machine tool should be designed not only to produce excellent quality parts but also to minimize the information-handling requirement. Since the control is parameterized by specifying the polynomial approximation over each sampling interval and the length of each sampling interval, this minimization will be accomplished by selecting both the best control approximation parameters on each sampling interval and the optimal sampling intervals sequence. The additional flexibility provided by selecting the order of the polynomial approximation in each sampling interval, together with the flexibility of selecting the length of each sampling interval and the number of sampling intervals, promises to permit a great reduction in the data transmission and storage required to obtain a particular tolerance level and surface finish quality.

This optimal sampled-data control problem was first formulated [6, 7] in an effort to obtain sampling criteria that provide better performance than any periodic or arbitrary aperiodic sampling criterion. Necessary conditions were derived in both papers but were never used to obtain an efficient computational algorithm for the optimal solution. A sequential unconstrained minimization technique (SUMT) has been used with some success in the special case where the continuous-time problem can be transformed to an equivalent discrete-time one [9].

An efficient computational algorithm was developed for this optimal sampled-data control problem for the special case where the optimal control sequence can be determined as a unique function of the particular sampling intervals sequence chosen. For this special case, the performance index can be determined as a function of the sampling intervals sequence. The optimal sampling intervals sequence can then be found by minimizing this derived performance index, and the optimal sampled-data control law is specified by the control sequence which results upon substitution of the optimal sampling intervals sequence. This algorithm was applied to compute the optimal sampled-data control law for the regulator problem with constrained [9], state-dependent [10], and adaptive [11] sampling criteria. A schematic sketch of this two-level computation is given below.
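The following toy sketch illustrates the two-level structure just described: for each candidate sampling intervals sequence T the control is eliminated by an inner optimization, which yields a derived performance index that the outer search minimizes over T alone. The scalar plant, horizon and weights in the sketch are invented for illustration and are not taken from the references above; only the nesting of the two searches corresponds to the text.

```python
# Toy illustration of the derived-performance-index idea on the scalar plant
# dx/dt = u, x(0) = 1, with J = 0.5 * integral over [0, 10] of (x^2 + u^2) dt and
# u held constant on each interval of T = (T_0, ..., T_{N-1}).  All problem data
# here are invented for this sketch.
import numpy as np
from scipy.optimize import minimize

def J_given_T_and_u(T, u):
    x, cost = 1.0, 0.0
    for Ti, ui in zip(T, u):
        # closed form of 0.5 * int (x^2 + u^2) dt on one interval with constant u
        cost += 0.5 * (x**2 * Ti + x * ui * Ti**2 + ui**2 * Ti**3 / 3 + ui**2 * Ti)
        x += ui * Ti
    return cost

def derived_index(T):
    """Inner level: eliminate the control by optimizing it for this fixed T."""
    T = np.abs(T)                      # keep intervals positive in this sketch
    inner = minimize(lambda u: J_given_T_and_u(T, u), np.zeros(len(T)))
    return inner.fun

def outer_index(T_free):
    """Outer level: search over sampling intervals only; total time fixed at 10
    by treating the last interval as the remainder."""
    T_free = np.abs(np.asarray(T_free))
    last = max(10.0 - T_free.sum(), 1e-3)
    return derived_index(np.append(T_free, last))

best = minimize(outer_index, x0=[3.0, 3.0], method="Powell")
print(best.x, best.fun)
```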
The excellent performance obtained with very few control changes indicates that the computer memory and system-computer communication required to store and transmit the control can be significantly reduced if the sampling intervals are determined optimally rather than specified a priori.

With the same concept of optimal sampling for control, the optimal sampled-data tracking (and servo*) problem is investigated in this thesis. Instead of assuming a step (sample and hold) control approximation, the control approximation is assumed to be of polynomial form over each sampling interval. The order of the control approximation is varied from zero to two.

* By convention, if the plant's outputs are to follow a class of desired trajectories, the problem is referred to as a servo problem; on the other hand, if the desired trajectory is a particular function of time, it is called a tracking problem.

Necessary conditions are obtained and are used to derive the control sequence as a function of the sampling intervals sequence. The optimal control law and a derived performance index are then proved to exist and to be unique for any sequence of sampling intervals. The existence of an optimal sampling intervals sequence is finally proved.

An algorithmic procedure for computing the optimal sampled-data control is proposed. This procedure extends the previous procedure [9] by not only searching over a sequence of sampling intervals for a particular number of samples but also searching over the number of samples required. The sub-algorithm for searching over the sampling intervals, developed by Schlueter [9], is implemented with both a gradient and a non-gradient algorithm. The computational results show that the non-gradient Powell algorithm [33] is more efficient than the Fletcher-Powell gradient algorithm [32], i.e. the computational effort and the number of iterations required to obtain convergence are less.

A cost of implementation is adjoined to the performance index for the first time, because the optimal sampled-data control was proved to be the optimal continuous-time control if the cost of implementation is neglected. An optimal continuous-time control is in general suboptimal if a cost of implementation is added. After a search of the literature, a particular form for this cost of implementation is adopted. The computational results show that augmenting the performance index with a cost of implementation not only makes the design problem more reasonable but also improves the convergence of the computational algorithm.

A comparison of performance of an optimal sampled-data control law with periodic, optimal aperiodic, and adaptive sampling criteria was made. A comparison of performance was also made for sampled-data control laws with a zero, first and second order control approximation. Comparisons were also made for various sampling criterion - control approximation combinations to determine which combination needs the fewest parameters to specify a control with a given level of performance. These comparisons of control approximations and sampling criteria were carried out on three different systems and for different trajectories. These results form a basis for analyzing the performance advantages and costs of using higher order control approximations and optimal aperiodic sampling criteria.
Sampled-data controllability and observability are defined for the case where both the number of sampling times and the length of each sampling interval are free and considered to be control variables. The sampled-data system is proved to be observable (controllable) if the continuous-time system is observable (controllable). Moreover, it is proved that if the system is observable (controllable), it can be observed (controlled) in q sampling intervals, where q is the order of the minimal polynomial of the plant. A test is proposed to determine whether controllability or observability is preserved for a particular sequence of sampling intervals. The test depends only upon the eigenvalues of the plant and the sampling intervals chosen. Some preliminary results are derived to indicate the condition which must be satisfied for a system which is observable (controllable) to become unobservable (uncontrollable) for a particular sampling intervals sequence.

The infinite-time sampled-data regulator problem is formulated for the case where both the number of sampling times and the lengths of the sampling intervals are considered control parameters. The existence of an optimal closed-loop sampled-data control law is proved, and computational algorithms for calculating the optimal control are proposed.

CHAPTER II

PROBLEM FORMULATION

Consider the linear dynamic system

    dx(t)/dt = A x(t) + B u(t),    x(t_0) = x_0                              (1)

    y(t) = C x(t)                                                            (2)

where x(t) ∈ R^n, u(t) ∈ R^r, y(t) ∈ R^m and A, B, C are compatible matrices. The initial time t_0 and the terminal time t_N are both assumed fixed. The design objective is to maintain the output trajectory y(t) as close as possible to a desired trajectory z(t) with minimum control effort along with minimum communication requirement. In addition, the cost functional should penalize the system for error or excessive control inputs continuously in time, not only at the sampling instants.

To achieve this objective, a performance index of the form

    S = J + C                                                                (3a)

is chosen, where the control performance is measured by

    J = (1/2)(y(t_N) - z(t_N))' F (y(t_N) - z(t_N))
        + (1/2) ∫ from t_0 to t_N [(y(t) - z(t))' Q (y(t) - z(t)) + u'(t) R u(t)] dt     (3b)

and the cost of implementation is assumed to be measured by

    C = Σ from i=0 to N-1 of α e^(-β T_i)                                    (3c)

where F and Q are positive semi-definite symmetric matrices, not both identically zero, and R is positive-definite and symmetric. These matrices are respectively the "weighting factors" for the end-point error, the error energy of y(t) - z(t), and the control energy.

A cost of implementation is adjoined and represents the economic costs of implementing and operating a sampled-data control law. This cost of implementation can be considered to represent the cost of transmitting and storing the optimal sampled-data control law. It is similar in form to the costs for sampling used in the analytic derivation of adaptive sampling rules [13] and of the optimal periodic sampling rate for a feedback control problem [56].

The sampled-data control law is a polynomial approximation of the true optimal control and is constrained to be piecewise polynomial of order up to two. The order of the polynomial approximation is determined by the tradeoff between the control performance and the amount of information to be transmitted. The control is assumed to have the form

    u(t) = Σ from j=0 to k of u_ji (t - t_i)^j,    t ∈ [t_i, t_i+1)          (4)

where k = 0, 1, 2 represents a step, ramp and parabolic control approximation respectively.

The N sampling instants {t_i+1}, i = 0, 1, ..., N-2, are chosen such that the sampling intervals T_i = t_i+1 - t_i satisfy

    0 < T_i,min ≤ T_i ≤ T_i,max                                              (5a)

    g(T_0, T_1, ..., T_N-1) = 0                                              (5b)

and N satisfies

    N_min ≤ N ≤ N_max
(5c) These sampling constraints (5a, 5b, 5c) can specify an o;>t:imal periodic sampling criterion if g_(TO, T1,...” TN-l) = Ti - (—‘N——fi—O—) = 0 i = 0,1,...,N-1 On the other hand, a subOptimal periodic sampling criterion can be specified by fixing N = N . = N . min max Similarly, a suboptimal aperiodic sampling criterion can be SPECified by choosing N as g(TO, T1,...,TN_1) = z T. - (tN - t0) = 0 (5d) Finally, the Optimal aperiodic sampling criterion has to Satisfy (5a), (5c) and (5d). The T 's, T 's, N . and N all come from imax imin min max the hardware limitation. The optimal sampled-data tracking problem with polynomial control approximation over constrained sampling intervals can be 10 stated formally as follows: Given the linear dynamic system (1), (2) with polynomial control Iapproximation (4); determine the optimal control sampling intervals s;equence, and the number of sampling intervals In PM. T' = (T -i i=0 — 0’ T 1,... t:11at minimizes the cost functional (3) and satisfies (Sa), (5b), £111d (5c) where u; = (u',,...,g'.). CHAPTER III PROBLEM SOLUTION This tracking problem cannot be solved directly because the aicimissible controls are constrained to be piecewise polynomial. bieevertheless, the constrained problem can be transformed into an ¢e<;uivalent unconstrained one by integrating the differential equa— t:icwn(l) and cost functional (3) over each sampling interval [tai+l’ ti) separately, substituting output equation (2) and finally invoking the control constraints (4). The resulting discrete state equation becomes (derived in Appendix A, B) 351+1 = 91% + 9131 (7) I, = 9. as, whfitre a, = Em) AT. “ 1 9:, = 2“,) = e —i = 2(Ti) 2 (20i"°"P-ki) and 11 variere 12 T. 1 = = _ = 2 Dki Dki(Ti) f0 (Ti t) e— B dt k 0,1, The cost functional in discrete form is 1 1 N-I __ v _ _ ‘ v v v + 2 5N3 5N DNEN + 2 1:0 (gig-iii + Zii-MIUI + 313191 N-I -ETi , - Zhiii - 2912.1) + .1. 0.8 (8) I=O I—l ti = E T j=0 J §=9R9 E=£T9 ___.. v —N z (tN)F.§ ' \ B- Btu-3 Rt2R--2 E“) = BCZR—l BtZk-l BtZk ‘ J 1 CN = _ v ! JO 2Q UN): EON) + Ito g (t)9_ _z_(t)dt] T _ _ i A't . At g1 —_Q(Ti) — f0 e— .9 e— dt T _ _ i A'tA Ni - M(Ti) - f e— gg(t)dc 13 51 = Roi) = foi [3m + g'mg 9mm Ti At —1 = _h_(Ti) = ID _z_'(ti + 0g 9 e- dt Ti gi = g(Ti) = ID E (ti + t)g g Q(t)dt 2(t) = [20(t).---,Qk(t)] Dk(t) = I; (t — x)k c2535x E dx t fixed II M *3 ll Even though _Q and .R are constant, Q1, Mi, Bi are in general, time varying. $1 is nonsingular because it s a fundamental matrix [29]. 9i (31) is positive semidefinite (definite) symmetric since .9 (R) is positive semidefinite (definite) symmetric. The discrete time problem becomes: Given the sampled-data system (7) with specified initial condition, determine the control and sampling intervals sequence [ui}§;é, ‘1' = (TO’T1”°°’TN-1) and N that minimizes the cost functional (8) subject to the constraints (5a), (5b), (5c). The following theorems establish both the existence of an Optimal solution and the structure for the computational algorithm. For any specified .2 and N satisfying the sampling con- straints (5a), (5b), (5c), the existence of an optimal control and 14 an Optimal state sequence are guaranteed if it satisfies the follow— ing Kuhn-Tucker conditions. THEOREM ] (Kuhn-Tucker Necessary Condition) If the sampling constraint T_e [a, 9] holds, then an Optimal solution _ui(T) = Bi and x. (I) = x 1+1 -—i+l eXIst If and only If there exists vectors Bi such that ([37: x, — .x. . . —i+l gl—l —I—1 rm 9-“- u v _ v -91§i + E1131 + $191+1 D1 + ' + ' R u. M.X. QiB . 
= 0 ‘_I—l -I_1 -— _ ' 1+1 51 for The Kuhn-Tucker necessary condition for the quadratic pro— gramming problem is stated in Appendix C and the above conditions are established in Appendix D. THEOREM 2 For each .3 satisfying ‘3 e [3, b], there exists an unique control law and trajectory sequence. The control law is -1 ' -1 ' -1 ' ' = .. Q - 9., (31 E, + £1 Qifiiflliei + £1 (S1 2151“) (9) 15 the cost functional is S(I,N,k) = J(T,N,k) + C(T,N,k) where J(T,N,k) = J + l x'K x + k'x l ”‘1 I _ __ v _ v v - v _ 2 .Z (5 1315144) 5—1 (51 —1 1+1) (10) 1=O N—l -BT. C(T,N,k) = 2 me 1 i=0 = I §1 (31 + 9151+191) -1 o = _ v —i ‘gi -EiBi-gi ' = -1 :1 -1 2. as + 12.129 (11> and K,, k. satisfy —1 -—1 K=(Q-MR1M)+O'+1[_I_—D,S,',1D1E_c: (14) Proof The existence and uniqueness are proved in Appendix C and the derivation of (9), (10), (11), (12), (13), (14) are in Appendix E and F. l6 THEOREM 3 If the sampling constraints are satisfied, then there exists _ * an Optimal sampling intervals sequence “I . Proof There exists an unique Optimal control and trajectory N—l i=0’ { sequence {31(2)} (1)}E-1, for each T_ satisfying (5a), xi+1 1= (5b), (5c). The cost functional is Obviously a continuous function of ‘I since -91"91’-91’ Mi, Bi’ gi, hi’-Ei’-Ei are continuous matrix functions of I, Therefore, the cost functional 5(2) is * continuous oneicompact set and an Optimal solution _T for this derived problem exists. Q.E.D. Thus, there exists a solution N 1 * it {gi(_1_‘)}. - * N-l 1:0 ’ {51(3 )}i=0 and I for the optimal linear tracking problem with constrained sampling times. This control law is open 100p and pre-programmed since the derived cost function and thus _T* depends on initial state x0 and the entire trajectory _§(t), t 6 [t0’ tNlo This theorem shows that the solution ‘I* to the derived optimization problem (minimize 8(IJN’k) over the set [ExR]) can be used to determine the optimal control and trajectory from the * state equation (7) and the control law (9) after matrices 51(1 ) a and Gi(T ) have been computed for i = N, N - 1, ..., 1,0 from (11) to (14) assuming N and k are specified. CHAPTER IV COMPUTATIONAL ALGORITHM The derived minimization problem min 3(I;N,k) N369 . . < T. < T. (53) 1mm — 1 —' Imax 9 = N,'_T_; satisfying Nmin i N : Nmax (5b) g(TO,T1,...,TN_1) = 9v (5c) can be solved for any given particular values Of both N and k using a sequential unconstrained minimization technique (SUMT). The convergence of this algorithm was proved in [34]. Another level of Optimization can be performed to de— termine both the optimal number of sampling times N* and the optimal order of control approximation k*. This Optimization could be performed by searching the Optimal system performance S(Tf,N,k), which results from solving this derived minimization problem over each N and k satisfying . N < min —- - max k = 0,1,2 17 18 This level of Optimization over N and k can be performed using an integer programming algorithm. This generalized algorithm now has three levels of optimization (1) determine {ufi(I,N,k)}:;3 by solving the Kuhn-Tucker necessary conditions for any (T,N,k) and determine the derived perfor- mance index S(T,N,k). (2) determine the optimal sampling intervals sequence Tf(N,k) for any N and K using the SUMT algorithm and determine the performance S(Té,N,k). 
(3) determine the optimal number of sampling intervals N* and the optimal order of control approximation k* that minimize S(T*,N,k) using an integer programming algorithm, and determine the optimal sampled-data control law specified by

    {u_i*(T*,N*,k*)}, i = 0, 1, ..., N*-1,    T*(N*,k*),    N*,    k*

and the performance S(T*,N*,k*).

Although such an algorithm could be implemented, no effort was made to optimize over either N or k in this research. However, extensive evaluation of the system performance S(T*,N,k) is performed for different values of N and k.

This generalized algorithm has several advantages over other possible procedures: (1) the optimizations over integers and real variables are separated; (2) the search dimension on the real variables is reduced from N(n + kr + 1) to N, and the Nn equality constraints (7) are eliminated by solving for u_i*(T,N,k) using the Kuhn-Tucker necessary conditions.

The SUMT algorithm, used to determine T* in this generalized algorithm, had never been tested for the case where an equality constraint

    g(T_0, T_1, ..., T_N-1) = 0_v

is imposed on the sampling intervals. The SUMT algorithm can be used in this case if an appropriate penalty function is used; however, convergence may be slow and the cost of computation may be high. The form of the equality constraints imposed is quite simple, and therefore the v equations can be uniquely solved for v variables as follows. The v dependent variables can be expressed in terms of the N - v variables T' = (T_i(v+1), T_i(v+2), ..., T_i(N)) as T_a = l(T'). The derived performance index then becomes

    S^ = S + P(T)

where

    P(T) = Σ from l=v+1 to N of { [min(0, b_il - T_il)](b_il - T_il) + [min(0, T_il - a_il)](T_il - a_il) }

which incorporates the sampling interval constraint T' ∈ [a, b], where

    a' = [a_i(v+1), a_i(v+2), ..., a_i(N)]
    b' = [b_i(v+1), b_i(v+2), ..., b_i(N)]

Since the penalty for violating the latter is proportional to V (V > 0), the minimization of L(T,N,k,V) for a monotonically increasing sequence {V_p} results in a sequence {T_p} that converges [9] to the optimal T*.

The computational effort required by the SUMT algorithm developed by Schlueter [9] is quite large because the gradient of the performance index must be computed every time the performance index is evaluated. Therefore, a non-gradient algorithm and a gradient algorithm are used to solve the same problem in order to determine whether the non-gradient algorithm requires less computational effort. The Fletcher-Powell gradient search algorithm used by Schlueter to determine T* requires (N + 1) evaluations of the performance index at every iteration to compute the gradient and evaluate the performance index. The Powell algorithm requires only one evaluation of the performance index because the gradient is not required. The following example problem was solved using both the non-gradient Powell search algorithm and the Fletcher-Powell algorithm, and the numbers of evaluations of the performance index are compared.

EXAMPLE 4-1. Given the system

    dx(t)/dt = u(t),    x(0) = 1

with the cost functional

    J = x^2(t_f) + (1/2) ∫ from 0 to t_f (x^2(t) + u^2(t)) dt

where the control satisfies the constraints

    u(t) = u_i,    t ∈ [t_i, t_i+1),    0 < t_i+1 - t_i < ∞,    i = 0, 1

and the terminal time t_2 = t_f is free. A small numerical sketch of this example is given below.
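A minimal numerical sketch of Example 4-1 follows, assuming SciPy's derivative-free Powell method as a stand-in for the Powell search discussed above. The decision variables are the two control levels and the two interval lengths, the integral of x^2 + u^2 is evaluated in closed form (x is linear in time on each interval), and the crude penalty return for non-positive intervals is a convenience of this sketch, not part of the original formulation.

```python
# Sketch of Example 4-1: two piecewise-constant control segments, free final time.
# Only the problem data stated above are used; the positivity handling of the
# intervals is an implementation convenience of this sketch.
import numpy as np
from scipy.optimize import minimize

def cost(p):
    u0, u1, T0, T1 = p
    if T0 <= 0.0 or T1 <= 0.0:        # keep the sampling intervals positive
        return 1e6
    x0 = 1.0
    x1 = x0 + u0 * T0                 # x is linear on each interval (dx/dt = u)
    x2 = x1 + u1 * T1

    def seg(xs, u, T):
        # closed form of int_0^T (x^2 + u^2) dt with x(t) = xs + u*t
        return xs**2 * T + xs * u * T**2 + u**2 * T**3 / 3.0 + u**2 * T

    return x2**2 + 0.5 * (seg(x0, u0, T0) + seg(x1, u1, T1))

res = minimize(cost, x0=[-0.5, -0.1, 1.0, 5.0], method="Powell")
print(res.x, res.fun)   # expect a cost near the converged value in Table 4-1 (about 0.52)
```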
The Powell and Fletcher-Powell algorithms both converge, as shown in Table 4-1.

TABLE 4-1. Convergence of the Powell and Fletcher-Powell Algorithms

    Iteration    Powell                 Fletcher-Powell
    1            0.54064                0.54064
    6            0.53340                0.52309
    11           0.52799                0.52050
    14           0.52174                0.52019 (converged)
    20           0.52020 (converged)

The results indicate that the Powell algorithm needs a few more iterations for convergence, but its computational effort is much less, since the Powell algorithm requires only 20 evaluations of the performance index while the Fletcher-Powell algorithm requires 42 evaluations to obtain both the performance index and the gradient at each iteration. Therefore, the Powell algorithm is used in all the computational work which follows.

The Powell computational algorithm was first tested on problems where the cost of implementation was neglected, as in Example 4-1. The computational algorithm often did not converge, or converged to local minima rather than global minima. This lack of convergence is not always caused by round-off error or by a failure of the computational algorithm itself, but can be attributed to the fact that the sampling constraints which require the sampling intervals to be positive were never imposed in the optimization algorithm. The following theorem, which states that the sampling intervals will tend to approach zero if the cost of implementation is zero and the number of sampling times is unbounded, provides an indication that some difficulty with convergence might be observed if a cost of implementation is omitted.

THEOREM 4-1. The optimal sampled-data control for the regulator problem is the optimal continuous-time control if the cost of implementation is negligible and the number of samples N is unbounded.

Proof. The optimal sampled-data solution to the regulator problem for every T and N has been shown to be an optimal approximation to the optimal continuous-time solution in the appropriate Hilbert space norm [57]. Since a sampled-data control

    u(t) = u(t_i),    t ∈ [t_i, t_i+1)

is a restricted class of controls, the control performance of the optimal continuous-time control is less than or equal to the performance of an optimal sampled-data control for any T and N. However, the optimal periodic sampled-data control with period (t_N - t_0)/N has been shown to converge to the optimal continuous-time control [1] as N approaches infinity. Therefore, since the optimal continuous-time control attains the minimum value of control performance over all optimal sampled-data control laws specified by T and N, the optimal continuous-time control is the optimal sampled-data control for the special case where the cost of implementation is negligible and the number of sampling times is unbounded. Q.E.D.

If a cost of implementation is included, or if N_max is bounded, the continuous-time control law will not be the optimal sampled-data control law. If N_max is unbounded but the cost of implementation is omitted, some elements of the optimal sampling intervals sequence will always be very small. Thus the computational algorithm, in searching for the optimal sampling intervals sequence, would often select a negative sampling interval, which caused the algorithm to diverge. This convergence difficulty could be overcome either by adjoining to the performance index a penalty term which penalizes negative sampling intervals, or by including the sampling constraints which restrict the sampling intervals to non-negative values. The first approach is taken because there are penalties on performance in actual engineering design which prevent the sampling intervals from becoming too small.
This penalty, which is the economic cost for implementing and Operating a sampled— data control law, has been overlooked in previous work on Optimal sampling criteria [9 - 11]. The design of sampling criteria always includes a tradeoff between control performance and economic cost which usually occurs after the continuous time control law is designed [2, 16]. The 26 inclusion of economic cost permits the design of a sampling criterion and control law together in one step using a single performance index. The economic cost should represent the cost for implementing and operating the hardware to 1) measure and collect the data about the states of the system. 2) transmit this data to the controller. 3) estimate the state and compute the control. 4) transmit the control back to the system. 5) actuate the control. The cost of implementation will be negligible if the cost of computing, storing, transmitting data and implementing a continuous- time control law is low. In this case, the continuous-time system is optimal. However, in most cases the cost of implementing a continuous-time control will be high and thus the cost of implementa- tion (COI) must be included. From this perspective, the optimal continuous—time control is a special case of the Optimal sampled-data control problem. Thus, the sampled—data control problem formulation is more general and Should be used as the basis for design. The formulation Of the continuous-time problem implicitly assumes that the cost of implementa- tion is negligible. The omission of a cost of implementation term in the performance index when it is not negligible is just as severe in terms of overall system performance as omitting any other significant term in the performance index. The inclusion of a cost of implementation is an important contribution to the art of optimal design since many Optimal control laws are often criticized for being overly costly or impractical. 27 Thus, a cost of implementation term in the performance index may prove to be an effective approach toward making Optimal design techniques more consistent with present engineering practice. The literature on a proper form for a COI for control laws is sparse. Research is presently under way [51] to develop models for implementation cost. The best intuitive models for COI presently available were found in the literature on adaptive sampling and Optimal aperiodic sampled—data control law [13, 56]. This form for the C01 is adopted for this study. The effects of including a COI are illustrated in the follow— ing example. EXAMPLE 4-2 Consider the linear system 0 l 0 in) = 33(t) + u(t) 0 —l 15 with cost functional 1 l 0 1 tf 2 0 J = E 33'(tf) ytf) + -2- ft (§'(t) EU) 0 l 0 0 2 + u2(t))dt . The initial time t0 = 0 and the terminal time tf = 20 are specified and the initial state is E0 28 The control is piecewise constant and changes only at the sampling time tl such that Augmenting the above performance index with a cost of implementation 1 C(T,N) = COI = 2 0.1e i =0 has a dramatic effect on the convergence of the Powell algorithm. The computational results, shown in Table 4-2, indicates that the inclusion of a COI term not only causes the algorithm to converge when it did not without COI but also suggests that Optimal solution with C01 may be global. 
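The cost-of-implementation term used above has the simple exponential form C(T,N) = Σ α e^(-β T_i). The short sketch below evaluates it for one sampling time t_1 splitting the interval [0, 20] of Example 4-2. The value α = 0.1 is stated in the example; β = 10 is assumed here by analogy with the other examples in this chapter (where αβ = 1), since the exponent is not fully legible in the statement of the example.

```python
# Cost-of-implementation term C(T,N) = sum_i alpha * exp(-beta * T_i), as adjoined to
# the performance index above.  alpha = 0.1 per Example 4-2; beta = 10 is an assumption.
import numpy as np

def implementation_cost(T, alpha=0.1, beta=10.0):
    T = np.asarray(T, dtype=float)
    return float(np.sum(alpha * np.exp(-beta * T)))

# One sampling time t1 splits [0, 20] into the two intervals (T0, T1) = (t1, 20 - t1).
for t1 in (0.01, 0.5, 2.0, 10.0):
    print(t1, round(implementation_cost([t1, 20.0 - t1]), 5))
# As t1 -> 0 the first interval's contribution approaches alpha, so the augmented
# index S = J + C rises near degenerate sequences; for intervals longer than about
# one second the COI contribution is essentially zero.
```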
The lack of convergence and the divergence problems exhibited in the computational results obtained when a cost of implementation was omitted indicate that (1) the change in control performance for changes in the sampling times is often small near the optimum, (2) there can be several local minima of the derived control performance J(T,N), and (3) the constraints that require the sampling intervals to be positive must be included, if the cost of implementation is omitted, in order to prevent divergence of the computational algorithm.

TABLE 4-2. Computational Results for the Cases with and without COI Using a Powell Algorithm

The high rate of convergence and the apparent global convergence of the algorithm when a cost of implementation is included indicate that the inclusion of a cost of implementation (1) makes for a better formulation of the control design problem, since the minimal value of the performance is more clearly defined, and (2) prevents the algorithm from diverging by penalizing small positive or negative values of the sampling intervals. The algorithm no longer selects negative values for the sampling intervals, which previously had caused it to diverge.

Although the Powell algorithm works poorly with more than ten independent variables, it is quite satisfactory for the three examples computed, since the cost functional for the optimal sampling starts leveling off before N (the number of free sampling intervals) reaches five.

The optimal sampling intervals sequence becomes periodic if the cost of implementation is much larger than the control performance cost, so that

    S(T,N,k) ≈ C = Σ from i=0 to N-1 of α e^(-β T_i)

This cost of implementation becomes large if the cost per sample α or the number of sampling times becomes large. A heuristic proof that optimal aperiodic sampling is periodic when the cost of implementation is very large is included below. The optimal sampling intervals sequence T* can be obtained by solving the necessary conditions for the problem

    min over T of  S(T,N,k) = Σ from i=0 to N-1 of α e^(-β T_i)

subject to the condition

    g(T_0, T_1, ..., T_N-1) = Σ from i=0 to N-1 of T_i - (t_f - t_0) = 0

The necessary conditions become

    ∂/∂T_i [ S(T,N,k) + λ g(T) ] = -α β e^(-β T_i*) + λ = 0,    i = 0, 1, ..., N-1

so that every T_i* takes the same value (1/β) ln(αβ/λ), and thus the optimal aperiodic sampling criterion is periodic when the cost of implementation is high. This result fits intuition and provides justification for the particular form of the cost of implementation chosen.

CHAPTER V

COMPUTATIONAL RESULTS

5.1 Introduction

The performance of the optimal periodic and optimal aperiodic sampled-data control laws will be compared in this chapter. Performance of a sampled-data control law can be measured in several ways.
Control performance J(T,N,k), defined by (3b), is the performance of the control law in meeting its objectives and has been the standard measure of performance. If this measure of performance were used exclusively, the continuous-time control law (T → 0, N → ∞) would always be optimal, as proved in Chapter IV.

A second performance measure, system performance S(T,N,k) defined by (3a), includes both the control performance J(T,N,k) and the cost of implementation C(T,N,k). This cost of implementation should include the hardware and software costs for measuring the outputs, transmitting this data from the plant to the computer, computing the control law and state estimates, transmitting the control back to the plant, and actuating the control commands. These two performance measures can be used to compare the relative control performance and system performance for different sampling criteria (T,N) or different control approximations (k) specified by (T,N,k).

A third measure of performance is the sampling index I(J_0,k,T), which is defined as the number of sampling intervals required by a particular sampling criterion (N,T) and control law approximation (k) to obtain a control performance value J_0. The sampling index can also be based on system performance rather than control performance. These three measures of performance will be used to

(1) compare the control performance, system performance and information required for different control approximations in section 5.2;
(2) compare the control performance, system performance and sampling efficiency of optimal aperiodic and periodic sampled-data control laws in section 5.3;
(3) compare the performance of an optimal control law which is sampled adaptively using different adaptive sampling schemes with the performance of the optimal aperiodic sampled-data control law in section 5.4;
(4) compare the control performance and sampling efficiency for both an unstable and a stable system with ramp and parabolic desired trajectories in section 5.5.

This study is made to illustrate a design procedure which designs both the control law and the sampling criterion together, performing the tradeoff between control performance and cost of implementation in a single step. This procedure is used to evaluate different order control approximations and to compare optimal periodic and optimal aperiodic sampled-data control laws. The study is intended to provide the basis for understanding the design procedure, but is not intended as an indication of performance tradeoffs for any particular application.

The system chosen for investigation was selected because it has been used extensively [13-20] for the evaluation of sampling criteria in the literature on adaptive sampling. Therefore, it provides a basis for comparing periodic and adaptive sampling criteria on an optimal control law with the optimal aperiodic sampled-data control law. This system is also chosen because it is unstable without feedback and therefore provides a good basis for comparing the performance of the optimal control law implemented with different sampling criteria. This example problem will be used to compare the performance of different control approximations in section 5.2 and of optimal aperiodic, periodic and adaptive optimal sampled-data control laws in sections 5.3 and 5.4.

EXAMPLE 1

Consider the system

    d/dt [x_1; x_2] = [0  0; 1  0] [x_1; x_2] + [1; 0] u

    y = [10  100] [x_1; x_2]

with cost functional

    J = (1/2) F (y(t_N) - z(t_N))^2
        + (1/2) ∫ from t_0 to t_N [ Q (y(t) - z(t))^2 + R u^2(t) ] dt
        + Σ from i=0 to N-1 of α e^(-β T_i)
where t_0 is zero, t_N = 20, and the desired trajectory and initial conditions are given by

    z(t) = 0,    t ≥ 0
    x(t_0) = 0

The weights F and Q are set to 1 while R is set to 0.02, since Athans [24] suggested that in order to obtain satisfactory tracking performance the weighting coefficient on the error energy should be at least 50 times greater than that on the control energy. α and β are chosen as 0.1 and 10 respectively because (1) small intervals below 0.1 second will be penalized heavily, and (2) it is common practice in the design of adaptive sampling criteria [13] that αβ = 1; the same practice is used here.

The design objective is to (1) keep the output as close as possible to the desired trajectory, (2) minimize the control energy expenditure, and (3) minimize the information to be transmitted.

5.2 Comparison of Control Approximations

The control performance, system performance and information required to specify a control will be compared for optimal zero, first, and second order control approximations. These comparisons will be performed for a control law with both periodic and aperiodic sampling.

The performance indices J(T,N,k), S(T,N,k) and I(J_0,k,T) do not by themselves compare the relative performance of the system with different order control approximations. Therefore, the performance of the zero and first order control approximations will be normalized by dividing the performance value with N samples, J(T,N,k) for k = 0, 1, by the performance of the second order (k = 2) control approximation with N samples. The normalization can be based either on the control performance measure

    R_J(N,k) = J(T*,N,k) / J(T*,N,2)

or on the system performance measure

    R_S(N,k) = S(T*,N,k) / S(T*,N,2)

If the sampling criterion is periodic, the sampling sequence T* is specified by T_i = (t_N - t_0)/N for every i, and thus the vector T* is identical in the numerator and denominator of these performance ratios. However, for optimal aperiodic sampling, the optimal sampling interval sequences

    T* = [T_0*, T_1*, ..., T_N-1*]

are determined by optimizing S(T,N,k) for some specified number of samples N and a particular control approximation k. Therefore, the optimal sampling sequences in the numerator and denominator of these performance ratios are not identical and depend on the order of the control approximation (k) specified.

The number of samples I(J_0,k,T*) is not a proper measure of performance for comparing different control approximations. The number of data words required to transmit a particular control approximation is a more significant measure of control approximation performance. Thus, an information index I_J = (k + 2) I(J_0,k,T) is defined, where (k + 2) represents the number of parameters required to specify the control approximation and the length of one sampling interval. This information index is the number of data words required by a control approximation of order k to obtain a control performance value J_0, and it can be based on either control performance or system performance. The normalization of this information index can be performed based either on the control performance

    E_J(N,k) = I_J(J_0,k,T*) / 4N

or on the system performance

    E_S(N,k) = I_S(S_0,k,T*) / 4N

where I_J(·) and I_S(·) are the numbers of data words required to achieve the same value of control performance J_0 or system performance S_0 as that obtained by the parabolic control approximation with 4N data words. A short sketch relating these parameter counts to the simulated response of Example 1 is given below.
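The sketch below makes the data-word count concrete for Example 1: each interval of a k-th order approximation is described by the k + 1 polynomial coefficients plus the interval length, and the state is propagated across an interval using the matrices Φ(T) = e^(AT) and D_j(T) = ∫_0^T (T - t)^j e^(At) B dt that appear in the discrete model of Chapter III. The quadrature step count and the sample coefficients used here are arbitrary choices for illustration, not values from the thesis.

```python
# Propagation of Example 1 across one sampling interval under a k-th order control
# approximation u(t) = sum_j u_j * t^j (t measured from the start of the interval),
# using x+ = e^{AT} x + sum_j D_j(T) u_j with D_j(T) = int_0^T (T - t)^j e^{At} B dt.
# Coefficient values and interval length below are arbitrary, for illustration only.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

A = np.array([[0.0, 0.0],
              [1.0, 0.0]])          # Example 1: dx1/dt = u, dx2/dt = x1
B = np.array([1.0, 0.0])
C = np.array([10.0, 100.0])         # y = 10 x1 + 100 x2

def D(j, T, steps=400):
    ts = np.linspace(0.0, T, steps + 1)
    vals = np.array([(T - t) ** j * (expm(A * t) @ B) for t in ts])
    return trapezoid(vals, ts, axis=0)

def step_interval(x, coeffs, T):
    """One interval of an order-(len(coeffs)-1) approximation: k+2 data words
    (the k+1 coefficients plus the interval length T) describe it completely."""
    x_next = expm(A * T) @ x
    for j, uj in enumerate(coeffs):
        x_next = x_next + D(j, T) * uj
    return x_next

x = step_interval(np.zeros(2), coeffs=[0.3, -0.1], T=2.0)   # one ramp interval (k = 1)
print("x after one interval:", x, " y:", float(C @ x))
print("data words for k=1, N=5:", (1 + 2) * 5, " vs parabolic 4N:", 4 * 5)
```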
This information ratio index thus compares the number of data words used by a step or ramp control approximation with the number used by a parabolic control approximation.

5.2.1 Periodic Sampling

The effects of the order of the control approximation on control performance and information requirements can be observed for a periodic sampling criterion in Figures 1 and 2. The two figures show the values of control performance over two separate ranges of N (2 to 14 and 14 to 49) in order to provide better resolution for comparing the control approximations of order zero, one and two. The value of the control performance decreases monotonically toward the value which could be obtained with the optimal continuous-time control law. The ratios of the control performance for the step and ramp control approximations,

    R_J(N,k) = J(T,N,k) / J(T,N,2),    k = 0, 1

are shown in Table 5-1.

TABLE 5-1. Control Performance Ratio for Periodic Sampling

    N               2     4     8     14    19    24    29    34    39    44    49
    Step R_J(N,0)   3.98  4.19  4.47  4.82  5.50  5.20  5.32  5.41  5.44  5.46  5.49
    Ramp R_J(N,1)   1.76  1.79  1.86  1.92  1.95  1.95  1.97  1.99  1.98  1.98  1.98

[Fig. 1  Control performance for periodic sampling]

5.4 Comparisons of OAS and Adaptive Sampling Rules

    Mitchell:    T_i = -(u̇_i/ü_i) + [ (u̇_i/ü_i)^2 + 2R/ü_i ]^(1/2)

The variables u̇_i and ü_i in this table represent the first and second derivatives of the control u(t) at t = t_i.

[Block diagram: the optimal control u*(t) is passed through a sample-and-hold mechanism to the system of Example 1, producing y(t).]

The value of control performance for each of these adaptively sampled-data control laws and the number of samples required are then recorded. The value of the performance index computed for the optimal aperiodic sampled-data control law with four control changes (N = 4) and zero order control approximation (k = 0) is also determined. These values of the performance index and the resulting numbers of samples required are tabulated in Table 5-9 for both the optimal aperiodic and the adaptively sampled-data control laws.

TABLE 5-9. Performance of Different Sampling Criteria

    Sampling Criterion    Number of Samples    Cost
    Hsia                  5                    614469.53
    Dorf                  4                    13239277.91
    Gupta                 ∞                    0.0178
    Mitchell              880,000              0.0178
    Optimal               4                    0.06382

The optimal aperiodic sampled-data control with zero order control approximation outperforms all of the adaptively sampled optimal controls. This optimal aperiodic sampled-data control law, specified by an optimal control sequence - optimal sampling intervals sequence combination, achieved approximately the same level of control performance as the optimal control law sampled using Gupta's and Mitchell's criteria, but with significantly fewer control changes. It also achieved significantly better control performance than the optimal control laws adaptively sampled by Dorf's and Hsia's criteria, and this comparison is based on approximately the same number of control changes. The values of cost for Gupta's and Mitchell's criteria follow from the observation that both sample almost continuously on the optimal control; their control performance is therefore approximately the value obtained with the optimal continuous-time control. The large values of control performance obtained using Hsia's and Dorf's criteria can be explained by the fact that they fail to sample the small variations of the optimal control in the final period, which is the longest. Since this system is highly unstable, the control performance deteriorates if the sampling mechanism is not triggered sufficiently often.
The results indicate that the selection of a sampling rule for even an optimal control law can have disastrous consequences if the rule is not selected properly or if the sampling rate for periodic sampling is not high enough. Moreover, by selecting the optimal control sequence and sampling intervals sequence combination, excellent control performance can be obtained with very few sampling instants. Since the optimal control sequence depends on the sampling intervals sequence chosen for this optimal aperiodic sampled-data control, the sampling instants can be viewed as tuned to the system dynamics, the optimal sampled-data control law, the performance index, the trajectory and the initial conditions.

5.5 Performance of Different Systems with Different Inputs

The control performance, system performance and sampling efficiency will be compared using both periodic and optimal aperiodic sampling for different systems with different desired trajectories. A stable and an unstable system will be tested with both ramp and parabolic inputs.

5.5.1 Performance of a Stable System

The following example uses a system with both eigenvalues negative. This system makes a good model of a closed-loop system, and thus the tracking performance can be compared for both periodic and optimal aperiodic sampling, since the control energy required to perform the regulation function is negligible.

EXAMPLE 2

Consider the system

    d/dt [x_1; x_2] = [0  1; -.5  -1] [x_1; x_2] + [0; .5] u(t)

    y = [1  0] [x_1; x_2]

with cost functional

    J = (1/2)(y(10) - z(10))^2 + (1/2) ∫ from 0 to 10 [(y(t) - z(t))^2 + .02 u^2(t)] dt
        + Σ from i=0 to N-1 of α e^(-β T_i),    α = 0.1,    β = 10

and initial condition x_1(0) = 0, x_2(0) = 0.

Case I. Ramp Trajectory z(t) = 0.1t

This Type 0 system (the plant has no poles at the origin) will follow a ramp trajectory with a monotonically increasing error. Therefore, one might expect a rather large performance index value regardless of the sampling criterion or control approximation used. The optimal control performance, plotted in Fig. 8 and Fig. 9 for the PS and OAS sampling criteria respectively, is rather large, as expected.

[Fig. 8  System performance of PS for the system of Example 2 in tracking a ramp trajectory]

[Fig. 9  System performance of OAS for the system of Example 2 in tracking a ramp trajectory]

Although the performance index value decreases as the order of the control approximation increases, the relative improvement in performance is insignificant. Moreover, the decrease in control performance is also very small as the number of sampling times increases, and the difference in performance between the OAS and PS sampling criteria is slight. Thus the control performance is apparently dominated by the large output error and the large control energy requirement which result from requiring a Type 0 system to follow a ramp trajectory. The optimal sampling intervals sequences are shown below for different values of N and k.

TABLE 5-10.
The Optimal Sampling Intervals Sequence in Tracking a Ramp Trajectory N k Step (0) Ramp (l) Parabolic (2) 1 4.7189 8.5 9 5.2811 1.5 1 3.2807 4.2885 3.9074 2.6595 4.1087 3.5406 4.0598 .16028 2.5520 3.7024 1.5848 2.9422 2.7380 4.8031 2.6346 2.8871 2.5885 2.499 0.6726 1.0238 1.9242 2.9613 1.4271 2.2155 2.3063 3.6 3.0819 2.0169 2.1168 2.0005 1.9763 1.8497 1.9892 0.7297 1.0063 1.7318 63 The optimal sampling intervals sequence depends on the shape Of the trajectory to be followed, the order of the control approxi- mation, and the performance index used. Since the terminal error is weighted, the error at the terminal time should be small andthere- fore the last sampling interval should be short. The [tN-l’tN] results indicate the last interval is generally the shortest in the sequence. The lengths of the other sampling intervals in the sequence depend on how well the control approximation can represent the desired trajectory to be followed since in this case the Shape of the desired trajectory z(t) and control u(t) should be nearly identical a short time after the input is applied. The higher order control approximation (k = 1,2) can accurately represent the ramp trajectory and thus for N) 2 the sampling intervals sequence is close to periodic as the order of control approximation increases. For N = 2, the initial sampling interval increases as k increases because the control approximation can better represent the con- tinuous-time Optimal control over this interval as k increases. Thus, since the control over the initial interval is more accurate the length of that interval increases and the length of the final interval is reduced. Case 11. Parabolic Trajectory z(t) = 0.1 t 64 COST STEP 10 “ RAMP ------ PARABOLA 5 all -.--"‘--—---L_-.'_-. : e 3 IL 9 1 2 3 4 N Fig. 10 System Performance of PS for the system of example 2 in Tracking a parabolic trajectory COST STEP 10 4. ——--— RAMP ‘ """ PARABOLA J ~.—._o _o—. _o— P ! 3 z—> N i-‘db Fig. 11 System Performance of OAS for the system of exmaple 2 in Tracking a parabolic trajectory 65 The Optimal control performance for PS and OAS are plotted in Fig. 10 and 11 respectively. The control performance value is large and again does not change greatly for changes in sampling criteria (N,I) or control approximation (k). These changes are however considerably greater than Observed for the ramp trajectory input. This result might be expected since the parabolic trajectory is more difficult to follow than the ramp trajectory. The control performance is always lower for OAS than PS and decreases with increase in either N or k as eXpected. The sampling intervals sequence as a function of N and k are shown in Table 5—11. TABLE 5-11. Optimal Sampling Intervals Sequence in Tracking a Parabolic Trajectory N Step (0) Ramp (l) Parabola (2) 5.0932 8.0559 9 4.9068 1.9441 1 2.9891 4.8577 8.7762 3.2119 3.9979 0.6149 3.7990 1.1444 0.6089 2.1042 3.9095 8.9975 2.2253 2.4563 0.3340 2.2527 2.5670 0.3340 3.4179 1.0671 0.3340 0.8979 3.0226 8.1 1.8 2.0322 0.6916 2.0 2.0019 0.4814 2.0 1.9737 0.3618 3.3021 0.9696 0.3651 The second order control approximation can approximate the parabolic change in the desired trajectory very well and therefore the initial sampling interval T is always large for this control 0 approximation. The error near the end of the control interval is 66 heavily weighted in the performance index and therefore the number of sampling times at the end of the control interval increases as N increases in order to minimize the terminal error. 
The initial sampling interval T for the Optimal aperiodic O sampled-data control laws with lower order control approximations are much smaller than for the second order approximation because these lower order control approximations cannot approximate the parabolic trajectory as well over the initial interval. This can be observed especially on the zero order control approximation because many more sampling instants occur near the initial part of the control interval as N increases. 5-5—2. Performance of an Unstable System A non-minimum phase system with the same gain characteristics as the previous example is now considered. Since this system is unstable, control energy must now be expended to perform both regulation and tracking functions. EXAMPLE 3 Consider the system d x1 0 1 x1 0 a; = X + u(t) x2 -.5 1.5 2 .5 X 1 y = [l 0] x2 with cost functional 67 J =-% (y(10) — z(IO))2 +-% féo[(y(t) - z(t))2 + .02u2(t)]dt N-l —BT + Z ue 1 i=0 a = O 1 B = 10 Case I. Ramp Trajectory z(t) = 0.1 t The system performance for OAS and PS sampling criteria are plotted in Fig. 12 and 13 respectively for a ramp trajectory. The performance curves for zero, first and second order control approximations are shown in both figures. The system performance decreases significantly as the number of samples increases for both PS and OAS criteria. The performance for the second order control approximation is almost identical for OAS and PS. However, for lower order control approximation the per- formance of the OAS is significantly better than for PS. Apparently the optimal sampled-data control with second order control approxi- mation so closely approximates the Optimal continuous time control that the selection of sampling intervals does not greatly affect the performance. The system performance ratio RS(N,k) are given in Table 5-12 and 5-13 for PS and OAS. These performance ratios decrease as N increases for both zero and first order control approximations and for both sampling criteria. Thus, the performance advantage of the parabolic control approximation decreases as the number of samples increases. 68 COST L0 STEP 4» 0 4. -————u--————-FUABAP’ J. 5 J’ _____ —PARABOLA . ‘P ‘P b .2 .. .1 ‘D 0 (P O 0 .051p It I! .02., \ \‘ \~ \“ \. ‘\ \o 01 ~‘ ~ \--.§ 9 . f _;“'—J-I--O—.—-'.r__.: ______ 3 5 N l 2 3 4 5 Fig. 12 System Performance of PS for the system of example in tracking a ramp trajectory 69 COST .02 1 .01 ‘ STEP RJXRAP’ -’--"- PARABOLA \ ‘ \‘s‘. \\ ‘~\¥.‘_‘_%-__u —- -— —— -—- cf 2: 3 I} g > N Fig. 13 System Performance of OAS for the system of example 3 in tracking a ramp trajectory 70 The performance ratio for P8 is much higher than for OAS because the optimal sampled—data control with OAS and with any control approximation is so close to the Optimal continuous time control that the improvement due to additional terms in the control approximation is less than for the control law with periodic sampling. TABLE 5—12. System Performance Ratio for Periodic Sampling N l 2 3 4 5 Step RS(N,0) 16.61 8.17 3.34 2.08 1.59 Ramp RS(N,1) 2.17 2.34 1.14 1.07 1.02 TABLE 5—13. System Performance Ratio for Optimal Aperiodic Sampling N l 2 3 4 5 Step RS(N,O) 1.69 1.69 1.19 1.15 1.14 r Ramp RS(N,1) 1.20 1.11 1 1 1 The Optimal sampling intervals sequence for different control approximation and different number of samplings are shown below. 71 TABLE 5-14. 
TABLE 5-14. Optimal Sampling Intervals Sequence in Tracking a Ramp Trajectory

  N      Step (0)     Ramp (1)     Parabolic (2)
  1      0.58262      1            1.7529
         9.4174       9            8.2471
  2      0.57325      0.97197      1.5114
         3.7592       6.9998       6.7511
         5.6672       2.0284       1.7375
  3      0.64978      0.82625      1.9130
         2.3205       5.0144       3.1614
         2.9347       2.8353       2.8392
         4.1042       1.3241       2.0864
  4      0.65095      1.0101       1.9701
         1.7476       3.5440       2.1187
         1.9167       2.2670       2.0333
         2.0385       1.9722       2.0014
         3.6477       1.2067       1.8764
  5      0.68421      0.96478      1.5909
         1.4281       2.8156       2.2513
         1.1949       1.7785       1.6850
         1.6309       1.6672       1.6685
         1.5873       1.5893       1.6605
         3.4747       1.1746       1.1439

The approximation to the optimal continuous-time control should be excellent over the initial segment of the control interval in order to adequately regulate the unstable system and to track the ramp trajectory. Therefore, the initial interval [t0, t1) was small for all N and k. Moreover, in general, the length of this interval increased as N and k increased. The performance index penalizes terminal error and therefore the length of the terminal interval is also small. The number of samples in the middle of the control interval increases as the total number of samples increases. The sampling intervals in the middle of the control interval become closer to periodic as both N and k increase, indicating that the regulation and tracking tasks require constant control effort for this particular system.

Fig. 14 System Performance of PS for the System of Example 3 in Tracking a Parabolic Trajectory (cost versus N for the step, ramp, and parabolic control approximations)

Fig. 15 System Performance of OAS for the System of Example 3 in Tracking a Parabolic Trajectory (cost versus N for the step, ramp, and parabolic control approximations)

Case II. Parabolic Trajectory z(t) = 0.1 t^2

The system performance for the PS and OAS criteria is plotted in Fig. 14 and 15 respectively for a parabolic trajectory. The performance curves for the zero, first and second order control approximations are also shown in both figures.

The system performance generally decreases significantly as either the order of the control approximation increases or as the number of sampling times increases. However, for the case of the parabolic control approximation and OAS, the additional short sampling intervals in the initial period increase the COI without improving the performance enough to offset it, and therefore the system performance increases with the number of sampling times. Thus, for this case, excellent system performance was achieved using very few optimal aperiodic sampling times and a high order of control approximation.

The system performance ratios RS(N,k) are given in Tables 5-15 and 5-16 for PS and OAS. These performance ratios decrease as N increases for both the zero and first order control approximations. This result implies the performance advantage of the second order control approximation is much greater when the number of sampling times is small. The ratio is larger for PS than for OAS because the control law with OAS closely approximates the continuous-time optimal control, so that increasing the order of the control approximation does not significantly increase system performance.

TABLE 5-15. System Performance Ratio for Periodic Sampling

           N          1       2       3       4
  Step     RS(N,0)    32.46   3.78    1.5     1.28
  Ramp     RS(N,1)    1.26    1.14    1.07    1

TABLE 5-16. System Performance Ratio for Optimal Aperiodic Sampling

           N          1       2       3       4
  Step     RS(N,0)    12.82   1.81    1.33    1.23
  Ramp     RS(N,1)    1.39    1.20    1.16    1.08

The optimal sampling intervals sequence is shown below for different values of N and k.
TABLE 5.17. Optimal Sampling Intervals Sequence in Tracking a Parabolic Trajectory k . N Step (0) Ramp (1) Parabolic (2) 0.2 2.095 9 1 9.8 7.905 1 1.9083 3.7552 9.317 2 4.0514 5.6 0.3417 4 0403 0.6448 0.3413 0.52507 2.6451 8.8372 4.8085 3.5664 0.4136 3 1.6811 2.6352 0.3752 2.9881 1.1533 0.3743 1.1063 2.3539 6.8133 1.9981 2.0519 1.3542 4 1.9946 1.9996 0.7699 1.8982 2.0013 0.512 3.0028 1.5933 0.5236 76 The optimal sampling interval sequences are chosen based on the order of the control approximation and the trajectory to be followed. The second order control approximation can accurately follow the parabolic trajectory and thus the first sampling interval is large. As the number of sampling intervals increases, the length of the initial sampling interval decreases and the length of the other sampling intervals increase. The zero and first order control approximations cannot follow the parabolic trajectory accurately over any interval. Thus, the length of the initial interval de- creases significantly as the order Of the control approximation decreases. The last sampling interval, where the rate of change of trajectory is the largest, tends to be the smallest of the sampling intervals for the first and second order control approximations. The first interval is the smallest for the zero order approximation because the sampling intervals have to be chosen to provide effective control because the approximation to the parabolic trajectory is so poor. Thus, the sampling times for lower order control approxima- tions must be used to maintain tracking accuracy much more than for higher order control approximations. CHAPTER VI SAMPLED—DATA CONTROLLABILITY AND OBSERVABILITY Controllability and observability were originally developed as purely mathematical concepts. However, they were soon found to be related to the possibility of achieving a desired degree of con- trol and obtaining the desired information about the system. Controllability assures that the Optimal control law designed for a linear system using a quadratic performance index will be asymptotically stable. Observability assures the Kalman filter will be asymptotically stable. Moreover, controllability and observability are also important in the realm of mathematical modeling. Although a state space model is desired for analytic design of the control law, one often starts with an input-output model obtained experimentally. The minimal realization which does not introduce any phenomena that cannot be accounted for by an input-output description of the system, is intimately related to the concepts of controllability and observability. Thus, controllability and observability are important concepts in the areas of control, estimation, and identification of dynamical systesm. Controllability and observability will be investigated for sampled-data control systems where the continuous—time plant is known but the actuators and sensors are not specified and must be designed 77 78 as part of the control law. For this case, the number of sampling times and the lengths of sampling intervals are design parameters or control variables for the system. Definitions of controllability and observability have been recently prOposed by Troch [41] for the case where the number of sampling intervals is specified but the lengths of the sampling intervals are free and considered control variables. However, there never existed definitions that considered both the number of sampling times and the length of each sampling interval as control variables. 
Therefore, extended definitions of controllability and observability are proposed. Under these extended definitions, any system which is either controllable or observable when the control and measurements are continuous functions of time is shown to be controllable or observable when controls are changed and measurements are made only at the sampling times. Since controllability and observability should be a property only of the dynamic system being controlled and not of the hardware used to implement this control, the number of sampling times should be as much a control parameter as the lengths of the sampling intervals and the control levels over each sampling interval. Under this extended definition the actuators and sensors must be viewed as part of the control law being implemented rather than part of the system to be controlled. This point of view is required because the number of sampling times and the lengths of the sampling intervals are control parameters or variables.

Sufficient conditions were derived by Troch [41] which guarantee that an observable system will be sampled-data observable over q sampling times, where q is the order of the minimal polynomial. The conditions derived for observability were never extended to controllability. Moreover, the conditions were quite restrictive and did not indicate the conditions under which a system could not be observed on q sampling intervals.

Necessary and sufficient conditions for the controllability and observability of sampled-data systems are derived. These theorems state that a sampled-data system is controllable (observable) if and only if the continuous-time system is controllable (observable) and the sampling time sequence is such that a certain matrix is nonsingular. The nonsingularity of this matrix can be used as a test for controllability or observability of a sampled-data system. This test is used to determine conditions on the sampling times for which an observable and controllable continuous-time system will not be observable and controllable on a sequence of sampling times. Finally, conditions on the sampling times are derived which guarantee that a system which is controllable and observable with continuous measurements and controls will be controllable and observable with sampled measurements and controls.

The sampled-data control problem is now formulated in order to provide an appropriate framework for defining sampled-data controllability and observability. Consider the linear system

    x'(t) = A x(t) + B u(t),    x(t_0) = x_0                                (15)

    y(t) = C x(t)                                                           (16)

where x(t) is the n-dimensional state vector, u(t) is the r-dimensional control, y(t) is the m-dimensional output vector, and A, B, C are compatible time-invariant matrices. The sensor provides measurements

    y(t_{h+i}) = C x(t_{h+i})                                               (17)

at the sampling times {t_{h+i}}_{i=0}^{N} that are not specified but are constrained to satisfy

    0 < T_min ≤ t_{h+i+1} - t_{h+i} = T_i ≤ T_max                           (18)

The control actuator is also assumed to be a sampled-data device and therefore the control u(t) is sampled-data of the form

    u(t) = u(t_{h+i}) = u_{h+i},    t in [t_{h+i}, t_{h+i+1})               (19)

for i = 0,1,...,N-1. This control is assumed specified by knowing the control sequence {u_{h+i}}_{i=0}^{N-1}, the sampling intervals sequence {t_{h+i}}_{i=0}^{N-1}, and the number of sampling times N. This system can be represented by a set of difference equations if the state differential equation is integrated over each sampling interval [t_{h+i}, t_{h+i+1}) separately.
The difference th+1’ th+i+1 equations have the form 81 + D ih+i+1 = $h+i§h+i —h+13h+i i = 0’1’ ' ' ' ’N'1 (20) Where 2Eh+i = 5(th+i) AI, $h+i =-$(Ti) = 9 Ti 2h+i = 2(1‘1) = ID 3(t)§dt . This representation does not indicate clearly that the sampling times are control variables, but imbeds these variables in the matrices o and . D .. oreover he a e . i t e -h+1 '—h+1 M ’ t St t 35h-+1 S h state at the sampling time specified by knowing the control N N-l m, {th+i}i=0’ {391:0}. system is used for notational convenience. The dependence of 2h+i This representation of the sampled-data }N must always be considered in this develop- 0" {th+i i=0 and 2M1 ment. 6-1. Observability Definition The system (15) is said to be sampled—data observable .at . . . N—l th if there ex1sts a f1n1te N and a sequence {th+i}i=0 such that any initial state xflth) can be determined from the knowledge of N-l N—l {14th+1)}1-0 and £2 q since there is a guarantee that n independent measurements can be found by selecting only q sampling times. Thus, if the system is q—sampled—data observable it is p-sampled-data observable for all p > q. The following theorem, stated and proved by Troch 89 [41], provides a sufficient condition on the sampling times which guarantees that a system which is observable with continuous measurements will be observable with sampled measurements. THEOREM 6-3 (Sufficient Condition for q-Sampled—Data Observability) A system (15) which is observable with continuous measure- ments is q-sampled-data observable. (i) for all sampling intervals sequence {t .}q-1 if all Of the h+1 i=0 eigenvalues of .5 are real. .}q'l h+1 i=0 such that (ii) for all sampling intervals sequences {t t t < TT h+1 h i = 1,2,... q—l wgmax , where wgmax is the greatest imaginary part of the eigen— values of A. The proof follows immediately if -§o can be proved non- singular over the set of sampling intervals sequence specified in case (i) and (ii) respectively. The functions ok(t) form a Chebyshev system [41] over [th, 00) if all eigenvalues are real and over [th’ t ) if some eigenvalues are complex. Since the h ”imax functions ak(t) are Chebyshev over these respective intervals for -1 A.t 1.: A1 1.1: . i i t i the two cases, the functions e , to ,...;-———-—T e , i = 1,2,...,k are also linearly independent over the same respective intervals for the two cases. Thus A0 is nonsingular over the intervals specified for the respective cases and the theorem is proved. Q.E.D. 90 This theorem clearly indicates that a system which is observable with continuous measurements is observable with any sequence of q sampled measurements as long as all of the eigen- values are real. If some eigenvalues are complex, the q sampling times must all be selected in an interval of length Tr/mflmax to insure that observability will be preserved using sampled measure- ments instead of continuous measurements. Since u&max is the largest imaginary part of the complex eigenvalues of .A, this constraint implies sampling must occur at a rate at least q times faster than the NquiSt rate (T = "/whnax) in order to insure all q sampling times occur in a "/whnax interval. This constraint is restrictive for some applications and since it is only a sufficient condition, less restrictive conditions are investigated in Section 6.3. The result of this theorem also guarantees that there will always exist q sampling times for which the system is observable and therefore the following theorem can be established. 
THEOREM 6—4 (Sufficient Condition for Sampled-Data Observability) If the system (15) is observable with continuous measurements, it is sampled—data observable. Proof From Theorem 6.3 it has been established that there always exists q sampling times {th+i}2;0 such that a system which is observable with continuous measurements will be observable with q sampling measurements. The system is always p-sampled-data observ- able for any p > q if it is q-sampled-data observable. Thus, there N-l always exists an N and a sampling times sequence {th+1 i=0 to make the system sampled-data observable. Q.E.D. 91 In the previous definitions of observability for sampled- data system [39, 41], either the number of sampling intervals and the length of each sampling interval are both specified or the number of sampling intervals is specified and the lengths of sampling intervals are considered control parameters. In both of these definitions, a system which is observable with continuous measurements may not be observable with sampled measurements. This extended definition, where both the number of sampling intervals and the lengths of sampling intervals are control parameters, permits the preservation of observability when the outputs are no longer measured continuously but are sampled. This preservation of observability under the imposition of sampling requires a system designer to view the sensor and its sampling intervals sequence specified by N—l * {th+i}i=0 and N ( ) to be part of the control law rather than part of the plant being controlled. This perspective is required since the sampling intervals sequence are control parameters in this extended definition of observability. Some explicit conditions on the sampling times {t }2_ l n+i '=0 for which A0 is singular will be investigated in Section 6-3 after a similar matrix condition is derived for controllability. In postponing this develOpment, the similarity of conditions for sampled-data controllability and observability will be emphasized and no duplication of discussion is required. 92 6-2. Controllability Definition The system (15)is said to be sampled-data controllable at there exists a finite N and if for every initial state -§h’ th a control ) i = 0,1,...,N-1 3‘t) = Eh+1 t 6 [th+i’ th+1+1 }§;é, the sampling time E. defined by the control sequence {uh+i and N, such that h+N ='Q. sequence {th+i}i=0 is finite but arbitrary and the sequence As stated above, N N—l . . is not constrainted in any way except { th+1 i=0 1: < th+l<...4 H N >1|H The matrix .2 assumed non—zero and therefore is nonsingular. 1B is nonsingular where I -A t —A t (e l 0_e 1 l) (e -A2t0 -A2tl (e -e ) p”: —A t —A t (e q O—e q l) (e k 0 Furthermore, P where [3 101 (e -A t .— q-l q-1_e A t 9‘1 Q) is nonsingular because all eigenvalues are Moreover,‘M X is nonsingular if and only if ‘M is nonsingular if and only if matrix can be expressed as B X —t E 102 Since _g is nonsingular _P is nonsingular if and only if q . . there exists a sequence {th+i}i=0 such that AC is nonSingular. This proves the theorem for the case where all eigenvalues are non- zero. If an eigenvalue is A1 = 0, the proof follows identically if the row . ' .t 'A.t [e 1 h e 1 h+1 . . . e 1 h+q] in matrix Ac is replaced by q + 1 row vector [th, th+1,..., ‘h+q] . Q.E.D. This theorem states that a system will be controllable with sampled—data controls }N—1 {t }N and N {3141 i=0 h+i i-O if and only if it is controllable with continuous controls {3(t), t e [to, tN]}. 
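To illustrate the kind of sampling-time test implied by this equivalence, the sketch below forms a matrix with entries e^{λ_i t_j} for the (assumed distinct) eigenvalues λ_i of A and a candidate set of sampling times and checks whether it is numerically singular. The function name and the oscillator example are illustrative assumptions, not the thesis's algorithm; the "bad" sequence is spaced by π/ω, the resonant spacing examined later in Section 6-3.

```python
import numpy as np

def sampling_test_matrix(A, times):
    """Matrix with entries exp(lambda_i * t_j) for the eigenvalues lambda_i of A
    (assumed distinct) and the candidate sampling times t_j.  Numerical
    singularity of this matrix flags a sampling sequence that fails the test."""
    lam = np.linalg.eigvals(A)
    return np.exp(np.outer(lam, np.asarray(times, dtype=float)))

# Undamped oscillator: eigenvalues +/- j*omega, omega = 2 rad/s.
omega = 2.0
A = np.array([[0.0, 1.0],
              [-omega**2, 0.0]])

good_times = [0.0, 0.7]              # both times inside an interval of length pi/omega
bad_times = [0.0, np.pi / omega]     # spacing equal to pi/omega (the resonant case)

for label, times in (("good", good_times), ("bad", bad_times)):
    M = sampling_test_matrix(A, times)
    print(label, "|det| =", abs(np.linalg.det(M)))
# The "bad" spacing gives |det| ~ 0: the two rows of the test matrix coincide,
# so this pair of sampling times cannot preserve observability or controllability.
```

For systems with purely real eigenvalues the test matrix is nonsingular for any admissible sequence, which is consistent with the sufficient conditions stated in this chapter.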
Moreover, the system is q—sampled-data con- trollable if and only if there exists a sampling intervals sequence such that .1 (or ‘§c) is nonsingular. The condition on X_ requires an integration of each term which is inconvenient. The condition on .AC does not require integration of each term and provides a condition on the sampling times which is similar to the condition on -§o obtained for observability. Although the condition on .AC was only stated for the case where eigenvalues are distinct, a matrix fie could be derived for the case where the eigenvalues are not distinct. The derivation of an appropriate form for .Ac for 103 the case of multiple eigenvalues is a subject for future research. The condition on .X_ (or AC) can be used to test whether a system which is controllable using continuous control will be q-sampled-data controllable on some particular sampling intervals sequence {th+i}g=0° Finally it should be noted p is constrained to be the order of the minimal order of the plant. However, if a sequence of q-+ 1 sampling times can be found for which the system is controllable, the system will be controllable for some sequence P }i-0 { th+i for each p > q since there is a guarantee that n independent controls can be found by selecting only q + l sampling times. Thus, if the system is q-sampled-data controllable it is p-sampled-data controllable for all p > q. The following theorem provides a sufficient condition on the sampling times which guarantees that a system which is controllable with continuous controls will be controllable with sampled controls. THEOREM 6—7 (Sufficient Condition for q-Sampled-Data Controllability) A system (15) which is controllable with continuous controls is q-sampled-data controllable. (i) for all sampling intervals sequence {th+i}2=0 if all of the eigenvalues of ‘A_ are real (ii) for all sampling intervals sequence )3 such that {‘h+i =0 t - t < n/ h+i h i = 1,2,...,q w gmax where is the greatest imaginary part of the complex w 2max eigenvalues of .A. 104 The proof follows immediately if .X can be proved nonsingular over the sets of sampling intervals sequences specified in case (i) and (ii) respectively. The functions ok(t) form a Chebyshev system [41] over [th,m) if all eigenvalues are real and over [th, h if some eigenvalues are complex. Since the function ok(t) are Chebyshev over these respective intervals for the two cases, the functions ‘h+i+1 f ok(C)dC k = 1,2,...,q t . h+1 also form a Chebyshev system over the same respective intervals for the two cases. Thus, the functions A.-l t 3 AI; t o o t I l AOC ++ + ++ f h i 1 e i dC f h+1 1 Ce 1 d; . . . f h i 1 C e 1 dc _ ' ‘h+i th+i ‘h+i (Y1 l)' are linearly independent over the same respective intervals for case (i) and (ii). The matrix X_ is therefore nonsingular over these intervals and the theorem is proved. Q.E.D. This theorem clearly indicates that a system which is con— trollable with continuous controls is controllable with a sampled— data control over q sampling intervals as long as all of the eigenvalues of the system matris are real. If some eigenvalues are complex, the q + 1 sampling times must all be selected in an interval of length fl/m to insure that controllability will be imax preserved using sampled—data controls instead of continuous controls. 
Since ω_ℓmax is the largest imaginary part of the eigenvalues of A, this constraint implies sampling must occur at a rate at least q times faster than the Nyquist rate (T = π/ω_ℓmax) in order to insure that all q + 1 sampling times occur in a π/ω_ℓmax interval. This constraint is restrictive for some applications, and since it is only a sufficient condition, less restrictive conditions are investigated in Section 6-3.

The result of this theorem guarantees that there will always exist q + 1 sampling times for which a controllable system will be q-sampled-data controllable. Therefore, the following theorem can be established.

THEOREM 6-8 (Sufficient Condition for Sampled-Data Controllability)

If the system (15) is controllable with continuous controls, it is sampled-data controllable.

Proof. From Theorem 6-7 it has been established that there always exist q + 1 sampling times {t_{h+i}}_{i=0}^{q} such that if the system is controllable using continuous controls, it will be controllable using sampled-data controls. The system is then q-sampled-data controllable and is therefore p-sampled-data controllable for all p > q. Thus, there always exists an N and a sampling time sequence {t_{h+i}}_{i=0}^{N} which make the system sampled-data controllable. Q.E.D.

The implications of Theorem 6-8 are quite important. First, if the continuous-time system is completely controllable, then it is sampled-data controllable, which implies there exists a control

    u(t) = u(t_{h+i}) = u_{h+i},    t in [t_{h+i}, t_{h+i+1})

for i = 0,1,...,N-1, specified by a finite set of parameters {u_{h+i}}_{i=0}^{N-1}, {t_{h+i}}_{i=0}^{N} and N, that will guarantee x_{h+N} = 0 for any initial state x_0. Thus, the controllability of the system does not depend on whether the control is actuated with an analog or a sampled-data device. In previous definitions of controllability [36, 41], either the number of sampling intervals and the lengths of sampling intervals were specified, or the number of sampling intervals was specified and the lengths of sampling intervals were free and considered control parameters. In both definitions, the sampled-data system could be uncontrollable when the continuous-time system was controllable. The implicit assumption made in these definitions [36, 41] was that the system model included the sampled-data actuator and the sampling intervals sequence. In this definition, the actuator and the sampling intervals sequence are considered part of the control law. This development of sampled-data controllability provides a more general perspective on dynamic systems and control.

In the following section, the explicit conditions on the sampling times for which A_o or A_c is nonsingular will be investigated.

6-3. Sufficient Condition for the Singularity of A_o and A_c

The sufficient conditions imposed on the sequence of sampling times for the special case where the system matrix has complex eigenvalues may be quite restrictive for some applications where the costs of communicating, storing and processing data are quite high. In these cases, the average sampling rate may be much closer to the Nyquist rate. Sufficient conditions should be established which will guarantee that observability and controllability can be preserved if a sampling process is imposed by design considerations. A rule for selecting sampling times is desired which, if followed, would guarantee the preservation of controllability and observability.
Although such a rule is not derived, a rule is suggested by in- vestigating sufficient conditions for the singularity of matrices X and ~§c° The pattern developed by investigating the conditions ..O sampling intervals sequence must satisfy to make 1A0 and -§c singular provide a basis for suggesting a rule for selecting sampling intervals sequence which will preserve controllability and observability. The following theorems, which are extensions of results by Kalman [36] for periodic sampling, provide a basis for the develOpment of this sampling rule. THEOREM 6-9 Given a system.(L5)which is controllable in the Kalman's sense [36], the system is not controllable with a sampled—data control if t =— k=1,2,... i = 0,1,...,q-1 for any m2, where w are the imaginary parts of eigenvalues of A, Proof Since the eigenvalues occur in complex conjugate pairs 108 1 = + ° 2 02 3‘2 A1+1 = 02 ’ 3‘1 . = _I the apprOpriate rows of AA for th+i wk have the form ’31 ‘h "Az‘h+1 -‘2th+q [e , e ,..., e ] -Az+1‘h "‘2+1‘h+1 —A£+lth+q = [e , e ,..., e ] = [eofithcos t ‘eogth+lco t -e02th+qcos w t ] “2 h’ S “2 h+1’°"’ 2 h+q Since two rows of 'AC are identical,‘)_(C is singular and the theorem is proved. Q.E.D. THEOREM 6-10 Given a system (l3)which is observable in the Kalman's sense [36], the system is not observable with sampled measurements if t =-—- k = 1,2,... for any wl’ where w 's are the imaginary parts of the complex 2 eigenvalues of A, The proof of this theorem is identical to the proof of Theorem 6—9 except that -§0 replaces AC. The results of these two theorems indicate that if the sampling times are all multiples of a basic period n/ml for some 2, then the system will not be observable or controllable using the sequence. This condition does not imply that the sampling criterion described by this condition be periodic as assumed by Kalman. 109 General conditions which will describe all sampling intervals sequences for which -§0 and -§c are singular are not derived. However, from a brief study of the following simple cases and the results of Theorem 6—9 and 6-10, a possible set of conditions can be suggested. In the following set of examples, conditions are derived on a set of any two sampling times in the sequences which together could cause X or AC to be singular. A matrix .A is _0 A0 or -§c' (1) Assume that m(A) has two complex eigenvalues and thus the used to denote either matrix .A has the form C s (p+3w)t0 (p+jw)t1 e e Ix u (o-jw)t0 (o-jw)t1 e 8 J \ This matrix is singular whenever (ii) Assume m(A) is of degree 3 and has eigenvalues p + jw, p - jw, and O . The 3 by 3 _A matrix will be singular if any two of the columns of _A are dependent. Therefore, determine the conditions for which 110 r ‘ f ] (0+Jw)t12 (0+Jw)til e e (o-jw)t. (o-jw)t. e 12 = c e 11 (21) 0‘12 0‘11 Le e J ( , for some real c. A condition 2kn . . ‘12 ' ‘11 ' w O :-11 < 12-3 2 is imposed so that r 1 ’ ~ +' _ ~ (0 Jw)t12 0(t12 ‘11 (0+Jw)til e e . e (o-Jw)t12 0(ti2-til (p-Jw)ti1 e = e . e 0‘12 “(‘21"11 0‘11 e e . e k a L . The second condition 0:0 (iii) Assume m(A) is of degree 5 with two pairs of complex con- jugate eigenvalues and one real eigenvalue. The matrix .A is singular if any two of the columns of .A are dependent. Therefore, determine the conditions for which 111 r ‘ ' ] (91+le)ti2 (91+3w1)‘11 e e (91’3w1)‘12 (01'3“1)‘11 e e +' ( +- (22) (92 J“’2)‘12 = c 02 sz)‘11 e e (92’3“2)‘12 (92’3w2)‘11 e e 0‘12 0‘11 e e ' L . . l The first condition 2k 0 2k 0 t. — t. 
= 1 = 2 12 1l “1 “2 so that I T ' T (“1+3wi)‘12 “1(‘12“11) (91+3w1)‘11 e e e (91‘3“1)‘12 91(‘12't11) (pl-jwl)til e e e (92+Jw2)‘12 _ 92(‘12“11) (02+3w2)‘11 e “ e e (92'sz)‘12 92(‘12"11) (92’392)‘11 e e e 0(t. ) 0‘12 0(‘12 11) 1‘ e * e e 1k 4 g J The second condition 92 = D]. = 0 ”(‘12-‘11) _ e““12"11) In summary, Theorem 6—9 and 6-10 indicate that observability and controllability can not be preserved if the q + l sampling 112 times {t .}3 are chosen to be a multiple of the same period h+1 i=0 T when T =n/wl 2 = 1,2,...,k The results of example (i) - (iii) indicate the observability and controllability may not be preserved if for = 0,1,...,q—l. Thus, these two conditions suggest that 11’12 a sufficient condition which will preserve controllability and observability is to select all sampling times so that t -t. #51 (1) = 0,1,...,q-l 11 1 il,i2 for all integers k and all mg, 1= 1,2,...,q. This condition has not been established as a sufficient condition for the singularity of -§c and A0 and thus is purely a hypothetical condition suggested by the results in this section. The establish- ment of a sufficient condition for the invertibility of .A is an important result because it provides guidelines for the system designer. Thus, the derivation of this sufficient condition is an important topic for further research. CHAPTER VII THE INFINITE TIME REGULATOR PROBLEM The infinite—time periodic sampled—data regulator was formulated as an extension of the finite—time problem [2]. The existence of an optimal feedback control and the form of this infinite—time control law were both established formally in a more recent publication [55]. The convergence of the finite-time feed- back control law to the infinite—time feedback control law was also formally provem in this latter publication. However, these results were established only for the case of periodic sampling where the length of the sampling period is specified. The infinite-time sampled-data regulator problem is formulated in this paper for the case where both the number of sampling times and the lengths of the sampling intervals are considered control parameters. The existence of an optimal closed lOOp sampled-data control law is proved for the cases where the number of samples are both finite and infinite. Computational algorithms for calculating the Optimal control are proposed for both the case of finite and infinite number of samples. 7.1 Problem Formulation Consider the linear system 33(t) = _ 35(t) + g 30:) (23) 113 114 1(t) = _ §(t) with randomly distributed initial state €{xo} = £0 (24) “(£0 -§0)(§0 -€__0)'} =3 where x(t) is the n-dimensional state vector, 3(t) is the r— dimensional control, and y(t) is the m-dimensional output vector and .é:.§9.9 are compatible time-invariant matrices. The sensor provides measurements 10:1) = g yup (25) at the sampling times {ti}§=0 that are not specified and are con- strained to satisfy 0 < Tmin 5 ‘1+1 ' ‘1 = T1 3 Tmax (26) where N is unspecfied and satisfies (NiN (27) The control actuator is also assumed to be a sampled-data device and therefore the control .E(t) satisfies _q(t) = 3(t ) = u t 6 [ti, t ) (28) i —i i+l for i = 0,1,...,N—1. This sampled-data control is specified by _ N the control sequence {Bi}:=0’ sampling time sequence {ti}i=0 and the number of sampling times N. The initial time t0 6 (-m,w) 115 and the terminal time (tf = tN = 00) are specified. 
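Before the performance index is introduced, it may help to see how the state propagates across one sampling interval under the piecewise-constant control (28). The sketch below computes Φ(T_i) = e^{A T_i} and D(T_i) = ∫_0^{T_i} e^{A t} B dt with the standard augmented-matrix-exponential identity and steps the state through an aperiodic interval sequence; the double-integrator data, the interval values, and the control values are illustrative assumptions only, not data from the thesis.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, T):
    """Phi(T) = exp(A*T) and D(T) = integral_0^T exp(A*t) B dt, obtained from one
    matrix exponential of the augmented matrix [[A, B], [0, 0]] (standard identity)."""
    n, r = B.shape
    M = np.zeros((n + r, n + r))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]            # Phi(T), D(T)

# Illustrative data only: a double integrator, an aperiodic interval sequence
# satisfying T_min <= T_i <= T_max, and a piecewise-constant control sequence.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Ts = [0.5, 1.5, 0.8, 2.2]                  # sampling intervals T_i
us = [np.array([1.0]), np.array([-0.5]), np.array([0.2]), np.array([0.0])]

x = np.array([1.0, 0.0])                   # x(t_0)
for T_i, u_i in zip(Ts, us):
    Phi, D = discretize(A, B, T_i)
    x = Phi @ x + D @ u_i                  # x_{i+1} = Phi_i x_i + D_i u_i
print("state after the final sampling instant:", x)
```

One matrix exponential per interval is all that is required, which is why unequal sampling intervals enter the discrete-time model only through the interval-dependent matrices Φ_i and D_i.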
The design objective is to minimize the error x(t) with minimal control energy and minimal cost for implementing and operating a sampled-data control. A system performance index is chosen of the form

    S = J + C                                                               (29)

where the control performance has the form

    J = E_{x_0} { ∫_{t_0}^{∞} [x'(t) Q x(t) + u'(t) R u(t)] dt }            (30)

and the cost of implementation has the form

    C(T,N) = Σ_{i=0}^{N-1} α e^{-β T_i}                                     (31)

The matrix Q is a positive semidefinite symmetric matrix and R is a positive definite symmetric matrix. A cost for implementation is adjoined and represents the economic costs for implementing and operating a sampled-data control law. This cost for implementation can be considered to represent the cost for transmitting and storing the optimal sampled-data control law. It is similar in form to the costs for sampling used in the analytic derivation of adaptive sampling rules [13] and the optimal periodic sampling rate for a feedback control problem [56].

The control problem becomes:

    Given the linear system (23, 24) with measurements (25), determine the piecewise constant control (28), specified by the control and sampling time sequences {u_i}_{i=0}^{N-1}, {t_i}_{i=0}^{N}, and N, that minimizes the performance index (29) and satisfies the sampling constraints (26, 27).

This problem can not be solved directly due to the constraint (28) on the control. Therefore, the problem is transformed from a continuous-time one into a discrete-time one by the same technique used in Chapter III. The sampled-data problem can be transformed into an equivalent discrete-time one by integrating the state equation (23) and the cost (30) over each sampling interval T_i = t_{i+1} - t_i:

    x_{i+1} = Φ_i x_i + D_i u_i                                             (32)

    S = E_{x_0} { (1/2) Σ_{i=0}^{N-1} (x_i' Q_i x_i + 2 x_i' M_i u_i + u_i' R_i u_i) } + C(T,N)      (33)

where x_i = x(t_i) and

    Φ_i = Φ(T_i) = e^{A T_i}

    D_i = D(T_i) = ∫_0^{T_i} Φ(t) B dt

    Q_i = Q(T_i) = ∫_0^{T_i} Φ'(t) Q Φ(t) dt

    M_i = M(T_i) = ∫_0^{T_i} Φ'(t) Q D(t) dt

    R_i = R(T_i) = ∫_0^{T_i} [R + D'(t) Q D(t)] dt

The matrices Φ_i, D_i and Q_i are in general time varying, even though Q and R are constant, because the sampling intervals are not equal. The matrix Φ_i is nonsingular because it is a fundamental matrix. Moreover, it is easily shown that Q_i (R_i) is a positive semidefinite (definite) symmetric matrix because Q (R) is a positive semidefinite (definite) symmetric matrix.

The discrete-time problem becomes:

    Given the sampled-data system (32, 24) with the measurements (25), determine the control and sampling interval sequences {u_i}_{i=0}^{N-1}, T = (T_0, T_1, ..., T_{N-1}), and N that minimize the cost function (33) subject to the sampling constraints (26, 27).

7.2 Computational Algorithm

The existence of an optimal control law for the optimal sampled-data regulator problem is now established both for the case where the number of samples is finite and unspecified and for the case where the number of samples is infinite. In both cases, the existence and uniqueness of the control is first established for the case where the number of samples and the lengths of sampling intervals are specified. The existence of an optimal sampling interval sequence which defines the optimal sampled-data control law for a specified number of samples is then proved. In order to establish the existence of a control for these cases, three separate sets of definitions of controllability and stabilizability are required. The definitions are stated for the following three conditions:

(1) where both the number of samples and the lengths of sampling intervals are specified;
The definitions are stated for the following three conditions: (1) where both the number of samples and the lengths of sampling intervals are specified; 118 (2) where the number of samples is specified but the lengths of sampling intervals are unspecified and considered control parameters; (3) where both the number of samples and the lengths of sampling intervals are considered control parameters. The first set of definitions are for the case where both the number of samples and the lengths of sampling intervals are specified. Definition. A system (23) is said to be controllable on a sampling interval sequence {t.}? if for any initial state x there i i=0 -o exists a control sequence {2i}§;0 which specifies the sampled- data control (28), such that AP = 0, Definition. A system (23) is said to be stabilizable on a sequence {th+1}i;0 if the part of the system, which can not be controlled . p-l . by selecting {Ei}i=0’ is stable. The following set of definitions of controllability and stabilizability hold for the case where the number of samples is 1 0 and sampling specified (N = p) but the control sequence {Bi}:; time sequence {ti}‘i)=O are control parameters. Definition. The system (23) is p—sampled—data controllable at tO if for every initial state E0 there exists a control sequence p-l . p . .f {Bi}i=0 and a sampling time sequence {ti}isO’ which speCi y the sampled-data control (28), such that é? = 9, Definition. A system is p-sampled—data stabilizable if the part of the system, which can not be controlled by selecting {Bi}:;0 and {ti}‘i)=O for a sampled-data control (28)’is stable. 119 The final set of definitions hold for the case where both the number of samples and the lengths of sampling intervals are control parameters. Definition. The system (23) is sampled-data controllable at to, if for every initial state x0, there exists a finite number of N- samples N, a control {Ei}i—O’ and a sampling time sequence N . {ti}i=0’ which specify the sampled—data control (28), such that we- Definition. A system is said to be sampled-data stabilizable if the part of the system which can not be controlled by selecting N, (311%, and (t 1 1: i 2:0, for a sampled-data control (28) is stable. The existence of an Optimal sampled—data control law is first proved for the case where the number of samples is finite and unspecified and then for the case where the number of samples is infinite. This theoretical develOpment is presented not only to establish the existence of solutions, but also to provide a frame- work for the computational algorithm which follows development. 7.2.1 The Infinite-Time Problem with a Finite Number of Samples The existence of an optimal sampled-data control law is proved in the following theorem for the case where the number of samples is specified. This result is proved by first establishing the existence and uniqueness of the closed lOOp control law for the case where both the number of samples and the lengths of sampling intervals are specified. 120 Theorem 7.1. An optimal closed-loop control exists for the infinite-time sampled—data regulator if the system is p-sampled—data controllable or p—sampled-data stabilizable. Prggf; Consider the problem first for the case when E0 is specified. A feasible solution {Bi}::0 exists for each {tn+i}§=0 for which the system is controllable or stabilizable. If the system is p-sampled-data controllable there exist sampling interval sequences .2 e [§,h] for which feasible control sequences {Bi}:;3 exist. 
Therefore, it follows from Theorems 1 and 3 [22, pp. 137 and 133] respectively that there exist unique optimal control and trajectory sequences p—l p—l {31(E)}1=0 {§i+l(l)}i=0 for each T_£ [_Jb] for which the system is controllable and stabilizable. For each feasible _1, the necessary conditions [6, 7], can be solved to obtain the control law 2, <1) = £59551) where the ‘r)(r1 dimensional feedback gain matrix satisfies 1M! + [R. + DZK. D ]-1D'K 0 —1 —i -—i—i+ -§i‘:) =-31 1—1 ~—1~i+1—1 The matrixes Ei‘l) satisfy the matrix Riccati equation 1 ‘1 v 51 = L + Aim K, D] DR 10 _ ' 1 -—1+1 -—1+191L31 +-9151+1—1 —1—1+1 —1 for i = p-l,p-2,...,l,0 with boundary condition 121 K = O —p ._ _ '11 _ "'11 - where g1 31 -2iRi M1 and £1 0i - fliBi-Ei' The assoc1ated Optimal cost can be expressed as a function 8(1) defined over each 3.5 [Efihj' This derived cost function can be expressed as _—————- 80(2) = {S = %-x K(T)xO :a < T < b} + C(T, N) k The optimal sampling interval sequence To which minimizes 80(2) over this set of feasible .2 specifies an open loop control because Tn depends directly on the initial state x -o —0 Now letting E0 be randomly distributed and letting {Ei(§o)}:;0 be any closed loop control law the system performance (18) becomes 1.: 51(T) = EX {—'X K (T) } = 1 2 —o—o E0 2 _0 (ngoyg) + E; K t ) + C(T,N) since the following exchange of operators is valid p-l min {E {l-x' Qx + %- Z (x!Q.x, + 2x!M.u, + u!R.u.)}} _ x0 2 —-p——p i=0 _I—I—l -—i—i—i -—i-i-i {u (X )}p 1 1 p“ = E {min {—-x'Qx + —- Z (xEQ.x, + 2fo,u, + u!R,u,)}} x 2 —p—#p 2 i=O-—i—r—i -i—i—i ‘—I—I—l —o p-l {El-1‘50) }i=0 Since there exist feasible solutions if the system is p—sampled- data controllable or p-sampled-data stabilizable and since the performance index Sk(T); k = 0,1 is non-negative on each feasible sequence of sampling intervals, there exists an infinum Tf. Since k the set T_e [EJEJ is closed and bounded, there exists an optimal 122 * solution -Ik for this derived problem. Thus, there exists a solution A N-l * N * {Ei(lk)}i=0’ ‘§i(—T—k)}i=1 ’ and 3k for the optimal infinite-time sampled-data regulator if the system is p-sampled-data controllable or p-sampled-data stabilizable. The control law is closed loop when the initial state is randomly dis- tributed (k = 1) because the optimal sampling interval sequence :1 does not depend on the initial state. An Optimal control law will not exist if the system is not p-sampled—data controllable or, p-sampled-data stabilizable for a particular value N = p. However, if N is not specified and the system is controllable with continuous controls, it has been proved in Theorem 6.8 that the system will be controllable with sampled— data controls (28) for all N :_q where q is the order of the minimal polynomial of the system matrix ‘A. Therefore, an optimal infinite-time sampled-data control exists for all p :.q if the system is controllable with continuous controls. The maximum number of samples Nmax should be chosen greater than q in order to insure that an Optimal sampled-data control exists for every system for which an optimal continuous—time control law exists. The algorithm, developed to compute the Optimal sampled— data control law for the tracking problem can also be used to compute the control law for this problem with one slight modification. Since the control interval is now infinite, one of the sampling intervals would be infinite. 
Therefore, this control interval should 123 be selected as large as possible, consistent with the word length of the computer being used to solve this problem. Since this problem is related to the finite time problem and since extensive computational results have already been obtained for that problem, no effort will be made to present computational results for this case. 7.2.2 The Infinite—Time Problem with an Infinite Number of Samples The existence of an optimal closed—loop sampled—data control law is now proved for the case where the number of samples is in- finite. This result is proved by first establishing the existence of an optimal open loop control and then establishing the existence and uniqueness of a closed loop control law for the case where the number of samples and the lengths of sampling intervals are specified. These two preliminary results are presented in order to indicate the theoretical difficulties in proving the existence and uniqueness of this closed loop control law. The existence of an optimal open loop sampled—data control law is now established. THEOREM 7.2 An Optimal Open loop sampled—data control exists if the system is controllable or stablizable on the sampling time sequence {t.}f chosen. 1 i=0 If the system is sampled—data controllable or stabilizable for the sampling times sequence chosen, the system can be driven to the origin and therefore the error energy and control energy are both finite. Therefore, there exist feasible solution sequences 124 G) 1 {0.}? §i+1 i=0 -—1 i=0 { for the sampling interval sequence .3 . = T (TO,T1,...,Ti,...) chosen if the system is controllable or stabilizable on this sequence. Since this set of feasible intervals is compact and since the perfor- mance index is bounded above and non—negative on this set, there J. exists an Optimal control sequence {Bi}:=0 which minimizes the system performance (33) if the system is controllable or stabilizable on the sequence .T chosen. Although an Optimal open loop sampled-data control exists for each sampling interval sequence _T for which the system is con- trollable or stabilizable, the control is impractical because the entire infinite control sequence and sampling interval sequence must be computed and stored. The cost of implementation would make the system performance high and thus would make the Open—loOp control sub— optimal. Since, the system is assumed observable, a closed loop control law is possible. If the gain of this closed loop control law is time invariant, the cost of implementation will be relatively low and the closed loop control law may be quite practical. There— fore the existence and uniqueness of the closed loop control law is established. In the previous literature, the infinite—time sampled-data problem was only considered for the case of periodic sampling where the sampling period was specified. The existence and uniqueness of this closed loOp control law was established [2] by first assuming 125 the existence of the optimal infinite time control law and then assuming that the extension of the finite—time control converged to the infinite time control law as N and tf = tN approached in— finity. In a recent paper [55], the form of this infinite-time closed loop control law was established and the existence and unique- ness of the closed loop control was proved. Moreover the finite— time control was proved to converge to the infinite-time control as N approaches infinity. 
These results have only been established properly for the case of periodic sampling and have never been proved for the case of aperiodic sampling. The existence and uniqueness of the Optimal closed—loop control law for aperiodic sampling is not proved here because (1) all previous work (except [55]) on the infinite time problem have always assumed that the finite-time feedback control law can be extended to the in- finite time case; (2) the results on the periodic sampled—data control law should be enough to suggest that the finite-time aperiodic sampled-data closed lOOp control law can also be extended to the infinite—time case; and (3) the proofs for the aperiodic case are tedious and beyond the sc0pe of this work. The existence and uniqueness of the extension of the finite time closed loop control law is now proved. THEOREM 7.3 An Optimal closed-loop sampled—data control law exists and is unique if the system is controllable or stabilizable on the sampling time sequence chosen. 126 Proof. Consider the case first when the initial state A0 is Specified. The existence of an Optimal open-loop control was proved if the system is controllable or stabilizable on the infinite sampling interval sequence chosen. Moreover, the infinite-time closed loop control was proved to exist for a finite number of samples and thus exists for an infinite number of samples. Moreover, it is assumed that the finite-time control law converges to the optimal infinite time control law as the number of sampling—times in the sequence increases to infinity. Thus, the infinite time control law has the form: 21(2) = .91 (E)§i (I) where the r;<'n dimensional feedback gain matrix satisfies _ “l v v '1 1 91(3) ’ R1 fli + [31 + 2151+121] 2151+191 -l ' - D D'K O . V The matrices El. 1 _i+l-I)‘i [Bi + RiEi+l—i —1—j_+l —i for i = N-l,N—2,...,l, 0 with boundary condition with Eva where N = m, 0. = 0, - D R—lM! and F, = Q. — M.Rf1M The T -1. —1 —i—i-i -1. —1 —1—1'-i' associated optimal cost can be expressed as a function 8(2) over the set T_e [2,b]. This derived cost function has the form x K (T)x :a < T < b} + C(T) —o—o —-—o ————— -— NHF‘ 30(3) = {ST = The Optimal sampling interval sequence Io which minimizes 80(1) 127 over this set of feasible T. specifies an Open 100p control because * T . . . . . —0 depends directly on the initial state x0 Now letting go be randomly distributed and letting {Ei(§0)}i=0 be any closed 100p control law, the cost function becomes x K (T)x -1r1>--o NIP—I _ _ _1_ . 51(1) - E5: } - 2(Tr{50y_} + a K a > + cg) since the following exchange Of operators is valid 00 min {B {l 2 (x',Q,x, + 2x',M.u, + u',R,u.)}} w x0 2 i=O-—r—i—i -r—r—1 -r—i—i {u.(x )}. -1 —0 i=0 = E {min f %- Z (x!Q,x, + 2x!M.u. + u!R.u.)}} m . -—i-i-i -—i—i—i -—i-i—i —0 {n.(x )}. i=0 —i —0 i=0 The Optimal sampled—data control law obtained by minimizing 81(T) over ‘T 6 [a,b] is closed loop because the infinite sampling interval sequence does not depend on the initial state go. The existence of the control law has been established under the assumption that the extension of the finite-time control law is the Optimal infinite time control law for the case where the number of samples and the lengths of sampling intervals are specified. The uniqueness Of the control is proved by establishing the uniqueness of the sequence {E1}? . Two distinct sequences i=0 {E.}? and {K }? are assumed and are now shown to be identical. 1 i=0 '—1 i=0 Let all n x n matrices form a metric space which is complete. From Theorem B [29, pg. 
47] this metric space forms a normed linear space with matrix norm M“Dt1=fir‘ The matrix sequence {Ei} and {£1} satisfy the Reccah equation and therefore A ‘7 _ r = '7 — 'h 51 51 g151+191 91—1+1—9—i where - -l -— -l = + 91 [l- P-i‘-Iii'-Qi—K-i+l] ‘gi A = + 9i [l- D 1R11D1§i+110i This matrix difference can be expressed as . -1,-1— h+l][1+§, DR D] }e '_ A A A —l K - K = + . _i .1 e {[I K. D 1.11 D'. ][E 1+1—i—i _i _1 i —i —- —1++l-—i‘—i i+l - The term in the parenthesis is just a matrix similar to .E, — K, and thus has the same eigenvalues as -—1+l -—i+l Ei+l - l<--i+l [28]. Since §i+l _-Ei+l is symmetric and n x n , the norm of this matrix on the normed linear space is the maximum absolute value of the eigenvalues of the matrix. Therefore . -1 ._ z 3 ne+éfiaayinsfl-ifiuu+sfiarL%1W = H§i+1 ' 5&1 H The norm of this matrix difference in the normed linear space has the form H11. - 11;, u sHQiH-HEM - gun-HE,“ where the norms 1191“ < 1 HM < 1 if the system is controllable or stabilizable on the sampling interval sequence .1. Then the difference matrix must approach zero as i approaches zero. Thus, there is a unique sequence {5i}:=0 and a unique control law if the system is controllable or stabilizable on the infinite sampling interval sequence I, The existence and uniqueness of the Optimal closed loop control law was proved formally for the case of periodic sampling [55]. The finite-time closed loop control law was then proved to be the infinite-time control law as the number of samples approach infinity. Thus, the assumption that the extension Of the finite- time control law is the infinite-time control law is valid for the case Of periodic sampling. The control law for the case of periodic sampling has also been proved time invariant rather than time varying. The following theorem is stated to establish the form and the existence and uniqueness of the periodic sampled-data control law. THEOREM 7.4. An optimal closed loop sampled—data control * k u (T) = -G(T)x.(T) —i —- —i where 130 1 em = My i+ [3 +252'1’1 Egg exists and is unique if the system is controllable or stabilizable on a periodic sampling time sequence with period T. The system performance is _i . SO(T) - 2 xo§_x0 + L(T) (34) where ‘E satisfies the Riccati equation E=£+@K-EEW+EEEEM9 The proof of this theorem is contained in the literature [55] and is not proved here. The system performance (34) becomes kflh‘ 31(1‘) = (53715;) + Tr{§ _w}) + C(T) if the initial conditions are randomly distributed as assumed pre— viously. The assumption is made, as pointed out earlier, in order k to make the Optimal sampling interval sequence El independent of the initial state go. The following theorem establishes the existence of an Optimal closed-loop sampled-data control law. THEOREM 7.5 An optimal closed loop sampled—data control specified by {3:}m and If exists if the system is sampled-data controllable or sampled-data stabilizable. In Theorem 7.3, the existence and uniqueness of the Optimal control was proved for each 2.5 [a,b] for which the system is l3l either controllable or stabilizable. Since the control is unique a derived performance index 81(2) was defined over I 6 [3,9]. Since there exist feasible solution {2, {Bi}:=0} and since Sl(T) is non-negative, an infinum exists. Since the set of feasible sampling intervals sequences T_e [3,b] is a compactum, an optimal sampling interval sequence 2: exists. Thus an Optimal control sequence {3:(T:)}:=O and trajectory sequence £§i(:1)}:=0 exists. 
The control law is closed loop because the optimal sampling interval sequence I: does not depend explicitly on the inital state. Computing the Optimal sampling interval sequence is in gen- eral impractical due to the high cost of computation storage, and hardware implementation. Thus the Optimal sampling interval sequence must be highly structured and must depend on the form of the cost of implementation chosen. A periodic sampling criterion has been heuristically established as optimal if the cost of implementation has the form 8 -BT. i C(T) = as (35) i "M 0 It is quite apparent that other structured sampling criteria may be Optimal if other forms for the cost of implementation are prOposed. The optimal sampling period T* for this infinite-time optimal sampled-data regulator problem with cost of implementation (35) can easily be computed using a one dimensional search algorithm, such as Fibonacci search. The use of such an algorithm on several example problems is a subject for future research. CHAPTER VIII CONCLUSIONS AND FURTHER INVESTIGATION The principal contribution of this thesis is the develop- ment of a new framework for the design and analysis of sampled- data control systems. The formulation of the sampled—data control problem is extended by considering both the number of samples and the lengths of sampling intervals as control parameters. A system performance index is proposed which measures not only control performance but also the cost of implementation. The sampled-data control is generalized by assuming polynomial form over each sampling interval. The controllability and Observability Of these sampled- data control systems are defined for the case where both the number of sampling times and the lengths of sampling intervals are control variables. It is established that a necessary and sufficient con- ciition for p-sampled-data controllability and Observability can be )> 133:7; [u(t) - gamma N-l -BT. -_§(t))> + ]dt + z 66 1 (Bl) i=0 Since y(t) = Cx(t) and .F is symmetric 1 _ 1. v“ 1 . - . 2,g(y_(t> - §(t))> + ]dt O N-l 1 1 - _ v v _ v _ _ ‘ 1:0 [2 331915-1 +351331-31 + 2 l31-13-131 h15-1 -5131] t + if N 2'(t)Q 2(t)dt 2 c — -—-— O T, , M, = f 1 e5 IO D(T)dT *1 o -—-— T, , where Q. = f-1 eé TO eAIdT _1 O __ r _ _~ Ti R . RTZk 3 RIZk 2 .. v " "' " — _ Bi - f0 [9 (T)Q.R(T) + . Eg2k.1 ET2k—1 BTZk R J _ i , AT, —i [O §_(ti + I)g”9 e— OT (r1 = I -§i f0 '3 (ti + T)Q_C_D(T)dr Substituting (B2) and (B3) into (Bl) gives N-l -BTi 1 , 1 N—l = __ V _ + __ V + V 5 J0 + .2 “e + 2 §N£-§N‘EN§N 2 .2 (£191E1 2515131 1 = O 1 = O ' - - + AME, 21111 2819.1) tN where J = l g}(t)§_g(t)dt 0 NHP‘ . .1 §_(tN)§_g(tN) + 2 It (B3) ]dT (B4) 145 C. KUHN-TUCKER NECESSARY CONDITION OF OPTIMALITY FOR QUADRATIC PROGRAMMING PROBLEM The canonical form of the quadratic programming problem is: . . . l Minimize — + 2— subject to the constraints: .§.X ='g and a < v < 8 (C1) The Kuhn-Tucker necessary conditions for this problem are stated in the following theorem. THEOREM If g_ is a feasible solution to problem (C1), then .2 is an Optimal solution if and only if there exists a vector 3' = (w',...,wm) e Em such that for i = 1,2,...,n <1111> - - 11 = 0 if 9.1 < 11 < 11 i i _ i - (511,2) - a 1 0 if a - i <1,.1> - <1i.1> - 11 : 0 if 11 = 11 where for i = 1,2,...,n, 11 is the ith column of -§’«fli is the ith column of g, and d} is the ith component of d. 146 D. 
NECESSARY CONDITIONS FOR OPTIMALITY OF THE SAMPLED DATA TRACKING PROBLEM The sampled-data tracking problem can be put into the canonical form (Appendix C) of the quadratic programming problem as follows. The cost functional N-l -8T 1 l N-l = _ t " __ v I J J0 + .1 ae + 2 x F EN thL + 2 Z (x Qiéi + 2xiM u i=0 =0 ' — — + 313131 2519—1 25151) can be represented as N-l -BT. 1 J = J + 2 ae 1 + 7 + (D1) 0 i=0 2 _ 7 — _ _ The (n + kr)(N + 1)* vector 3_ has the form , V V V V 44) 41-1 21-1 1, 91 and the (n + kr)(N + 1) square matrix Q. is ( x r) 90.. 21 2: 911-1 9N K I As the dimension is concerned, k = l for step, k = 2 for ramp, k = 3 for parabola. 147 where the (n + kr) square matrices Qi’ i = 0,1,...,N are defined as follows . M. A "‘1 _l g} = for i = 0,1,...,N-1 1 M', R —l —l i‘ 9 E) = —N 9 0 The (n + kr)(N + 1) vector d is v _ _ _ _ _ - fl “ [ I109 E090°09 LIN-1, ‘g‘N-l’ ENDQJ The state equations and the initial condition 10 = .5. 51+1 = $133, + 9.131 1 = 0,1,. ,N—1 can be put into the form as BX = 9 (D2) where the n(N + l)X(n + kr)(N + 1) matrix .R is / - l-9n(kr) -$0 - D I O _0 __ __ 121-21 _I. .9. 3 = 9111-1 I311—1 —I- 9 5 )1 148 and the n(N + 1) vector 9 is c'=[§g... 0] The inequality constraints g — <3]??? - 51-1 = O This condition can be stated in matrix form as follows ”‘1 m I [o i-9=9 because a = -m B = m V i This vector condition can be shown to be equivalent to the following set of conditions. = I 1 __ I ‘21 ¢'pi+1 + an, + M,u. h. 0 = D' + Mix _1_. - V -kr —1P—1+1 + R'”' 51 i ‘—1_1 for i = 0,1,...,N—l and P—1\1=£351~1"—}11\1 Thus the necessary conditions for the problem are the equations stated in Theorem 1. (D4) (D5) (D6) 150 E. DERIVATION OF THE CONTROL LAW The control law can be solved for uniquely using (US) to obtain u =-R-1(M'x + D'p - g') (El) —' ~1 -—1—1 ~1~1+1 1 ssince Bi is positive definite for all 1. Assuming the (Lagrange) multipliers have the form (verified in [241) Bi = K.x + k, (E2) where Ei’ ki are to be determined. Using (E1), (E2) eliminates Hi from (7) and (D4), followed by rearranging terms with the help of the well known matrix identity (1 + A B')‘1 = 1 — A(I + B'A)—1B' "‘11 _‘-' ’11 ""I' ‘_—‘ ‘— firmly yields x = 0 x + D 8.1("' — D'k ) (E3) ~1+1 41—1 —1—1 21 -—1—1+1 O = 0' + F x + M R-1 ' - h' (E4) *1 —1B1+1 «—1—1 —1 1-51 -—1 for i = 0,1,...,N-l where 0 = ¢ - D,RTlM! _‘1 *1. “1-1 *1 0 = (I — D s'lD'K )0 —1 —— —1=1 —1—1+1-—1 151 U) ll '-i (R'1+'2iEi+1Di ) -1 ' Comparing (E2) with (E4) and substituting (E3) for §i+1 give K,x+k. =(OiK ei+ri,)x+(cfg.-h',+0!.k) (ES) —- 1+ —i ~i-1 —i l— i—ifl where G: = (IvLRTl + C'K D S l) -i -i-i i4—1——i--i Since (E5) must hold for any choice of initial state g and since -5i’-Ei does not depend on .EI (E5) must be satisfied for all xi. This implies = 1 E6 Iii 9151+191£1 ( ) = I I _ I 1 51 E1—g-1 fl1 +9111(1+1 (U) Substituting (E2), (E3) into (E1) and rearranging terms finally Obtain -1 - “ E1x1 §1 (51 p1131M) (E ) 152 E. DERIVATION OF THE OPTIMAL COST The discrete form of cost functional is (from BA) N-l -8Ti l A N-l _ __ v _ S JO + .2 ae + 2 EN- EN hlxl + .2 J1 (F1) 1=0 i=0 where l I . t y . J. = -(x,Q,x, + Zx,M.u, + u.R.u, - 2h,x, - 2g,u,) 1 2-—1»1—1 —1—1—1 1—1—1 -—1—1 -—1—1 Replacing -Ei by use of (El) obtains l ' . '1 q l 1 I -1 ' = _ . + T + ._ . :1 _. 
J1 2313151 P—1+1[9—131 QiBi+1D 5251—1151 51 2D~151 - g R-lg') (F2) 1‘1 1 Using (E1), (7) can be rearranged as D R-lDflp = —x + G X + D R.1 ' (F3) -—1—1 —1 1+1 ——1+1 —~1——-1 ~1-1 51 Replacing the bracketed term in (F2) by (F3) substituting u, for using (E1) and using the rearranged (E4) -1 = ' + ' + « ' - (F2) becomes 1; v I v -1_ "'1 v '1 v _ 1 ’ 2[(Rix—1 P—1+13‘-1+1) + (1)—1+19151 52151 + 513131 >311 5151] L; l ) - u'O' - hix (F4) .1. v _ . 2 [(131331 E1+1-“31H —1-Q1 —1] Replacing Bi with (E8), rearranging (E7) and substituting (E3), the following expression becomes 153 V ' I u,q, + n,X, = ,u, + h,x, ~1L1 -i~1 ‘gl—l -—1*1 = A __ I _ I _l I (91—51“ 1‘1) 331 51—911 9151+1 + 51—8—1 51 = I _ I _ I I "'1 I _ I (5'+1§1+1 5151) + (g1 9341) —S—1 (51 9151“) (F5) Therefore, putting (F5) back into (PA) obtains l V ' J1 ’ SUP—1351 ’ P—1+13‘—1+1) ' (51+1§1+1 " 51331) I I I '1 I I ' -(gi-2k )s. M > o (b) ‘11 + 1‘1 _ V E 1 :5 (c) 11 - an : MM (d) flaAh = Iol'HAH a is a scalar. n and HXH = ( Z I .l2)l/2 H . 1 i=1 (2) if A is hermitian ”A“ = 0(A) where p(A)=:max lxil is the 1 spectral radius of .A. (3) If .A is positive definite, then pi(A) > O V i. This can be expressed as ‘A > 0. (4) If ‘A > 0,.X < 0 then “A +12“ < “A“ If .E > 0 then ”(I + §)—lu < 1 (5) If (A > O, B > 0 then A-E > O (6) If .9 is a triangular (upper or lower) square matrix with 0, then Hegtu = 1. C.. 11 (7) Two similar square matrices which have the same characteristic polynomial, will have the same values of norm. 111111111111111\1111111111s