ILL-POSED PROBLEMS IN OPTIMAL CONTROL SYSTEMS AND A METHOD TO SOLVE THEM

BY

Lili Hedayatolah-Tabrizi

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical Engineering and System Science

1983

ABSTRACT

ILL-POSED PROBLEMS IN OPTIMAL CONTROL SYSTEMS AND A METHOD TO SOLVE THEM

BY

Lili Hedayatolah-Tabrizi

It is well known that many important practical automatic control problems require solving a set of differential and/or integral equations and obtaining the extrema of some functionals. In considering these mathematical problems, there naturally arises the question of the correctness of their formulation.

This thesis develops the concept of correctness for an extremely broad class of mathematical and optimal control problems presented in the general form of either an operator equation or a minimization problem. The concept of an approximate solution of such problems that is "stable" to small changes in the initial data is defined, and methods of constructing solutions that are easily processed on a computer are examined. Precise definitions of well-posedness and ill-posedness in the sense of Tikhonov and of Hadamard are presented for a general class of mathematical and optimal control problems. Examples of ill-posed problems relating both to the basic mathematical apparatus and to a broad class of applied problems are presented.

Well-posedness and ill-posedness are discussed for a broad class of applied optimal control problems, such as time-optimal and fuel-optimal linear time-invariant regulator problems. Ill-posedness of "uncontrollable systems" is studied, and a particular class of optimization problems in which the system equations are linear in the control variables is examined in detail. The conditions which characterize singular control are derived for various cases of optimal control problems, and ill-posedness is studied for the singular problems.

Different approaches to the solution of ill-posed problems are considered. The method of solving an incorrect problem depends basically upon the presence of additional information on the properties of the solution of the problem. The "regularization" method makes it possible to construct a sequence of uniformly converging solutions without making a priori assumptions about the solution belonging to a given compact space. The "regularization" method is discussed for different types of ill-posed optimal control problems of an applied nature, such as linear time-optimal regulator problems, singular linear quadratic regulator problems with free final state, and more general forms of optimal control problems in which the cost functional and the state variables are continuous functions of the control variable.

To my Mother and in memory of my father

ACKNOWLEDGMENTS

I wish to thank Dr. Albert A. Andry for proposing this topic and for his encouragement, enthusiasm, and stimulating discussions. I am most grateful to Dr. Robert O. Barr for his valuable guidance and continued support throughout my graduate study at Michigan State University. I would like to thank Dr. G. L. Park for his continuous material and moral support in the entire course of this study.
My gratitude extends to the other members of my committee, Dr. H. Khalil and Dr. T. Yen, and also to Dr. D. P. Fisher, for their valued inputs and directions. I am also indebted to Professor J. B. Kreer, the Chairman of the Department of Electrical Engineering and System Science, for his encouragement and financial support through a teaching assistantship throughout my graduate study at Michigan State University. I am most grateful to Enid C. Maitland and Pauline Van Dyker, the secretaries of the Department of Electrical Engineering and System Science, for their extensive help and kindness. I would like to thank my husband, Iraj, and my family for their continuous encouragement. Lastly, I thank Mrs. Perri-Anne Warstler for her fine typing of the final draft of the dissertation.

TABLE OF CONTENTS

Chapter                                                                  Page

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . .   1

2. HISTORICAL BACKGROUND  . . . . . . . . . . . . . . . . . . . . . . .   5

3. DEFINITIONS AND EXAMPLES OF ILL-POSED PROBLEMS . . . . . . . . . . .  14
   3.1. Stability of the Solution in the Sense of Hadamard . . . . . .  14
   3.2. Stability of the Solution in the Sense of Tikhonov . . . . . .  15
   3.3. Ill-Posedness in the Sense of Hadamard . . . . . . . . . . . .  16
   3.4. Ill-Posedness in the Sense of Tikhonov . . . . . . . . . . . .  18
   3.5. Definition of Ill-Posedness of Variational Problems  . . . . .  20
        3.5.A. Ill-Posedness of the Optimal Control Problems in the
               Sense of Hadamard . . . . . . . . . . . . . . . . . . .  23
        3.5.B. Ill-Posedness of the Optimal Control Problems in the
               Sense of Tikhonov . . . . . . . . . . . . . . . . . . .  24
   3.6. General Examples of Ill-Posed Problems . . . . . . . . . . . .  25
   3.7. Examples of Ill-Posed Problems in Optimal Control Systems  . .  39
   3.8. A Characterization of Well-Posed Optimal Regulator Problems
        by Zolezzi . . . . . . . . . . . . . . . . . . . . . . . . . .  57

4. METHODS OF SOLVING ILL-POSED PROBLEMS  . . . . . . . . . . . . . . .  65
   4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  65
   4.2. The Selection Method of Solving Ill-Posed Problems . . . . . .  66
   4.3. The Method of Quasisolutions . . . . . . . . . . . . . . . . .  68
   4.4. Replacement of the Equation with an Equation "Close to It"  . .  72
   4.5. The Method of Quasiinversion . . . . . . . . . . . . . . . . .  74
   4.6. The Regularization Method  . . . . . . . . . . . . . . . . . .  74
   4.7. Methods for the Regularization of Optimal Control Problems . .  92

5. RELATIONSHIP BETWEEN ILL-POSEDNESS AND UNCONTROLLABILITY . . . . . . 105
   5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 105
   5.2. Uncontrollable Linear and Non-Linear Optimal Control Systems  . 110
        5.2.A. Linear Optimal Control Systems  . . . . . . . . . . . . 110
        5.2.B. Non-Linear Optimal Control Systems  . . . . . . . . . . 118
   5.3. Ill-Posedness of Weakly and Strongly Controllable Systems  . . 124
        5.3.A. Linear, Time-Invariant Perturbed System . . . . . . . . 124
        5.3.B. Linear, Time-Invariant Singularly Perturbed System  . . 128

6. ILL-POSEDNESS OF TIME-OPTIMAL REGULATOR PROBLEM  . . . . . . . . . . 132
   6.1. Definition of Linear Time-Optimal Regulator Problem  . . . . . 132
   6.2. Ill-Posedness of Singular Linear Time-Invariant Time-Optimal
        Regulator Problem  . . . . . . . . . . . . . . . . . . . . . . 136
   6.3. Ill-Posedness of Non-Singular Linear Time-Invariant
        Time-Optimal Regulator Problem . . . . . . . . . . . . . . . . 147
   6.4. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

7. ILL-POSEDNESS OF THE FUEL-OPTIMAL REGULATOR PROBLEM  . . . . . . . . 205
   7.1. Ill-Posedness of the Fuel-Optimal Regulator Problem for
        Singular and Non-Singular Linear Time-Invariant System . . . . 205
   7.2. Ill-Posedness of More General Form of Fuel-Optimal Regulator
        Problem  . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
   7.3. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8. ILL-POSEDNESS OF SOME GENERAL FORM OF OPTIMAL CONTROL PROBLEMS AND
   SINGULAR LINEAR-QUADRATIC PROBLEM . . . . . . . . . . . . . . . . . 249
   8.1. More General Form of Optimal Control Problem . . . . . . . . . 249
   8.2. Singular Linear-Quadratic Problem Consideration  . . . . . . . 264

9. CONCLUSION AND SUGGESTIONS FOR FUTURE WORK . . . . . . . . . . . . . 274
   9.1. Weak Controllability of the System . . . . . . . . . . . . . . 275
   9.2. Ill-Posed Infinite-Order Singular Linear-Quadratic Problem
        with a Fixed Final State . . . . . . . . . . . . . . . . . . . 280
   9.3. Convexity Studies  . . . . . . . . . . . . . . . . . . . . . . 283
   9.4. Simulations and Other Techniques . . . . . . . . . . . . . . . 283

APPENDIX  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286

BIBLIOGRAPHY  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290

CHAPTER 1

INTRODUCTION

The rapid rise of computer use (and by implication, numerical analysis) has extended the application of mathematics in all branches of science and technology. One important aspect of this influence is that of simulation, i.e., the translation of physical problems into mathematical models which are then exercised by numerical procedures.

A necessary first step in this process is to determine an appropriate mathematical model for the practical problem; the second is to construct the computational algorithms to apply to the mathematical model. One important property of the derived mathematical problems is the "stability" of their solutions under small changes in the data, an inevitable situation in practical cases. Problems that fail to satisfy this "stability condition" are called "ill-posed". This recognition of well-posed/ill-posed problems represents a significant step toward eventual "solutions" of these problems.

Many problems encountered in practice do not yield the results expected on the basis of their classical conception and formulation. For example, many of these problems have mathematical models whose solution requires solving equations of the Fredholm integral type. Solution of this type of equation requires exact knowledge of the input data or function. However, in practical cases, the data are usually obtained experimentally and contain errors attributable to noise, disturbances, or measurement error; these errors imply that the input data or function may have corners, i.e., may not be differentiable. Given such data, the integral equation does not have a solution in the classical sense. Given the difficulties in dealing with problems where the "output" data is known only imprecisely or approximately and a solution or "input" data is to be determined, one is naturally led to the concept of "ill-posed" problems. Unfortunately, most analytical methods are best suited for solving "well-posed" problems, and it remains unclear in what sense "ill-posed" problems have solutions that are meaningful in applications.

For many years mathematicians felt that "ill-posed" problems could not describe real phenomena. However, it has been shown that "ill-posed" problems include many classical mathematical problems, especially some which have important applications in the real world. An example concerns the investigation of an "incorrect" Cauchy problem for the Laplace equation; this has applications in the field of geophysics.
One can also relate Similar examples such as: differentiation of functions which are known only approximately, solution of Fredholm integral equation of the first kind, summation of Fourier series with ap- proximate coefficients, analytical continuation of functions, inverse Laplace transforms, solution of a Singular system of linear algebraic equations, etc. The most basic functional difficulty in the solution of incorrect problems arises when using approximate methods. In this situation, a small error arising in the method or its implementation such as accumulation error, truncation error, or round—off error when the computer is being used, may lead to a significant deviation from the correct solu- tion. The outline of the dissertation is as follows. In chapter 2, a historical background is presented, which contains a survey of results in the theory of ill-posed problems. Included are results which may be applicable to problems in optimal control theory. In chaptersB and 4 we define ill-posedness and present the relevant concepts of stability in senses of Hadamard and Tikhonov. Some special techniques and their applications are also presented. Finally, related mathematical notions are presented in order to complete the chapter. Chapters 5, 6 and 7 begin the discussion of the classes of optimal control problems and their 111- or well-posedness. Special attention is given to uncontrollable systems (linear and non-linear), singular systems, linear time optimal control problem and linear fuel optimal control problems. In chapter 8, a Special case of quadratic optimiaa- tion is considered and shown to exhibit qualities that render it ill-posed. Also in chapter 8, a Special case of non-linear optimal control is discussed in terms of ill- posedness and an extention of the regularization technique is applied to solve the problem. Finally, in chapter 9, a summary of contributions of this dissertation and possible future areas for research are discussed. CHAPTER 2 HISTORICAL BACKGROUND Recently ill-posed problems have been recognized in diverse areas such as economics, mechanics, and control systems theory. To this end the investigation of incorrect problems and the search for methods which can be used to solve them have acquired special Significance. During recent years the number of articles in this area has been increasing, and it is difficult to discuss in detail all the papers deserving attention. Therefore, we consider only those investigations which are the most characteristic or are the most interesting from the point of view of the automatic control specialist. For many years an accepted point of view in the mathema- tical literature was to concentrate on two key results for each problem: existance and uniqueness of the solution. How— ever, the requirement of stability of the solution to small changes in the initial data has to be satisfied, and this characterization and stability condition of the solution under small perturbations was virtually ignored. From a numerical analysis point of view, this lack of stability was particularly disturbing since computations of the solution were very sensitive to initial data ([TS],[K6]). The notion of a well-posed ("correctly set") mathe- matical problem made its debut with the Hadamard's discus- sions on the mathematical difficulties due to their complex topological aspects [T5]. However, he did not present any methods which might "solve" ill-posed problems. 
It also remained unclear, in what sense ill-posed problems have solutions that would be meaningful in applications. Following Hadamard was A. N. Tikhonov, an early re- searcher in the field of ill-posed problems who succeeded in giving a precise mathematical definition of an "approxi- mate solution" for general classes of such problems; he also constructed "optimal solutions" for such problems ([T3],[T4],[T5],[T8],[K6]). Tikhonov, who gives a general theory of ill-posed problems, begins by presenting a precise definition of ill-posed problems. He also ob— served that, optimal control problems lead by necessity to variational problems and he formulated a definition of in- correctness of such a problem [T8]. Also, he presented a physical interpretation for problems where small distur- bances in the initial data cause large changes in the solu- tion [T5]. In the area of optimal control, which is our basic con— cern, Tikhonov presented a precise definition of ill- posedness and provided techniques which classify ill-posed I I Optimal control problems [T5] (we will discuss this in more detail in Chapter 3 and 4). In retrospect, the major contribution of Tikhonov is his concept of "regularization". This method gives ef- fective results for a number of diverse problems arising in Optimal control, linear and dynamic programming, mathe- matical economics, inverse problems of potential theory and the theory of thermal conductivity, and etc. "Regularization" means, loosely speaking, a replace- ment of the original, ill-posed, problem by a family Of well-posed problems ([TS],[K6],[R2]). One important advantage Of a "regularizable" ill-posed problem:n5that error bounds can be readily constructed for approximate solutions, which has been done by Tikhonov for various types Of problems [T6]. Analysis of ill-posed problems in system theory and optimal control problems started as early as in the late 60's, and some useful techniques were suggested. Most of the publications, however, were concerned with utilizing the existing regularization techniques, primarily the Tikhonov regularization. However, he applied the "regular- ization" technique to a few optimal control problems, and he was more interested in ill-posed mathematical problems rather than ill-posed Optimal control problems. Tikhonov used the optimal control problem as an applicational ex- ample Of ill-posed problems without defining the precise relationship between ill-posedness and some characteristics of control system such as controllibility, Observability, stability,singularity and etc. In later chapters, we prove that there are direct relationships between those characteristics and the definition Of ill—posedness, and we can actually modify the definition of ill-posedness in the sense of Tikhonov to a new definition of ill-posed Optimal control problem in terms of their characteristics. We will also extend Tikhonov's regularization to find the solution Of ill-posed optimal control problems since no comprehenSive study of the solution Of ill-posed problems in system theory has been conducted. Another contribution to the subject Of ill-posed prob- lems comes from Phillips [P1]. The character of Phillip's work is more intuitive than it is mathematical. However, it contains, though in preliminary form, several ideas for methods of solving incorrect problems, the strict develop— ment of which is given in works of Soviet mathematicians ([K6],[R2]). 
Bakushinsky provided a comprehensive Spectral analysis of ill-posedness and regularization in Hilbert spaces ([Bl],[R2]). V. F. Turchin presented the use of mathematical statis- tical methods in the solution of ill-posed problems. He also introduced the idea of "statistical regularization" of the solution Of ill-posed stochastic processes ([R2],[K6]). The Soviet mathematician V. Y. Arsemin has contributed to the solution of many specific incorrectly formulated problems. He specially studied the discretion and accuracy of the "regularization" method of solving integral equations of the first kind [K6]. He also applied the "regulariza- tion" technique to solve the integral equations of the first kind, convolution type [Al]. To M. M. Lavrent'ev is attributed the solution of many specific incorrectly formulated problems. A number Of the works of Lavrent'ev are devoted to the question of choosing the additional conditions ([TS],[K6]). He formulated the concept of well-posed problems in the sense of Tikhonov. To him belongs the idea Of replacing the original equation of ill-posed problems with an equa- tion that in some sense is close to it and for which the problem Of finding the solution is stable under small changes in the initial data ([T5],[L2],[L5]). Lavrent'ev's work has been continued by V. K. Ivanov. Ivanov clarified certain topological aspects of ill-posed problems, and he developed the technique of "quasi—solution", which he extended subsequently to the case of closed but unbounded operators ([Il],[IZ],[T5]). Ivanov established the connection between Tikhonov's method and the theory of "quasi-solutions". It has been shown that the regu- larized class of approximate solution of Tikhonov's 10 coincides with the family of "quasi-solutions" on some broadened compact space [K6]. Ivanov's concept of a "quasi-solution" of ill-posed problem is an attempt to avoid the difficulties associated with the absence of a solution of ill—posed problem in the case of an inexact input data, and is a generalization Of the concept of a solution to that problem. A number of surveys have been published in the area of topologically ill-posed problems, including variational ill-posed problems. One of the most complete is the sur- vey paper by L. S. Kyrillova and A. A. Pointkouski [K6]. Their primary attention is given to the results obtained by Tikhonov and his students. They presented some examples of ill-posed optimal control problems and applied Tik- honov's "regularization" technique to find the "stable" solutions of those problems. Still other investigations are those of non-Russian mathematicians and researchers. Among these, the works of Zolezzi ([Z1] [24]), Audley and Lee [A3], Seidman [Sl], Bellman [B3], and Rutman [R2] are the most important since their primary thrust is toward application of the "regularization" technique to ill-posed problems in control systems theory. Zolezzi's work included a characterization Of a well- posed optimal control system. He considered a constrained Optimal regulator problem and proved the continuous 11 dependence Of the Optimal control on the desired trajectory (Hadamard's well-posedness) and convergence toward the optimal control of any minimizing sequence (Tikhonov's well-posedness) when the dynamics are "affine" (linear plus constant [24]). He defined the dense well-posedness in the non—affine case. He also Showed that the necessary and sufficient conditions for well-posedness of all desired trajectories are the affine structure of the plant. 
Zolezzi also presented the relationships between Tik- honov and Hadamard well-posedness and proved that the Tik- honov well-posedness implies the Hadamard well-posedness for some problems Of best approximation [24]. Proceeding, he defined the conditions of well-posedness for the quadratic optimal control problems described by ordinary differential systems. It has been Show that, a well-posed problem can be obtained, in some cases, for a given ill-posed optimiza- tion problem by suitable modifications of cost functional ([21] and [24]). Zolezzi's conditions for well-posed regulator problems will be discussed in Chapters 3 and 4. T. I. Seidman discussed the non-convergence results for the application of least-squares estimation to ill—posed problems. He considered the problem involving the solution of the equation f(n) = b in which the unknown is an element of some infinite-dimensional space X. If b is to be given, in practice only approximately, as an element Of a certain space D, then it may happen that f has no continuous inverse. 12 He noted this and proved that the least-square estimation l(n) problem is always ill-posed if the inverse operator f- is unbounded [51]. An article by R. Bellman, R. Kalaba, and J. Lockett [B3] has indicated the effect Of ill-posedness of linear sys- tem on the dynamic programming technique. They showed that even though no one technique would resolve the fundamental problem of Obtaining sensible results from ill-conditioned systems;the techniques Of dynamic programming, successive approximation, extrapolation, and smoothing can yield worth— while results in some cases. It has been shown that a dynamic programming approach has a built-in "stability", and that it may be desirable for this reason to use it in some cases of ill-conditioned linear systems. Audley and Lee considered ill-posed problems arising in system identification, specially the impulse response iden- tification problem [A3]. Since impulse response identifica- tion is almost always an ill-posed mathematical problem this ill-posedness is the basis for the well-known numerical difficulties of identification by means of the impulse response. Audley and Lee Show that the theory of "regular- izable" ill-posed problems furnishes a unifying point of view for several specific methods Of impulse response identification. Still other investigations include those Of Rutman [R2]. He considered a certain class Of ill-posed problems, inverse 13 problems and the "regularization" technique. He outlined directions in which the solution for ill-posed problems can be developed. Specifically, he discussed spectral tech- niques on line regularization, and statistical regulariza- tion. This review is not meant to be comprehensive in terms of the theory of ill-posed problems. However, it accurately reflects those contributions which have impact on control theory and, hence, this thesis. CHAPTER 3 DEFINITIONS AND EXAMPLES OF ILL-POSED PROBLEMS In this chapter we present various definitions Of ill- posedness in the sense Of Hadamard and Of Tikhonov. We also present some important examples Of ill-posed problems in modern control theory. Finally we discuss a class of optimal control problems which have been shown to be well- posed by Zolezzi. 3.1. Stability Of the Solution in the Sense Of Hadamard Definition: Let F, U be metric spaces with metrics OF, oU, respectively. 
Let R be a mapping from the space U into the space F. The solution z = R(u) is said to be "stable" in the sense of Hadamard on the spaces (F,U) if for every ε > 0 there exists a δ(ε) > 0 such that

ρ_U(u₁,u₂) ≤ δ(ε)  implies  ρ_F(z₁,z₂) ≤ ε,

where ρ_U and ρ_F denote the metrics on U and F, z₁ = R(u₁), and z₂ = R(u₂). Here ρ_U(u₁,u₂) is the measure of changes in the input data, defined by

ρ_U(u₁,u₂) = {∫ₐᵇ [u₁(x) - u₂(x)]² dx}^(1/2),

while ρ_F(z₁,z₂) is the measure of changes in the solution, defined by [T5]

ρ_F(z₁,z₂) = max over s∈[a,b] of |z₁(s) - z₂(s)|.

According to this definition, the solution is unstable in the sense of Hadamard if, no matter how small the error in the input data, the corresponding solution can differ strongly from the solution of the initial problem [T5].

3.2. Stability of the Solution in the Sense of Tikhonov

Suppose that a continuous functional f(z) is defined on a metric space F. The problem of minimizing f(z) on F consists of finding an element z₀ ∈ F that provides f(z) with its smallest value f₀:

f₀ = inf over z∈F of f(z) = f(z₀).

Let us suppose that this problem has a unique solution z₀. Let {zₙ} denote a minimizing sequence, that is, one such that

lim (n→∞) f(zₙ) = f₀.

Definition: We shall say that the minimization of the functional f(z) on the set F is "stable" in the sense of Tikhonov if every minimizing sequence {zₙ} converges (in the metric of the space F) to the element z₀ of F ([T5],[K6]).

3.3. Ill-Posedness in the Sense of Hadamard

The concept of a well-posed problem in mathematical physics was introduced by Hadamard in an attempt to clarify the types of boundary conditions that are most "natural" for various types of differential equations [T5].

Definition: The problem of determining the solution z = R(u) in the space F from the "initial data" u in the space U is said to be well-posed in the sense of Hadamard on the pair of metric spaces (F,U) if the following three conditions are satisfied:

1. For every element u ∈ U there exists a solution z in the space F.
2. The solution is unique.
3. The problem is "stable" in the sense of Hadamard on the spaces (F,U), i.e., the solution depends continuously on the input data.

If one or more of these conditions fails, the problem is said to be ill-posed in the sense of Hadamard.

The first two conditions above are referred to as algebraic conditions of well-posedness; correspondingly, a problem which fails to meet one or both of these algebraic conditions shall be referred to as algebraically ill-posed (in point of fact, most of this thesis deals with problems which are algebraically well-posed).

It is not an overstatement to say that modern systems theory implicitly recognizes the basic importance of algebraic well-posedness. Indeed, the whole construction of structural analysis, modern optimal control and state estimation depends upon the notions of controllability and observability. One should bear in mind that controllability means well-posedness of the state control problem in terms of the first algebraic condition, while observability is essentially well-posedness of the problem of state observation (state estimation) in terms of the second algebraic condition [R2].

Hadamard's third condition will be referred to as that of topological well-posedness. Correspondingly, the problems which fail to meet the third Hadamard condition are called topologically ill-posed problems.
3.4. Ill-Posedness in the Sense of Tikhonov

Definition: The problem of determining the solution z of the equation

Az = u     (3.4.1)

in the space F from the initial data u in the space U is said to be well-posed in the sense of Tikhonov on the pair of metric spaces (F,U) if we know that, corresponding to the exact data u = u_T, there exists a unique solution z_T of equation (3.4.1) (so that Az_T = u_T) belonging to a given compact set M ([T5],[I4]). In this case the operator A⁻¹ is continuous on the set N = AM and, if we know not the element u_T but an element u_δ such that

ρ_U(u_T, u_δ) ≤ δ  and  u_δ ∈ N,

then for an approximate solution of equation (3.4.1) with right-hand member u = u_δ we can take the element z_δ = A⁻¹u_δ. Since u_δ ∈ N, this z_δ approaches z_T as δ → 0.

A set F₁ (contained in F) on which the problem of finding a solution of equation (3.4.1) is well-posed is called a well-posedness class [I4]. Thus, as we discussed before, if the operator A is continuous and one-to-one, the compact set M to which z_T is restricted is a well-posedness class for equation (3.4.1). It is clear that well-posedness in the sense of Tikhonov also includes the algebraic well-posedness conditions, namely existence and uniqueness, and the topological well-posedness condition, namely continuous dependence of the solution on the initial data (Hadamard's well-posedness). If one or more of these conditions fails, the problem is said to be ill-posed in the sense of Tikhonov.

Remark 1. It is important to notice that the definition of an ill-posed problem is given with respect to a given pair of metric spaces (F,U), since the same problem may be well-posed in other metrics (see Examples 3.6.1 and 3.6.2).

Remark 2. The fact that the spaces F and U are metric spaces is used here to express the closeness of elements as a means of describing neighborhoods in the spaces F and U. The basic results remain valid for topological spaces F and U [I2].

Remark 3. If the class U of initial data is chosen and specified, conditions 1 and 2 of well-posedness characterize its mathematical determinacy. Condition 3 of well-posedness is related to the physical determinacy of the problem and to the possibility of applying numerical methods to solve it on the basis of approximate initial data.

3.5. Definition of Ill-Posedness of Variational Problems (Including Optimal Control Problems)

A number of problems that are important in practice lead to mathematical problems of minimizing functionals. We need to distinguish between two kinds of such problems.

First Kind - Problems in which we need to determine the extremum value of a functional, even though it is not important to attain the optimal element z. Various problems of planning optimal systems or constructions are of this type [T5]. With them, as we mentioned, it is not important which elements z provide the sought minimum. Therefore, as approximate solutions we can take the values of the functional for any minimizing sequence {zₙ}, that is, a sequence such that

f(zₙ) → inf f(z)  as  n → ∞.

We will see later that this type of variational problem is not topologically ill-posed, since the convergence of the minimizing sequence {zₙ} itself is not important.

Second Kind - Problems in which we need to find the elements z that minimize the functional f(z). We shall refer to these problems as problems of minimization with respect to the argument. It is possible in this type of variational problem that the minimizing sequence diverges away from its optimal value.
In such cases it is clear that we cannot take the elements of a minimizing sequence as approximate solutions. This type of variational problem includes certain problems of optimal control ([T5],[K6],[T1]).

Methods of direct minimization of the functional f(z) are extensively used for finding the element z₀. With these methods, one constructs a minimizing sequence {zₙ} with the aid of some algorithm. Here, the elements zₙ for which f(zₙ) is sufficiently close to f₀ are treated as approximate values of the element z₀ that is being sought. The methods of solving such a problem are rather general and not connected with any particular functional f(z) ([T5],[T1],[K6]). However, such an approach to finding an approximate solution is justifiable only when the minimizing sequence {zₙ} which is constructed converges to the element z₀.

Now let us consider the general form of a variational problem of the second kind, which includes the optimal control problems. The general problem is to find the solutions of a differential equation of the form

ẋ = F(t,x,u)     (3.5.1)

that satisfy the initial condition

x(t₀) = x₀     (3.5.2)

where x(t) = {x₁(t),x₂(t),...,xₙ(t)} is an n-dimensional vector-valued function defined on an interval t₀ ≤ t ≤ T, x₀ is a given vector, and u(t) = {u₁(t),u₂(t),...,uₘ(t)} is an m-dimensional vector-valued function (the control) with range in an m-dimensional metric space U.

Let f(x(t)) denote a given nonnegative functional (the target functional) defined on the set of solutions of the system (3.5.1). Obviously, the solution of the system (3.5.1) depends on the chosen control u(t), i.e.,

x(t) = x_u(t).

Therefore, the value of the functional f(x(t)) for each solution of the system (3.5.1) is a functional of the controlling function u(t). It is defined on the set U, that is, f(x_u(t)) = Φ(u). The problem of optimal control can be formulated, for example, as the problem of finding, in some class U₁ of functions in the space U, a controlling function u₀(t) that minimizes (or maximizes) the functional Φ(u) = f(x_u(t)).

3.5.A. Ill-Posedness of the Optimal Control Problems in the Sense of Hadamard

Definition: The optimal control problem described above will be said to be well-posed in the sense of Hadamard if:

1. The optimal control u₀ exists.
2. The optimal control is unique.
3. The optimal control depends continuously on the desired trajectory x* and on the target functional Φ*.

If one or more of these conditions is not met, the variational problem is said to be ill-posed in the sense of Hadamard [T5].

3.5.B. Ill-Posedness of Optimal Control Problems in the Sense of Tikhonov

Definition: The optimal control problem described above will be said to be well-posed in the sense of Tikhonov if:

1. The optimal control u₀ exists.
2. The optimal control u₀ is unique.
3. Any minimizing sequence {uₙ} in U₁ converges strongly to u₀.

Therefore, Tikhonov well-posedness implies unique solvability together with convergence of the numerical methods of minimization for the optimal control problem. Hadamard well-posedness, on the other hand, implies continuous dependence of the optimal control on x* and Φ*, and it does not require that a minimizing sequence converge to the optimal control. The practical meaning of such well-posedness is quite clear: suitably small changes of the desired trajectory result in arbitrarily small deviations of the corresponding optimal control and state.
In particular, an a priori knowledge of Hadamard well-posedness is useful in connection with numerical methods of solution of optimal control problems, which require approximation of the data ([T5],[K6]). From a more general point of view, the mathematical structure of the optimization problem can be revealed by finding its variational stability properties under data perturbations [I4].

3.6. General Examples of Ill-Posed Problems

Example (3.6.1) ([T5],[A1],[I1],[L4],[K6]). Consider the Fredholm integral equation of the first kind with kernel k(x,g):

∫ₐᵇ k(x,g) f(g) dg = y(x),   c ≤ x ≤ d,

where f(g) is the unknown function in a space F and y(x) is a known function in a space Y. Let us assume that the kernel k(x,g) is continuous with respect to the variable x and that it has a continuous partial derivative ∂k(x,g)/∂g. Now we define the operator A such that

Af = ∫ₐᵇ k(x,g) f(g) dg.     (3.6.1)

We shall seek a solution f(g) in the class C of functions that are continuous on the interval [a,b]. We shall measure changes in the right-hand member of the equation with the L₂-metric defined, as before, by

ρ_Y(y₁,y₂) = {∫ from c to d of [y₁(x) - y₂(x)]² dx}^(1/2),

while we measure changes in the solution f(g) in the C-metric defined by

ρ_F(f₁,f₂) = max over g∈[a,b] of |f₁(g) - f₂(g)|.

Suppose f₁(g) is a solution corresponding to the right-hand member y₁(x), and consider the perturbed function

f₂(g) = f₁(g) + M sin(ωg).

The corresponding right-hand member is

y₂(x) = y₁(x) + M ∫ₐᵇ k(x,g) sin(ωg) dg,

so that the change in the data is

ρ_Y(y₁,y₂) = |M| {∫ from c to d of [∫ₐᵇ k(x,g) sin(ωg) dg]² dx}^(1/2),

which can be made arbitrarily small, since integration by parts gives

ρ_Y(y₁,y₂) = |M| {∫ from c to d of [-(1/ω) k(x,g) cos(ωg)|ₐᵇ + (1/ω) ∫ₐᵇ (∂k(x,g)/∂g) cos(ωg) dg]² dx}^(1/2)

ρ_Y(y₁,y₂) = (|M|/ω) {∫ from c to d of [∫ₐᵇ (∂k(x,g)/∂g) cos(ωg) dg + k(x,a) cos(ωa) - k(x,b) cos(ωb)]² dx}^(1/2)

ρ_Y(y₁,y₂) = (|M|/ω) · constant.

For any number M, ρ_Y(y₁,y₂) can be made as small as desired by choosing ω very large, while the change in the corresponding solutions f₁(g) and f₂(g) is

ρ_F(f₁,f₂) = max over g∈[a,b] of |f₂(g) - f₁(g)| = max over g∈[a,b] of |M sin(ωg)| = |M|.

Now compare ρ_Y(y₁,y₂) and ρ_F(f₁,f₂):

ρ_Y(y₁,y₂) = (|M|/ω) · constant,
ρ_F(f₁,f₂) = |M|.

By choosing ω sufficiently large, for any M, we can make the change in the data, ρ_Y(y₁,y₂), arbitrarily small without preventing the change in the corresponding solution, ρ_F(f₁,f₂), from being arbitrarily large; this is ill-posedness in the sense of Hadamard.

In this particular example, even if we change the metric from the C-metric to the L₂-metric to measure the difference between the solutions f₁(g) and f₂(g), the solution of equation (3.6.1) is still unstable under small changes in y(x). Specifically,

ρ_F(f₁,f₂) = {∫ₐᵇ |f₁(g) - f₂(g)|² dg}^(1/2) = {∫ₐᵇ M² sin²(ωg) dg}^(1/2)

ρ_F(f₁,f₂) = |M| {(b-a)/2 - (1/(4ω))[sin(2ωb) - sin(2ωa)]}^(1/2),

while

ρ_Y(y₁,y₂) = (|M|/ω) · constant.

One can easily see that the numbers ω and M can be chosen in such a way that, for arbitrarily small discrepancies between y₁(x) and y₂(x), the discrepancy between the corresponding solutions can be arbitrarily large. For example, if we choose M large and ω >> M, then 1/ω ≈ 0 and

ρ_F(f₁,f₂) → large,  ρ_Y(y₁,y₂) → small,

which is again ill-posedness in the sense of Hadamard.
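The instability just computed can be reproduced numerically. The following sketch is illustrative only and is not part of the original analysis: the particular kernel k(x,g) = exp(-(x-g)²), the interval [a,b] = [c,d] = [0,1], and the grid size are assumptions made purely for the demonstration. It discretizes the operator (3.6.1) and verifies that the perturbation f₂ = f₁ + M sin(ωg) changes the data by an amount of order M/ω while changing the solution by |M|.

    import numpy as np

    # Illustrative sketch only.  Assumed for the demonstration: kernel
    # k(x,g) = exp(-(x-g)^2) and [a,b] = [c,d] = [0,1].
    a, b = 0.0, 1.0
    n = 400
    g = np.linspace(a, b, n)
    x = np.linspace(a, b, n)
    dg = g[1] - g[0]
    K = np.exp(-(x[:, None] - g[None, :]) ** 2)      # kernel values k(x_i, g_j)

    def apply_A(f):
        """Discretization of (3.6.1): (Af)(x_i) ~ sum_j k(x_i, g_j) f(g_j) dg."""
        return (K @ f) * dg

    f1 = np.ones(n)                                  # reference solution
    M, omega = 1.0, 200.0
    f2 = f1 + M * np.sin(omega * g)                  # perturbed solution

    y1, y2 = apply_A(f1), apply_A(f2)
    rho_Y = np.sqrt(np.sum((y1 - y2) ** 2) * dg)     # L2 change in the data
    rho_F = np.max(np.abs(f1 - f2))                  # C-metric change in the solution

    print("rho_Y(y1, y2) =", rho_Y)    # small, of order M/omega
    print("rho_F(f1, f2) =", rho_F)    # approximately |M| = 1

Increasing ω makes ρ_Y as small as desired while ρ_F stays at |M|, which is precisely the behavior established in the estimate above.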
Example (3.6.2) ([T5],[I4],[K6]). The problem of differentiating a function y(t) that is known only approximately.

Suppose f₁(t) is the derivative of the function y₁(t):

f₁(t) = dy₁(t)/dt.

The function y₂(t) = y₁(t) + M sin(ωt) differs from y₁(t) in the metric of C by an amount

ρ_C(y₁,y₂) = max over t of |M sin(ωt)| = |M|

for arbitrary values of ω. However, the derivative f₂(t),

f₂(t) = dy₂(t)/dt = dy₁(t)/dt + Mω cos(ωt) = f₁(t) + Mω cos(ωt),

differs from f₁(t) in the C-metric by an amount

ρ_C(f₁,f₂) = max over t of |Mω cos(ωt)| = |Mω|,

which can be arbitrarily large for sufficiently large values of |ω|. Once more, comparing ρ_C(f₁,f₂) and ρ_C(y₁,y₂), we have

ρ_C(y₁,y₂) = |M|,  ρ_C(f₁,f₂) = |Mω|;

for a given M, by choosing ω very large,

ρ_C(f₁,f₂) >> ρ_C(y₁,y₂),

which is ill-posedness in the sense of Hadamard.

As we pointed out in Section (3.4), if we take other metrics on the sets F and Y (or on one of them), then the problem of differentiating y(t), which is known only approximately, may be well-posed on the pair of metric spaces (F,Y). Thus, suppose Y is the set of continuously differentiable functions on the interval [a,b] and the distance between two functions y₁(t) and y₂(t) in Y is measured in the metric defined by

ρ_Y(y₁,y₂) = sup over t∈[a,b] of {|y₁(t) - y₂(t)| + |y₁'(t) - y₂'(t)|},

while the distance between two functions f₁(t) and f₂(t) in F is measured in the C-metric. Then

ρ_Y(y₁,y₂) = sup over t of {|M sin(ωt)| + |Mω cos(ωt)|} = |M| + |Mω|,

while ρ_C(f₁,f₂) = |Mω| ≤ ρ_Y(y₁,y₂). Therefore the problem of differentiation is obviously well-posed on that pair of metric spaces (F,Y), since the change in the solution depends continuously on the change in the initial data (well-posedness in the sense of Hadamard).

We note that the general problem of finding the nth derivative of the function y(t) reduces to solving the following integral equation of the first kind:

∫₀ᵗ [(t-τ)ⁿ⁻¹/(n-1)!] f(τ) dτ = y(t),

since differentiating both sides of this equation n times with respect to t gives

f(t) = dⁿy(t)/dtⁿ.

In Example (3.6.1) we proved that an integral equation of the first kind does not have the property of stability. Therefore the problem of finding the nth derivative of the function y(t) is ill-posed, which in fact leads to great difficulties in the approximate evaluation of derivatives.
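Example (3.6.2) can likewise be checked numerically. The following sketch is an illustration only; the test function y₁(t) = sin t, the sampling step, and the values of M and ω are assumptions chosen for the demonstration, with ω kept well below the sampling rate so that the finite differences actually resolve the perturbation.

    import numpy as np

    # Illustrative sketch only.  Assumed: y1(t) = sin(t) on [0, 2*pi],
    # perturbation M*sin(omega*t) with small M and large omega.
    t = np.linspace(0.0, 2.0 * np.pi, 20001)
    h = t[1] - t[0]
    M, omega = 1e-3, 100.0

    y1 = np.sin(t)
    y2 = y1 + M * np.sin(omega * t)       # differs from y1 by at most |M| in the C-metric

    f1 = np.gradient(y1, h)               # finite-difference derivative of the exact data
    f2 = np.gradient(y2, h)               # finite-difference derivative of the perturbed data

    print("C-distance of the data:       ", np.max(np.abs(y2 - y1)))   # about 1e-3
    print("C-distance of the derivatives:", np.max(np.abs(f2 - f1)))   # about M*omega = 0.1

No matter how small M is taken, a sufficiently large ω (relative to the sampling rate) makes the error in the computed derivative as large as one pleases, which is the practical content of the instability shown above.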
Example (3.6.3) [T5]. Numerical summation of a Fourier series when the coefficients are known only approximately in the metric of ℓ₂.

Suppose that

g₁(t) = Σ (n=0 to ∞) aₙ cos(nt).

If instead of the aₙ we take the coefficients bₙ = aₙ + ε/n for n ≥ 1 (with b₀ = a₀), which changes each aₙ by the small amount ε/n, then we obtain the series

g₂(t) = Σ (n=0 to ∞) bₙ cos(nt).

The coefficients of these series differ (in the ℓ₂-metric) by an amount

ε₁ = ε {Σ (n=1 to ∞) 1/n²}^(1/2) = ε π/√6,

which we can make arbitrarily small by choosing ε sufficiently small. At the same time, the difference between g₁(t) and g₂(t), which is

g₂(t) - g₁(t) = ε Σ (n=1 to ∞) (1/n) cos(nt),

may be arbitrarily large (for t = 0 the series diverges). For example, if we measure the difference between g₂(t) and g₁(t) in the C-metric, then

ρ_C(g₁,g₂) = max over t of |g₂(t) - g₁(t)| = max over t of |ε Σ (n=1 to ∞) (1/n) cos(nt)|,

which involves a divergent series. Again, while the coefficients of the Fourier series change only slightly, the function g(t) can change by an arbitrarily large amount (ill-posedness in the sense of Tikhonov). Thus, if we take the deviation of the sum of the series in the metric of C, summation of the Fourier series is not stable.

As we mentioned in Section (3.4), if the difference between the functions g(t) in G is estimated in the metric of L₂, the problem of summation of a Fourier series with coefficients given approximately (in the metric of ℓ₂) will be well-posed on such a pair of metric spaces. Here, from Parseval's theorem and the orthogonality of the cosines on [0,π], we have

ρ_{L₂}(g₁,g₂) = {∫₀^π [g₁(t) - g₂(t)]² dt}^(1/2) = {∫₀^π [Σ (n=1 to ∞) (ε/n) cos(nt)]² dt}^(1/2) = {Σ (n=1 to ∞) (ε²/n²) ∫₀^π cos²(nt) dt}^(1/2) = {(π/2) Σ (n=1 to ∞) ε²/n²}^(1/2).

Therefore

ρ_{L₂}(g₁,g₂) = √(π/2) ε₁.

Therefore, in this metric, the Fourier summation changes in proportion to the change in the Fourier coefficients, which is well-posedness in the sense of Hadamard.

Example (3.6.4). Systems of linear algebraic equations ([G2],[H3],[I6],[L3],[T5]).

Consider the following system of linear algebraic equations:

Ax = y     (3.6.4)

where A = {a_ij} is a matrix with elements a_ij, x = {x_j} is the unknown vector with coordinates x_j, and y = {y_i} is a known vector with coordinates y_i. In these definitions, i,j = 1,2,...,n.

If we consider systems with fixed norming of the elements of the matrix A, then the determinant det A is close to zero for ill-conditioned systems of this kind. If the calculations are only approximate, it is impossible in some cases to determine whether a given system is singular. Obviously, such a situation may arise when the matrix A has eigenvalues sufficiently close to zero.

In practical problems, we often know only approximately the right-hand member y of the system and the elements of the matrix A, that is, the coefficients of the system (3.6.4). In such cases, we are dealing not with the system (3.6.4) but with some other system

Ãx = ỹ

such that ‖Ã - A‖ ≤ δ and ‖ỹ - y‖ ≤ δ, where the particular norm chosen usually depends on the nature of the problem. Since we have the matrix Ã rather than the matrix A, we cannot definitely decide about the singularity or nonsingularity of the system. In such cases, all we know about the exact system Ax = y, whose solution we need to find, is that ‖Ã - A‖ and ‖ỹ - y‖ are each no greater than δ. However, there are infinitely many systems with such initial data (Ã,ỹ) within the range of error δ. Since we have the approximate system Ãx = ỹ instead of the exact system, we can speak only of finding an approximate solution. However, the approximate system Ãx = ỹ may not be solvable. Among the "possible exact systems" there may be singular systems, and the problem is therefore ill-posed.

Thus, we often have to consider a whole class of systems of equations that are indistinguishable from one another and that may include both singular and unsolvable systems. The methods of constructing approximate solutions of systems of this class must be generally applicable, and these solutions must be stable under small changes in the initial data. The construction of such methods is based on the idea of "selection" expounded in Chapter 4 in the discussion of techniques for solving ill-posed problems.

Example (3.6.5) ([A3],[A4],[N1]). Impulse response identification.

Identification problems are often ill-posed because they correspond to mathematical problems of the form: find h such that

f = Ah     (3.6.5)

where A is a given operator whose domain is H and whose range is F, H is contained in a Banach space B₁, F is contained in a Banach space B₂, and f is a given element of F. For identification problems A⁻¹ is usually an unbounded operator. Such a problem has a unique solution, but this solution does not change continuously with the data.
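The connection between Examples (3.6.4) and (3.6.5) can be made concrete numerically: discretizing a smoothing operator A of the kind that arises in identification problems produces a matrix whose smallest singular values are essentially zero, i.e., a system of linear algebraic equations of exactly the nearly singular type just described. The following sketch is illustrative only; the Gaussian kernel, the grid, and the noise level are assumptions made for the demonstration.

    import numpy as np

    # Illustrative sketch only.  Assumed: A is the discretization of a
    # Gaussian smoothing kernel on a uniform grid of [0,1].
    n = 200
    s = np.linspace(0.0, 1.0, n)
    ds = s[1] - s[0]
    A = np.exp(-((s[:, None] - s[None, :]) / 0.1) ** 2) * ds

    x_true = np.sin(2.0 * np.pi * s)      # "exact" solution
    y = A @ x_true                        # exact right-hand member

    U, sig, Vt = np.linalg.svd(A)
    print("largest singular value :", sig[0])
    print("smallest singular value:", sig[-1])   # essentially zero: A is numerically singular

    # A data perturbation far smaller than any realistic measurement error ...
    y_tilde = y + 1e-10 * np.random.default_rng(0).standard_normal(n)

    # ... is amplified by 1/sigma_min when the system is inverted naively.
    x_naive = Vt.T @ ((U.T @ y_tilde) / sig)
    print("error of the naive solution:", np.max(np.abs(x_naive - x_true)))

Since the perturbed system is indistinguishable from infinitely many neighboring systems, some of them singular, the naive inverse is meaningless; this is the situation that the selection and regularization methods of Chapter 4 are designed to handle.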
This equa- tion is called the integral equation of the first kind of the convolution type which are usually ill-posed. 3.7. Exapples of Ill-Posed Problems in Optimal Control Systems Example (3.7.1) [T5]. The problem of the vertical as- cent Of a sounding rocket in a homogeneous atmosphere to its maximum altitude. For this problem we know the exact solution. The verti- cal motion Of a body of variable mass m(t) in a homogeneous atmosphere is described by the system of equations: dv _ l 2 _ 5E — ETET[au(t) - cv (t)] g = -u(t) 53% v(o) = v m r O O m(o) 40 Here, m(t) is the variable mass Of the body (m0 = m(o) 3 m(t) 2.“! where u is the mass of the rocket body), v(t) is the velocity of the rocket, u(t) is the control function (equal to the consumption of fuel per second during the time of flight, and a, c and g are known constants. The approximate solutions of this problem have a sharp corner at the initial time t=0. Therefore, it is natural to seek a solution in the form 111 = A6(t) + u(t) where A is a constant and 6(t) is Dirac delta function. The function u(t) that we are seeking is a continuous and sufficiently smooth function. It will be simpler to find the function u(t) numerically than to find a function with a 6-form singularity. Such a representation of the solution means that, to attain the maximum altitude, it must burn up a certain amount of fuel instantaneously, and, after a certain * velocity v is attained, to begin a gradual consumption 1 of the fuel. Obviously, from physical considerations, the * optimal control function u (t) must be positive on some * * interval 0 i t 3 T1 and zero for t > T1 * >0 0 i t i Tl * u (t) = * O t > T1 1: 'k We need to determine u (t) and the parameters vl (Optimal * 1 (optimal time). velocity) and T 41 Thus, we need to investigate the two-parameter target function f(u,vl,Tl) and, to do this, we need to find * u*(t), V1* and T1 such that * t * _ . f(u ,v1 ,Tl ) — Min f(u,vl,Tl) The mass m1 Of the fuel that must be consumed instantaneously to Obtain a velocity V1 is Obtained from the Equation ([T5]) "‘1 v1 = v0 + al1n(l - fi—)| 0 Then, A is determined from ml. Let's take: v0 = O and m0 = l The altitude to which the rocket is lifted H = H(v(u)) is equal to: H = If V(t)dt where T is the instant at which the velocity becomes equal to zero: v(T) = 0 42 we can also write: H = Hl + AH where T1 u Vic Hl =f) v(t)dt, AH = 2E’Pn(l + IE7 T1 is the instant Of termination Of burning of fuel and vu is the velocity Of the rocket at that time. As target function, we take: H(v(u)) f(u,V1,Tl) = 1 - H0 where H0 is a value close to the maximum altitude. This problem of optimal control is unstable in the sense Of Hadamard, since the small changes in the target functional f(u,v ,Tl) corresponds to large changes (sudden changes) 1 in the control u, close to time t=0. It can be also shown ([T5]) that there exist at least one minimizing sequence * {un} which does not converge to optimal control u (ill- posedness in the sense of Tikhonov). Example (3.7.3) ([A2],[B4],[H2],[K6],[L8]). Optimal Con- trol Problems.As is well known, Optimal control problems lead to the necessity of solving variational problems. We 43 Shall consider the following optimal control problem and we shall formulate the definition Of ill-posedness Of such a problem. We wish to minimize a functional F(x(t)) where x(t) is given by f((t) = f(x(t)tu(t)rt)r X(t0) = X0 and the control function u(t) belongs to some set U. We assume that MinF(x) = F0 usU exists and is attained with some function u eU. 
[In prac- 0 tice, an algorithm is used to generate an approximate solu- tion.] As a result, it is hOped that there is obtained a minimizing series, i.e., a series {un(t)} such that: lim Fn = lim F(x(un),t) = F n+oo n+0) 0 For the moment, assume that _ _ 2 F - F(u) — Q) x (u,t)dt 0 0,T=1,X(0)=0 C \ 44 U = {u= |u(t)| g 1. u(t)eL2[O,T]}. Then, using basic Optimal control theory, H(x(t).u H = x2(u,t) + pu(t) p = - g; = -2x(t) -_1I1= x — 8p u(t) Using the minimum principle: * * * * * H(x ,u ,p ) i H(x ,u,p )VugU Therefore: *2 * * *2 * x (t) + p (t)u (t) i x (t) + p (t)u(t) * * * P (t)u (t) i p u . * Case 1 - If p (t) # 0 then * u (t) -1 if p > 0 * u (t) +1 if p < 0 45 u*(t) in this case cannot be zero because if u* = 0 then for p* > 0 or p* < 0 and, then H(x*,u* = 0, p* > 0 or p* < 0) = x*2(t) H(x*,u* = l, p* > 0 or p* <0) = x*2(t) i lp*(t)| H(x*,u* = O, p* > O or p* < 0) > H(x*,u = :1, p* < 0 or p* > 0) which is in violation of minimum principle. In case 1 it follows that: u*(t) = :1 V p* < O or p* > 0 and u* also belongs to admissible set U since: ||u*||L2 = [filu*l2dt = [fidt = l . . Thus in case one, when p*(t) < 0 the solution ex1sts and is: p*(t) = -2x = :2t + p* = :t2 x* = t ’X* = ’t o I . * _ 2 * _ 2 Optimal solutions. p - -t or p — t u* = +1 u* = -l and the cost function is: l F(u) = 10x2 (u,t)dt = fétzdt =§ (3.7.3) Case 2: In this case if p*(t) = 0, then we have -1 5’u* i l p* = 0 = -2x* + x* = O 47 This case is called the Singular case. Note, since the dimension of the system is one, then the number of switches that we can have for x is zero, and therefore one of the three forms +1, -1, 0 is acceptable as an optimal control [A2]. Now for the case u* = O the cost function is: H O F(u) = flodt Therefore, the cost function in Case 2 (F = O) is less than the cost function in Case 1 (F = 1/3), therefore in singularity interval the cost functional reaches its mini- mum value. Thus the control u*(t) E O is the Optimal control. Now, apply the numerical computation in order to reach the minimum value of the functional F. Let's consider the minimizing series as: nt2n un(t) = Sin T‘ for T=l un(t) = 51n2nnt Then xn = fundt = - 2nn cosZnnt + An x(O)=0->A =—-1— n n 2hn Therefore: 48 xn(t) = 2%n (1 - cosZnnt) an) = I01 xi(t) = fl ———l—§ (l-cosZnnt)2dt (2hn) l 1 l 1 dt 1 cos4wnt Fn(u) = dt] Fn(u) = —-——% 2[1 + ;s] = :2 4n n 8n n It is easy to see that {xn} and {Fn} converge to their optimal values: . _ . l _ _ * 11m xn — 11m 5?; (1 cosant) — 0 — x n+oo name 0 — o 3 _ - * 11m F — 11m — 0 - F n n 2 n+w n->co 8n n However, the minimizing series un(t) = Sin2nnt con- verges to u* = 0 only weakly, and is not a strong measure of convergence. The weak convergence Of {un(t)} follows from a well-known lemma. (Norms of the functions un(t) in the measure L2 are bounded, and the scalar derivatives from the entire system of functions tend to zero. Such a system may be considered to be {sinnt, cosnt} or the set of step functions that are characteristic functions of the intervals 49 . T . T [1 Er (1+1) 3])- Therefore there exists at least one minimizing series not converging to an extremal (weakly converging) solution in the sense of L metric measurement of space U, and the Optimal 2 control problem is ill-posed in the sense of Tikhonov. Exapple (3.7.4) ([A2],[B5],[K6]). 
Consider the following optimal control problem: F(u) = fg f(x,t)at + ¢(x(T)) Subjected to the constraint: x(t) = Ax + bu, lulil, u(t)cL2, x(O)=C We have again: f(x,t) + PT(Ax+bu) H 2* = Ax* + bu* 3f x*,t T Using the minimum principle: f(x*,t)+p*TAx*+p*Tbu*:f(x*,t)+p*TAx*+p*Tbu p*Tbu* i p*Tbu 50 Case 1: p*Tb > o + u* = —1 p*Tb < O + u* = +1 Case 2: P*Tb = 0 + singular case. Then: ,. 8f T T T 3f T _ _ *T + (3;) b - p Ab boundary condition: 1% .. * T §_¢_ : (3x T p (T))6Xf + [H + atjétf 0 since the final time is fixed but the final state is free: [3¢(§;t)) JTb = p*T(T)b = 0 T 51 Now suppose u* is the Optimal control (for either Case 1 or 2), with |u*| :1 v t€[O,T] Consider the minimizing series: where an is the variation Of the minimizing series having the form: ’1 for 2i DID-3 SIP-3 : t : (21+l) :31 ll -1 for (21-1) SIP] 2i | A ('1- A sue It is easy to see that the series {un(t)} converges to u* weakly, (according to the lemma which we mention in Example (3.7.3)). However, in the sense of the measure L2 it does not converge: _ * = ~ = ~ = IlunullL llumllL f0(u)dt T7‘0 2 2 (The proof Of the above equation will be given in the next sections). An analogous result may be obtained for a functional of 52 the form: F(u) = fOTf(X.t)u(t)dt + f(xm) Therefore for above Optimal control problems there exist a minimizing control un which is not converging (in metric L2) to an Optimal value u*, while the cost functional converges to its minimum value. Now, let's show that {un} is a minimizing sequence. TO prove that, let's consider dxn -a—t—= AXn + b(un+u*) _ At t A(t-T) ~ * xn — e C + L) e b(un + u )dT x = eAtC + ft eA(t-T)bfi dt + ft eA(t-T)bu*dt n O n 0 but, since fin is a minimizing sequence as defined before then: . t A(t-T) ~ 11m f0 e bun n+oo + 0 Therefore lim xn = eAtC + a; eA(t-T)bu*dr = x* n-Hao 53 where x* is the solution of the following equation, we also have F(un) = fgf(xn,t)dt + ¢(xn(T)) F(un) = Q§f(x*+gn(t>.t>dt + ¢(x*+gn(T)) where: t eA(t-T)bfin(T)dT Now we have: lim F(un) = lim 4?f(x*+gn(t),t)at + lim(x*+gn(T)) n+oo n-mo n+oo In most practical cases fenuio are well behaved func— tions therefore: limF(un) = Q?f(x*+limgn(t),t)dt+¢(x*+limgn(T)) n-mo n+oo n+oo lim F(un) = &;f(x*,t)dt + ¢(x*(T)) = F*(u*) n+oo 54 We can also say that at n=w xn converges to x*, therefore, the value of cost functional at n=m is equal to: lim F(xn(t))= fgf(x*, t)dt + ¢(x*(T)) = F* n+m Therefore {un} is a minimizing sequence which does not con- verge tO its optimal value, and therefore the problem is ill-posed in the sense Of Tikhonov. For the case when the functional F is as follows: F(u) = pf f(x,t)u(t)dt + f(x(T)) we have: —— T ~ F(un) — &) f(xn,t)(un(t)+u*)dt + f(xn(T)) Since xn x* we will have: n+oo lim F(un) = 4?f(x*,t)u*dt + f(x*(T)) + 1im &?f(x*,t)findt n+°° n+oo but according to the given definition for fin (21+1) IA ri- IA 21 I H N H. I H I |A rt A tile 55 It is easy to prove: T * ~ h(t) lb f(x ,t)un dt d Therefore lim {F f(x*,t)fi dt d lim h(t) + o n n n+oo n+oo and thus: lim F(un) + 4? f(x*,t)u*dt + f(x*(T)) = F* n+oo Therefore {un} is also minimizing sequence for this Optimal control problem, which does not converge to its Optimal value, and the problem is ill-posed in the sense of Tikhonov. Example (3.7.5) ([A2],[B5],[K6]). Consider the follow- ing Optimal control problem: Assume that the functional F(x(u)) has the form: F(x(u)) = fOTf(X,t)dt + ¢(x(T>) which is a continuous function of x. 
Suppose the system is a nonlinear system with the form: x = f(x,t,u), x(0) = C, tc[O,T], 56 and the control function u(t)cC, that is, belongs to a space having a measure of uniform convergence: ||u|| = max|u(t)|. This problem is always ill-posed. Let u* be the Optimal solution giving the functional the minimum value F*. We set up the series of controls {un} each of which differs from u* only in a small interval At, with the norm of the difference ||u* - unll itself being some fixed number not depending on n. ||u* - unll = maxlu*-un| = constant in n For sufficiently general assumptions about the right part f(x,t,u) it is possible to choose the magnitude of this interval At such that: F(x(u )) - F(x(u*) < l n n Then the series {un} is minimizing series since: lim (F(x(un)) - F(x(u*)) < lim % = o n+m n+m and therefore 1im F(x(un)) + F(x(u*)) . n+0!) However, {un} does not converge to u* in the sense of the uniform measure: lim||u*-un|| = lim max u*-un| = 1im constant = constant # 0 . n+oo n-mo n+oo Therefore the problem is ill-posed in the sense of Tikhonov. Thus, many Optimal control problems of practical in- terest are incorrect. It appears that problems having bounds on the control (of the type llu(t)I|:M, o i t i T) are correct in the majority of cases. Nevertheless, if there is found a control passing along the interior points of M on a set of positive size, then it is Shown that (as example (3.7.4) ([K6]) that the problem is ill-posed. (Notice that the Ex- ample 3.7.4) is the special case Of general problem Of Ex— ample (3.7.5).) 3.8. A Characterization of Well—Posedggptimal Regulator Problems by Zolezzi In this section, a constrained optimal regulator prob- lem is considered, and Zolezzi's major theorems concerning the well-posedness conditions for Optimal regulator problems has been stated. He proved the continuous dependence Of the Optimal control on the desired trajectory (Hadamard 58 well-posedness) or convergence toward the optimal control of any minimizing sequence (Tikhonov well-posedness) when the dynamics are affine (linear plus constant) [24]. He also Obtained the dense well-posedness in the non affine cases. We consider the following Optimal regulator problem: _ T T T F(v,u) — 4)[(u-u*) Q(u-u*) + (x-x*) p(x-x*)]dt + [(x(T)-y*)TE(x(T)-y*fl (3.8.1) on the trajectories (v,u,x) Of the system: g(t,x(t),u(t)) t€[0:T] x. r1. II x(0) = v (3.8.2) Subject to the following constraint (VIuIX)EK (3.8.3) From a theoretical as well as practical standpoint it is useful to know in advance which functions g define well- posed optimization problems as above for as wide a class of desired trajectories as possible. Let us remark that if K is a closed convex set and g is an affine function, that is g(t,x,u) = A(t)x + B(t)u + C(t) (3.8.4) 59 with appropriate matrices A(t), B(t) and C(t), then the cor- responding affine regulator problem can be naturally viewed as a convex best approximation problem in a Hilbert space, whenever the matrices P, Q, and E are positive (semi) defi- nite in a suitable way. This geometrical interpretation automatically gives Tikhonov well-posedness for all desired trajectories in this affine case as a consequence Of the classical Riezz projection theorem. Zolezzi's theorems are devoted to a study of a well— posedness Of Optimal regulator problems mainly in the non- linear case. The existence of Optimal controls may not hold in general, therefore well-posedness may fail also. 
The pair (v,u)cRm 9 L2 will be referred to as a control, and any continuous solution x of (3.8.2) as a state. The control (v,u) is called admissible if there exists a state x corresponding to (u,v) such that (v,u,x) satisfies (3.8.3). The corresponding Optimal control problem will be referred to as Problem (3.8.1), (3.8.2), (3.8.3), and as Problems (3.8.l»(3.8.2) whenever it is unconstrained. For any Problem (3.8.1), (3.8.2), (3.8.3) considered in the following, we will assume that some admissible con- trol exists. The following definitions will be used. Given a non- empty subset DCRme L29 L2 60 any of the Problems (3.8.1), (3.8.2), (3.8.3) will be called: Tikhonov well-posed in Djiflffor every 2* = (y*,u*,x*)gD there exists an unique Optimal control (v*,u*)eRm 0 L2, and for every minimizing sequence of admissible controls (Vn' un)e:Rm 0 L2, that is F(vn,un) + inf{F(v,u): (v,u) admissible} We have v + v*in Rm, u + u* in L2 , n n Hadamard well-posed in Didfiffor every z*eD there exists an unique Optimal control (v*,u*), and the mapping 2* + (u*,V*) is continuous in D between the strong topologiescflme 0 L2 6 L2 and Of RIn 6 L2. When D is the whole space, we shall say that the problem is well—posed for every desired tra- jectory. Now consider the following assumptions: (1) For every toe[0,T], vsRm, ueL2 there exists a unique continuous function x, defined in the whole inter— val [0,T] such that 6l x(to) = v, x = g(t,x,u) for tc[0,T]. Such a unique solution will be denoted by X = 2(to,v,u) . (2) Given vn + v in Rm, un + u in L2 then z(o,vn,un) + z(o,v,u) uniformly in [0,T]. (3) There exists a positive constant p such that for every vector C of the appropriate dimension CTQ(t)C 3 aICI, CTp(t)C 3 0, CTEC > 0 (4) There exists a closed set LC[0,T] Of Lebesgue measure 0 such that for every ucRK the function g is continuous at every (t,x)c[O,T]xRm if t ¢ L. (5) There exists a positive constant a such that for every vector C Of the appropriate dimension CTQ(t)C 3 a|C|2, CTp(t)C 3 a|C|2, CTEC 3 a|c|2 Now let's present Zolezzi's theorems as follows: Theorem 1 ([Zl],[z4]). Let veRm be fixed. Assume (3) 62 and let 9 be an affine function of the form (3.8.4) with A,CcLl and BeLz. Let K be closed and convex. Then for every desired trajectory the Problem (3.8.1), (3.8.2), (3.8.3) is both Tikhonov and Hadamard well-posed. Theorem 2 ([24]). Assume 9 an affine function of the 1 and BcLZ, let K be closed and form (3.8.4) with A,CcL convex, and assume (3). Suppose E is a positive definite matrix. Then for every desired trajectory the corresponding problem (3.8.1), (3.8.2), (3.8.3) is both Tikhonov and Hadamard well-posed. Now we will present a Theorem which has considered the Optimal regulator problem (3.8.1), (3.8.2), (3.8.3) without the assumption that g is affine. Theorem 3 ([Z3],[Z4]). Fix veRm and assume (1), (2), (3). Let K be a closed set. Then for every y*cRm and x*cL2 2 there exists a dense subset GCL such that for every u*eG the corresponding problem (3.8.1), (3.8.2), (3.8.3) is Tikhonov well—posed. Moreover un,ch and un + u in L2 imply that the Optimal controls u* + u* in L2, the Optimal n states x; + x* uniformly and the values vn + v. Theorem 4 ([24]). Assume that K is a closed set with bounded projection on B“. Let (1), (2), and (5) hold. 2 2 Then there exists a dense subset D Of Rm + L + L such that 63 the Problem (3.8.1), (3.8.2), (3.8.3) is both Tikhonov and Hadamard well-posed in D. Now we will present the theorems which have considered the well-posedness Of the problems without constraints. 
We shall work within the Hilbert space H = R^m ⊕ L² ⊕ L², and take K = R^m ⊕ L² ⊕ L², so that the problem is unconstrained.

Lemma [Z4]. Assume (1), (2) and (5). Then the set G defined by

G = {(x(T), u, x) ∈ H : u ∈ L², x = z(0,v,u) for some v ∈ R^m}

is convex if for every desired trajectory the problem (3.8.1), (3.8.2) is Tikhonov or Hadamard well-posed.

Theorem 5 [Z4]. Assume (1), (2), (4) and (5). If problem (3.8.1), (3.8.2) is Tikhonov or Hadamard well-posed for every desired trajectory, then there exist matrix-valued functions A, B, C, continuous in [0,T], such that

g(t,x,u) = A(t)x + B(t)u + C(t)

for every t ∉ L, x ∈ R^m and u ∈ R^k.

Theorem 6 ([Z1],[Z4]). Suppose that assumptions (1), (2), (4) and (5) hold. Moreover, let g(·,x,0) ∈ L¹ for every x ∈ R^m and g(·,0,u) − g(·,0,0) ∈ L² for every u ∈ R^k. Then a necessary and sufficient condition for g to be affine for almost every t ∈ [0,T] is that the optimal control problem (3.8.1), (3.8.2) be Tikhonov or Hadamard well-posed for every desired trajectory.

CHAPTER 4

METHODS OF SOLVING THE ILL-POSED PROBLEMS

4.1 - Introduction

In this chapter we shall consider different approaches to the solution of ill-posed problems. As we pointed out in Chapter 3, the most fundamental difficulty in the solution of ill-posed problems is that, when approximate methods are used, a small error in the data may lead to a significant deviation from the required solution.

As an example, we considered Fredholm's integral equation of the first kind in Chapter 3,

∫_a^b K(x,ξ) f(ξ) dξ = y(x),   (4.1)

and we showed that this problem is ill-posed in the sense of Hadamard. The problem was investigated by Phillips [P1], who gave several illustrative examples applying his technique. However, the character of Phillips' work is more intuitive than mathematical, and it is not applicable to a general class of ill-posed problems.

The possibility of determining approximate solutions of ill-posed problems that are stable under small changes in the initial data is, for more general problems, based on the use of additional information regarding the solution [P1]. Additional information can be of a quantitative nature, which enables us to narrow the class of possible solutions, for example, to a compact set, so that the problem becomes stable under small changes in the initial data. Additional information can also be of a qualitative nature, in the form of information about the solution itself (for example, information regarding its smoothness).

There are five important techniques which may be applicable to the solution of ill-posed problems; they are based on the various kinds of supplementary information which may be available. These techniques are [T5]:

(1) The selection method.

(2) The method of quasisolutions.

(3) The method of replacement (of the original equation with an equation close to it).

(4) The method of quasiinversion.

(5) The regularization method.

4.2 - The Selection Method of Solving Ill-Posed Problems

We shall consider the problem of solving the equation

Ax = y   (4.2.1)

for x, where y belongs to a metric space Y and x belongs to a metric space X. The operator A maps X onto Y. It is assumed that A has an inverse operator A⁻¹ which is, in general, not continuous (an ill-posed problem).

The selection method of solving equation (4.2.1) approximately consists in calculating Ax for elements x belonging to some given subclass M ⊂ X of possible solutions; that is, we solve the direct problem [T5]. As an approximate solution we take an element x₀ of the set M for which the difference ρ_Y(Ax, y) attains its minimum:

ρ_Y(Ax₀, y) = inf_{x ∈ M} ρ_Y(Ax, y).
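A minimal numerical sketch of this selection procedure follows; the smoothing kernel A, the candidate family M, and the noise level are my own illustrative choices rather than anything from the text.

```python
import numpy as np

# A minimal sketch (my own discretization, not from the text) of the selection
# method: the operator equation Ax = y is "solved" by scanning a prescribed
# compact family M of candidate solutions and keeping the candidate whose image
# Ax lies closest to the (noisy) data y.
rng = np.random.default_rng(0)
n = 50
s = np.linspace(0.0, 1.0, n)
A = np.exp(-10.0 * (s[:, None] - s[None, :])**2) / n    # smoothing kernel (ill-conditioned)

x_true = np.sin(np.pi * s)                              # an element of the candidate class
y_delta = A @ x_true + 1e-3 * rng.standard_normal(n)    # data with error delta

# Compact candidate class M: sinusoids with a few frequencies and amplitudes
M = [a * np.sin(k * np.pi * s) for a in np.linspace(0.5, 1.5, 21) for k in (1, 2, 3)]

x0 = min(M, key=lambda x: np.linalg.norm(A @ x - y_delta))   # the selection step
print("residual:", np.linalg.norm(A @ x0 - y_delta),
      "error vs x_T:", np.linalg.norm(x0 - x_true))
```

Because the candidate family here contains the exact solution, the smallest residual is attained essentially at x_T, in line with the discussion that follows.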
Suppose that we know the right-hand member of equation (4.2.1) exactly, y = y_T, and that we are required to find its solution x_T. If the desired exact solution x_T of equation (4.2.1) belongs to the set M, then inf_{x∈M} ρ_Y(Ax, y_T) = 0 and this infimum is attained at the exact solution x_T. If equation (4.2.1) has a unique solution, the element x₀ minimizing ρ_Y(Ax, y) is uniquely defined [T5].

In practice, the minimization of ρ_Y(Ax, y) is carried out only approximately. If {x_n} is a sequence of elements in M such that ρ_Y(Ax_n, y) → 0 as n → ∞, the question arises whether x_n → x_T. The success of the selection method rests on general functional requirements restricting the class M of possible solutions so that the method is stable and x_n → x_T. These requirements amount to compactness of the set M and are based on the following topological lemma:

Lemma - Suppose that a compact subset X₀ of a metric space X is mapped onto a subset Y₀ of a metric space Y. If the mapping X₀ → Y₀ is continuous and one-to-one, then the inverse mapping Y₀ → X₀ is also continuous.

Thus the minimizing sequence {x_n} in the selection method converges to x_T as n → ∞ if x_T belongs to a compact class M of possible solutions.

Therefore, by applying the selection method, the problem that is ill-posed due to the unboundedness of the operator A⁻¹ may attain stability if a solution is sought in a given compact set M ⊂ X, for which the mapping M = A⁻¹N is continuous on N = AM. If y ∈ N and, while varying y, we do not leave N, then x depends continuously on y.

Thus, if the operator A is continuous and one-to-one, the compact set M to which x_T is restricted is a well-posedness class for equation (4.2.1), and the selection method can be used successfully to solve the problem.

4.3 - The Method of Quasisolutions

Consider the operator equation

Ax = y   (4.3.1)

where the operator A has an inverse A⁻¹ which is not continuous. If this problem is solved in some compact space M, then the conditions for well-posedness in the sense of Tikhonov are fulfilled and the problem has a stable solution (the selection method). Generally, however, there are no effective criteria for deciding whether y belongs to the set N = AM. Moreover, for an approximate solution we use the approximate value y_δ, which may not lie in N, so that in general the solution may not exist or may not belong to M ([T5],[I6]).

In this connection it is natural to change the statement of the problem and, instead of the exact solution of equation (4.3.1), to seek a quasi-solution. The classical conditions of well-posedness are preserved for a quasi-solution (Theorem 4.3.1), and convergent processes for finding it can be obtained by modifying well-known methods [I6]. If for a given y there exists a true solution in M, then the quasi-solution coincides with it; in other cases it gives the best approximation to the solution.

Definition: An element x₀ is called a quasi-solution of equation (4.3.1) on a given compact set M of the space X, for a given y₀ ∈ Y, if x₀ ∈ M and ||Ax − y₀|| attains its minimum on M at x₀ [T5].

If the space X is a linear metric space, Y is a Banach space, and the operator A is linear and continuous, then a quasi-solution exists for any compact set M and any y₀ ∈ Y. For uniqueness of the quasi-solution it is necessary and sufficient that the image N = AM be convex [T5]. The quasi-solution in this case depends continuously on y₀. If y₀ ∈ N, then the quasi-solution coincides with the exact solution x₀ ∈ M. Thus the quasi-solution is the generalization of the exact solution for which the problem is well-posed in the sense of Hadamard.

These results can be generalized to a closed operator A that is not continuous. In that case the set N = AM need not even be compact. Therefore, in order to ensure the existence and stability of the quasi-solution, certain geometric properties of the unit sphere in Y must be used, and the requirements on the spaces X and Y must be restricted. Consider the following two theorems (by Ivanov [I3],[I5]):

Theorem (4.3.1) - A quasi-solution of equation (4.3.1) exists for any nonempty compact set M ⊂ X and any y ∈ Y. If M is convex and the sphere in the space Y is strictly convex, then the quasi-solution is unique and depends continuously on y (well-posedness in the sense of Hadamard) [I6].

Under the conditions of Theorem (4.3.1) the projection of a point onto a convex set is uniquely determined; hence the uniqueness of the quasi-solution. Continuous dependence follows from uniqueness and compactness.

Theorem (4.3.2) - Let λ_n and u_n denote the eigenvalues and eigenelements of A*A, and let β_n = (A*y, u_n). The quasi-solution of equation (4.3.1) on S_R (the ball ||x|| ≤ R) is expressed by the formula

x = Σ_n β_n / (λ_n + λ) u_n,   (4.3.2)

where λ = 0 if

Σ_n β_n² / λ_n² ≤ R²,   (4.3.3)

and λ is determined from the condition

Σ_n β_n² / (λ_n + λ)² = R²   (4.3.4)

if Σ_n β_n² / λ_n² > R². The proof is given in [I6].

Ivanov's quasi-solution technique coincides with Tikhonov's regularization method on some broadened compact set M_c:

M_c = {x : Ω(x) ≤ c},

where Ω(x) is a nonnegative convex functional satisfying a certain additional assumption. The family of quasi-solutions is determined by the condition

||Ax_c − y|| = min_{x ∈ M_c} ||Ax − y|| = min_{x : Ω(x) ≤ c} ||Ax − y||,

and, for c → ∞, forms a regularized class of approximate solutions.

4.4 - Replacement of the Equation with an Equation "Close to It"

Equations of the form

Ax = y,   (4.4.1)

where x ∈ X and y ∈ Y, in which the right-hand member y does not belong to the set N = AM, have been studied by Lavrentyev [L2]. His idea was to replace the original equation with an equation that is in some sense close to it, for which the problem of finding the solution is stable under small changes in the right-hand member and solvable for an arbitrary right-hand member y belonging to Y. This technique is as follows.

Suppose that X = Y = H are Hilbert spaces, that A is a bounded, positive, self-adjoint linear operator, that S_R ≡ {x : ||x|| ≤ R, x ∈ X} is the ball of radius R in the space X, and that B is a completely continuous operator defined, for every R > 0, on S_R. As the well-posedness class M we take the set D_R = BS_R, that is, the image of the ball S_R under the operator B. It is assumed that the exact solution x_T of equation (4.4.1) with right-hand member y = y_T exists and belongs to the set D_R. Equation (4.4.1) is replaced with the equation

(A + αI)x ≡ Ax + αx = y,   (4.4.2)

where α is a positive numerical parameter. With an appropriate choice of the parameter α, the solution of equation (4.4.2),

x_α = (A + αI)⁻¹ y,

is taken as the approximate solution of equation (4.4.1). Here I is the identity operator.

We can estimate the deviation ρ_X(x_T, x_α) of the approximate solution from the exact one by using the "modulus of continuity" ω of the inverse operator on N [T5]. Therefore, in employing the method of replacement, we assume that instead of knowing the exact value of y we know an approximation y_δ with accuracy δ, and that the function ω(δ), or a majorant of it, is known.
4.5 - The Method of Quasiinversion

We showed in Chapter 3 that the Cauchy problem for the backward heat equation is unstable under small changes in the initial values. The instability remains in cases in which the solution is subject to certain extra boundary conditions. The method of quasiinversion has been developed for the stable solution of such problems [T5]. It can be applied to a broad class of problems, such as ill-posed problems in the area of electromagnetics [T5].

4.6 - The Regularization Method

In the previous methods we assumed that the class of possible solutions of the equation

Ax = y   (4.6.1)

is a compact set. However, for many applied problems this class X is not compact, and changes in the right-hand member of equation (4.6.1) can take y outside the set AX. We shall call such problems "genuinely ill-posed" problems ([T5],[K6]). A new approach to the solution of ill-posed problems enables us, in the case of genuinely ill-posed problems, to construct approximate solutions of equation (4.6.1) that are stable under small changes in the initial data. This approach is based on the fundamental concept of a regularizing operator [T6].

The regularization method thus makes it possible to construct a sequence of uniformly converging solutions without making a priori assumptions about the solution belonging to a given compact space [T6]. Instead, it is assumed that the solution satisfies certain smoothness requirements and, in addition, that the degree of the error with which the initial data are given is known.

Suppose that the operator A in equation (4.6.1) is such that its inverse A⁻¹ is not continuous on the set AX and that the set X of possible solutions is not compact. We assume that there exists a solution x_T corresponding to the input y_T. In practical problems, however, the right-hand member is always given with some error δ; that is, a function y_δ is known such that

||y_δ − y_T|| ≤ δ.

It is obvious that the approximate solution x_δ of equation (4.6.1) cannot be defined as the exact solution of this equation with approximate right-hand side y = y_δ, that is, as

x_δ = A⁻¹ y_δ.

As the right-hand side y_δ approaches (in the metric of the space Y) the exact value y_T, the approximate solution x_δ must approach (in the metric of the space X) the exact solution x_T. However, for an incorrect problem the solution of the equation

Ax = y_δ   (4.6.2)

may not exist, may not be unique, or may deviate greatly from x_T. It is natural to determine the approximate solution x_δ corresponding to the perturbed input as some function close in norm to the solution x_T:

||x_δ − x_T|| ≤ ε.

Here x_δ = R(y_δ) need not satisfy equation (4.6.1) exactly; there arises the problem of determining a stable solution for which ε → 0 as δ → 0. Such a problem formulation is especially typical for the so-called inverse problems [T7].

We now present the regularization technique for solving the ill-posed inverse problem [T5]. Suppose that the elements x_T ∈ X and y_T ∈ Y are connected by Ax_T = y_T.

Definition 1: An operator R(y,δ) is called a "regularizing operator" for equation (4.6.1) in a neighborhood of y = y_T if

(1) there exists a positive number δ₁ such that the operator R(y,δ) is defined for every δ in 0 ≤ δ ≤ δ₁ and every y_δ ∈ Y such that ρ_Y(y_δ, y_T) ≤ δ; and

(2) for every ε > 0 there exists a δ₀ = δ₀(ε, y_T) ≤ δ₁ such that the inequality

ρ_Y(y_δ, y_T) ≤ δ ≤ δ₀

implies the inequality

ρ_X(x_δ, x_T) ≤ ε,

where x_δ = R(y_δ, δ).
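A discretized illustration of Definition 1 may be useful here. In the sketch below (my own example; the operator used is the variational one constructed later in this chapter, R(y, α) = (AᵀA + αI)⁻¹Aᵀy), a tiny data error destroys the naive solution A⁻¹y_δ, while the regularized element stays close to x_T.

```python
import numpy as np

# Sketch (my own discretization) contrasting the naive inverse with a regularizing
# operator in the sense of Definition 1: x = A^{-1} y_delta is destroyed by a small
# data error, while x_delta = R(y_delta, alpha) = (A^T A + alpha I)^{-1} A^T y_delta
# (the variational operator discussed later in this chapter) stays near x_T.
rng = np.random.default_rng(2)
n = 80
s = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (s[:, None] - s[None, :])**2) / n    # severely ill-conditioned kernel
x_T = s * (1.0 - s)
y_T = A @ x_T

delta = 1e-6
y_delta = y_T + delta * rng.standard_normal(n)

x_naive = np.linalg.solve(A, y_delta)                   # unstable: the error is amplified
alpha = 1e-4                                            # an admissible alpha(delta), chosen by hand
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

print("cond(A)                  :", f"{np.linalg.cond(A):.2e}")
print("||A^-1 y_delta - x_T||   :", f"{np.linalg.norm(x_naive - x_T):.2e}")
print("||R(y_delta,alpha) - x_T||:", f"{np.linalg.norm(x_reg - x_T):.2e}")
```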
This definition does not assume uniqueness of the operator R, and x_δ denotes any element of the set {R(y_δ, δ)}. In many cases it is more convenient to use another definition of a regularizing operator [T5].

Definition 2: An operator R(y, α) depending on a parameter α is called a regularizing operator for equation (4.6.1) in a neighborhood of y = y_T if

(1) there exists a positive number δ₁ such that the operator R(y, α) is defined for every α > 0 and every y ∈ Y for which

ρ_Y(y, y_T) ≤ δ ≤ δ₁; and

(2) there exists a function α = α(δ) of δ such that, for every ε > 0, there exists a number δ(ε) ≤ δ₁ such that y_δ ∈ Y and

ρ_Y(y_δ, y_T) ≤ δ ≤ δ(ε)

imply

ρ_X(x_T, x_α) ≤ ε,

where x_α = R(y_δ, α(δ)).

Again, there is no assumption of uniqueness of the operator R(y_δ, α(δ)). We should point out that here the function α = α(δ) also depends on y_δ. Dependence of the parameter α on y_δ implies that it is also dependent on y_T, and hence on x_T, since Ax_T = y_T.

If ρ_Y(y_T, y_δ) ≤ δ, we can take as an approximate solution of equation (4.6.1) with approximately known y_δ the element x_δ = R(y_δ, α) obtained with the aid of the regularizing operator R(y, α), where α = α(δ, y_δ). This solution is called a "regularized" solution of equation (4.6.1), and the numerical parameter α is called the "regularization parameter" [T6].

Obviously, every regularizing operator, together with a choice of the parameter according to the condition α = α(δ), defines a stable method of approximate construction of the solution of equation (4.6.1) ([T6]). If we know that

ρ_Y(y_T, y_δ) ≤ δ,

we can, by the definition of a regularizing operator, choose the value of the regularization parameter α = α(δ) in such a way that, as δ → 0, the regularized solution x_α = R(y_δ, α(δ)) approaches (in the metric of X) the exact solution x_T, that is,

ρ_X(x_T, x_{α(δ)}) → 0.

Thus the problem of finding an approximate solution of equation (4.6.1) that is stable under small changes in y reduces to (a) finding regularizing operators, and (b) determining the regularization parameter α from additional information on the problem (for example, the size of the error in the right-hand member y_δ). This method of constructing approximate solutions is called the "regularization method" [T5].

Of all the operators R(y, α) from Y into X that depend on the parameter α and are defined for every y ∈ Y and every positive α, we need only those which are continuous with respect to y. For these we can give sufficient conditions for membership in the set of regularizing operators for equation (4.6.1). This is due to the following theorem (by Tikhonov [T5]).

Theorem (4.6.1) - Let A denote an operator from X into Y, and let R(y, α) denote an operator from Y into X,

R(y, α): Y → X,

that is defined for every element y of Y and every positive α and that is continuous with respect to y. If

lim_{α→0} R(Ax, α) = x

for every element x of X, then the operator R(y, α) is a regularizing operator for the equation Ax = y.

We now present a method of constructing regularizing operators for equation (4.6.1) which is based on a variational principle. We shall assume that the equation Ax = y_T has a unique solution x_T [T5].

(1) Let H(x) denote a continuous nonnegative functional defined on a subset X₁ of X that is everywhere dense in X. Suppose that:
Suppose that we know that the difference between Y6 and yT does not exceed 6, that is, oY(y5.yT) : a. It then is natural to seek an approximate solution in the class D6 of element x such that This D6 is the set of possible solutions. However, we cannot take an arbitrary element x6 of D6 as the approxi- mate solution of Equation (4.6.1) with y = Y6 because such a "solution" will not in general be continuous with respect to 5. Therefore, the set D6 is too broad. We need a rule 83 for selecting the possible solutions that ensures that we obtain as an approximate solution an element of D6 that depends continuously on 6. Suppose that H(x) is a stabilizing functional defined on a subset X1 of the set X. We Shall consider only those elements of D6 on which the functional H(x) is defined, that is, we shall consider only elements of the set Among the elements of this set, let us find the one that Let x denote will minimize the functional H(x) on X 6 1,5. such an element. Then X6 is equal to X6 = R(y6,6) . It has been shown that ([T5],[T6]) the operator R(y6,6) is a regularizing operator for Equation (4.6.1) and therefore the element can be taken as an approximate solution of Equation (4.6.1). If the equation Ax = yT has more than one solution, this method can still be used to construct a regularizing operator [T4]. In this case, every convergent subsequence 84 {X6 } converges to some solution of equation (4.6.1) with y =nyT, although different subsequences may converge to different solutions. Therefore, with this approach, the problem of finding an approximate solution of Equation (4.6.1) with approxi- mate right-hand member reduces to the following problem: Minimizing Q(x) , on the set where D 0, so that the element Xa = R1(Yla) minimizes the functional Ma(x,u). It is shown that the operator Rl(y,6) is a regularizing operator for Equation (4.6.1) [T5]. The question of determination of the regularization parameter will be treated only for regularization operators Rl(y,6) obtained by the variational method [T4]. It is usually difficult to actually find the regulariza- tion parameter 6 as a function of 6(6) (where 6 is the error -7.“ 92 in the initial data) for which the operator Rl(x,6(6)) is a regularizing operator. In many practical cases, we know a number 6 characterizing the inaccuracy of the initial data. The problem is then to find the corresponding value of the regularization parameter 6 out of all admissible values, that is, values that are equal to the value of one of the functions 6 = 6(6) for which the operator R1(x,6(6)) is a regularizing Operator. The choice of the admissible value of the regularization parameter depends on the information available regarding the approximate initial information. There are various ways of finding such a value 6. In computational practice, one way of determining 6 from the error 6 is shown in ([T4],[T5]). It is shown that ([T4]) in computational practice, this value of 6 can be found approximately either by a sorting from a given set of values 61,62...,6n or by Newton's method which converges for an arbitrary initial ap- proximation 60 > O. 4.7. Methods for the Regularization of Optimal Control Problems Consider the system of equations x = f(t x,u), (x=xl,...,xn), (u=ul,...,um), to i t g T: with control functions u(t) from some complete functional 93 class U and with the initial conditions x(to) = x0 and let there also be given the continuous non-negative functional F(x) defined on the functions x(t) specified in the interval t i t i T. 
Let us suppose that the class U contains an optimal control, that is, there exists a function ū^(0)(t), with corresponding trajectory x(t, ū^(0)), for which

inf_{u∈U} F(x(u)) = F(x(ū^(0))) = F₀.

We now consider the problem of approximating ū^(0)(t) [T8].

Approximation of the optimizing control can be attempted by minimizing the functional F, that is, by constructing a sequence of functions u_n(t) such that [T8]

F_n = F(x(u_n)) → F₀ as n → ∞.

A function u_n(t) for which the value of F_n is sufficiently close to F₀ is then taken as an approximation of ū^(0)(t). We pointed out in Chapter 3 that the optimal control problem is ill-posed if u_n(t) does not approach ū^(0)(t). Namely, it is not difficult to find a control ū(t) ∈ U such that

F(x(ū)) ≤ F₀ + ε

and such that the difference ||ū(t) − ū^(0)(t)|| assumes arbitrarily large values. We choose ū(t) to coincide with ū^(0)(t) everywhere except in a small interval (t₁ − η, t₁ + η) about some point t₁, where the difference ||ū(t) − ū^(0)(t)|| is made to exceed some fixed number M₀ permissible within the class U. Under sufficiently general assumptions about the right-hand side f(x,t,u), it is clear that for any δ the quantity η can be chosen so that

||x(t, ū) − x(t, ū^(0))|| ≤ δ.

Taking η, and thereby δ, sufficiently small, we find that F(x(ū)) ≤ F₀ + ε. Therefore small changes in the functional F and in the state x correspond to arbitrarily large changes in the control function u(t), and the problem is ill-posed [T8].

We now want to construct regularizing algorithms that can be used to find the optimal control, that is, algorithms which yield minimizing sequences convergent to ū^(0)(t). We consider the smoothing functional

G^α(u) = F(x(u)) + αΩ(u),

where Ω(u) is a regularizing (stabilizing) functional such as

Ω(u) = ∫_{t₀}^{T} Σ_i [k₁(t)(u̇_i)² + k₀(t)(u_i)²] dt,  k₁(t) > 0, k₀(t) > 0.

The functional G^α(u) is non-negative, which guarantees the existence of its greatest lower bound G₀^α [T8]. Let us consider a decreasing number sequence α_k → 0 and controls u^{α_k}(t) for which

G^{α_k}[u^{α_k}(t)] ≤ G₀^{α_k} + α_k c,

where c is a constant independent of k.

If there exists a unique optimal control ū^(0)(t), and if this control is a smooth function, then the sequence of functions u^{α_k}(t) satisfying this condition converges uniformly to ū^(0)(t) [T8]. Indeed, it is clear that

G₀^{α_k} ≤ F(x(ū^(0))) + α_k Ω(ū^(0)) = F₀ + α_k c₀   (c₀ = Ω(ū^(0))).

This implies that

G^{α_k}(u^{α_k}) = F(x(u^{α_k})) + α_k Ω(u^{α_k}) ≤ G₀^{α_k} + α_k c ≤ F₀ + α_k(c₀ + c)

and that

α_k Ω(u^{α_k}) ≤ F₀ − F(x(u^{α_k})) + α_k(c₀ + c) ≤ α_k(c₀ + c),

since F₀ − F(x(u^{α_k})) ≤ 0. Hence

Ω(u^{α_k}) ≤ c₀ + c,

and the set of functions {u^{α_k}(t)} forms a compact family. Let a subsequence of {u^{α_k}(t)} converge uniformly to a function ū(t). It is clear that

lim_{k→∞} [F(x(u^{α_k})) + α_k Ω(u^{α_k})] = F(x(ū)) = F₀,

and since we assumed that the optimal control is unique,

ū(t) = ū^(0)(t).

If there exists at least one optimal control belonging to U, then every convergent subsequence of {u^{α_k}} converges to one of the optimal controls.

The above discussion remains valid if U contains a subset Ū which admits a new metrization ρ₁(ū₁, ū₂) such that S_c = {ū : ρ₁(ū, 0) ≤ c} is compact in U [T8]. In this case, setting

Ω(ū) = ρ₁²(ū, 0),

we obtain the convergence of the minimizing sequence provided there exists an optimal control ū^(0) ∈ Ū ([T5],[T8]).

Let us denote by Ū the set of elements ū ∈ U for which Ω(ū) is defined, and assume that Ū is a convex, complete set in the Hilbert norm whose square is Ω(ū).
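Before continuing with the existence and convergence argument, here is a small discretized sketch of the smoothing-functional idea. The scalar system ẋ = −x + u, the desired trajectory, and the weights k₀, k₁ are my own choices, not from the text; the point is only that, because G^α is quadratic in the discretized control, its minimizer is obtained from a linear system, and the stabilizer keeps the minimizing control bounded and smooth while the cost term is driven toward its infimum.

```python
import numpy as np

# A small sketch (my own discretization and data, not from the text) of the smoothing
# functional G^alpha(u) = F(x(u)) + alpha*Omega(u) with the stabilizer
# Omega(u) = int( k1*u'^2 + k0*u^2 ) dt.  The state obeys xdot = -x + u, F penalizes
# the tracking error of x alone, and the discretized G^alpha is quadratic in u, so
# its minimizer solves a linear system.
N, T = 200, 1.0
dt = T / N
t = np.linspace(dt, T, N)

# Euler map u -> x for xdot = -x + u, x(0) = 0:  x = L @ u  (lower triangular)
L = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        L[i, j] = dt * (1.0 - dt)**(i - j)

x_desired = np.sin(np.pi * t)                 # desired trajectory (arbitrary choice)

# First-difference matrix for the u' term of the stabilizer
D = (np.eye(N, k=1) - np.eye(N))[:-1] / dt

k1, k0 = 1.0, 1.0
for alpha in (1e-1, 1e-3, 1e-6):
    # Minimize  dt*||L u - x_d||^2 + alpha*dt*(k1*||D u||^2 + k0*||u||^2)  over u
    H = dt * (L.T @ L + alpha * (k1 * D.T @ D + k0 * np.eye(N)))
    g = dt * (L.T @ x_desired)
    u_alpha = np.linalg.solve(H, g)
    F = dt * np.sum((L @ u_alpha - x_desired)**2)
    print(f"alpha={alpha:.0e}  F(x(u_alpha))={F:.4e}  max|u_alpha|={np.abs(u_alpha).max():.2f}")
```

As α is decreased, the tracking term F decreases while the size of the minimizing control grows; this is the trade-off that the choice of α as a function of the data error must balance.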
If E, the set of elements of U on which R(u) is defined, is convex and is complete, there exists at least one func- tion ua(t)cfi for which we obtain the minimum of the func- tional 68(3) = F(x(u))+ ao(fi) (ago). In fact, let the sequence of minimizing functions u: converges uniformly to the function fi(t)eU. We will show that EtU and that Q(u%(t) - E(t)) + o (n+e) 98 To do this we need only prove that the sequence ua(t) n is fundamental: 6 R(un-ug) + 0 (n,m+w) . If this is not the case, there exist so such that )2t 0 6 6 R(un un+Pn for an infinite sequence of numbers. _ 6 _ 6 We set on — un un+pn and _ 6 6 En _ H(un+un+pn) ' so that = 6 _ = 6 an un 15¢)n un+pn + I¢n ' Since 3: + 3, it also follows that and since F(x) is a continuous functional, F(x(th - F(x(u:)) = n :- F(x(gn)) - F(x(u:+pn)) = n; e o. 99 Clearly, for the function an we have 6 6 6 6 G (En) Z G 3 G (un) 81 for n 3 n (el) , 0 F(x(gn)) + 6{Q(u%) - 9(u§,gn) + %Q(€n)} 3 F(x(u:))+ 60(ug) - El or 6(-Q(u:,€n) + %Q(£n)) 3 F(x(ug) - F(x(gn)) - e1 Z '51 'lngl ° Similarly, using the representation _ 6 E:‘n — un+pn + k¢n ' we obtain a n o(9(un+pn.§n) + ao(gn)) 3 -e1 - Innl , _ O._ (X _ _ _ n 6( mun un+pn,€n) + %Q(€n)) 3 261 Ingl Inn) or ) o) = F2(x(u(t)).xi(t).u(t)) @(Xi(t),u(t)) x(to) x0, xi(t0) = xi0 114 Now let's assume that there exists an unique Optimal solution, since we assumed the problem is algebraically well-posed, such as x*(t) 8*(t) = ( ) r 11* I Xi(t) J* = :(xi(t).u*(t)) (Notice that there is no constraint on xi(t)). Now, suppose we have changes in the input state vari- able xi(t) such that xi(t) z Xi(t) where there exists a large number M such that _ t ||xi(t) xi(t)I|L2 > M Since xi is independent of the control function u(t) then, for minimizing sequences {xn(t)} and {un(t)} we have: {un(t)} + u* (for Xi(t)) and llS * I {un(t)} + u (for xi(t)) However, for cost functional we have: Jn(xn(t).xi(t).un(t)) + J*(x*(t),xi(t).U*) and Jn(xn,xi(t),un(t)) + J*(x*(t),xi(t),u*) where ||J*(x*,xi(t),u*)- J*(x*,x'i(t),u*)||L > N and I II I Wt) Wt) I |S*-S'* = I - I >M L2 Xi(t) xi(t) L2 Therefore, even if we construct the minimizing se- quence; it is still possible that the state (S(t)) changes arbitrarily large due to large changes in xi(t) while the control u(t) remains the same (ill-posedness in the sense of Tikhonov and of Hadamard). Therefore, the problem is not regularizable. Special Case - Assume that the cost functional is independent of xi, then we have the following theorem 116 (the problem is still ill—posed since the changes in xi are still independent of the changes in u(t)). Theorem (5.2.A.3) - The ill-posed uncontrollable linear Optimal control problem given as (2) where J = f(x(t),u(t)) , is not regularizable, that is, the problem is always un— stable in the sense of Tikhonov and of Hadamard. Proof - Since xi (original or transformed state vari- able) is independent of u(t) we can again rewrite the system equations as before. Again we assume that there exists an optimal solution, i.e., there exists U*(t),X*(t),J*(U*(t)) - Let's assume that we can construct the minimizing se- quence such that {un} + u*, Jn = f(xn(t),un(t)) + J* = f(x*(t),u*(t)) and x x* s = n +S* n x x. 
117 Now let's assume that there exists a large number N such that xi(t) = xi(t) + N for all te(tl-T,t +T) 1 Since xi(t) is independent of u(t) everywhere including the interval (tl-I,tl+1), then there exists a T = T(6) such that ||J(S*',u*) - J(S*,u*)l|L2 = llf(x*,u*)-f(x*,u*)|lL2 = O and Imp, — nymllLZ = ||u* - u*ll = 0 whereas X*(t) x*(t) ||S'-SI| = sup||< . > - < )II = L2 xi(t) xi(t) L2 te(tl-T,tl+T) _. X*(t) _ X*(t) _ sup||(x.(t)+M) (xi(t))|lL2 ‘ M which is the instability in the sense of Tikhonov and of Hadamard. That is, by constructing an apprOpriate minimizing sequence the cost functional and the control converge to their Optimal values. However, the changes (arbitrarily large) in xi(t) does not effect on the cost 118 functional and the control and they remain at their optimal values while the state variable S(t) changes arbitrarily large due to changes in xi(t). Therefore, no changes in the control and cost func- tional corresponds to possible large changes in the state variable which is ill—posedness in the sense of Tikhonov and of Hadamard, and the problem is not regular- izable. 5.2.B. Non-Linear thimal Control_§ystem The non—linear optimal control problem is given as: Find the extremum of the following function J = ¢(x(t),u(t)) (3) subjected to x(t) = g(xIt).u(t)) (4) x(tO) = x0 and satisfies the conditions (i), (ii), (vi) of Section (5.2.A) plus the following conditions: 119 (i)' x(t) is an n-dimensional vector valued func- tion where x(t) = (xl(t),...xn(t))T. (ii)' 9 and ¢ are continuous functions of x and u on an interval t0 i t i T and differentiable with respect to x(t), u(t), t at least once on an interval t i t i T. 0 If we solve the state equation, we obtain: x(t) = x0 + ftt g(X(u(T));U(T))dT 0 X1 = x(T) »= x0 + ItTOng>,u(T)>dT. (5) We have presented in Section (5.1) the general defini- tion of "controllability" for the general optimal control problem. For the case of non-linear systemsvmacan also use the following definition for the "controllability" of the non-linear optimal control problem. Definition: The state x0 is said to be "controllable" if for given x0 and x1, there exists a T < m and a control u(t)eU1 (class of admissible controls) CU such that Equa- tion (5) holds. If there is no such T or u then the non-linear system is "uncontrollable". 120 There are two cases when the non-linear optimal control problem is "uncontrollable". Case 1 - At least one of the state variables xi is independent of u, then we can present the following theorems: Theorem 5.2.B.l - The uncontrollable non—linear optimal control problem, given as (3), (4) and (5), with at least one state variable be independent of control u is topo- logically ill-posed. Proof — The proof of this theorem is completely analogous to the proof of Theorem (5.2.A.l). Theorem (5.2.8.2) - The ill-posed uncontrollable non- linear optimal control problem given as (3), (4) and (5), with at least one independent (of control u) state vari- able, is not regularizable. Proof - The proof of this theorem is completely anal- ogous to the proof of the theorem (5.2.A.2). Case 2 - All the state variables are directly or in- directly dependent on the control u. In this case, since theljxmnurtransformation can not be imposed to non-linear system, therefore, the dependence of the state variables on control u may be present while the system is uncon- trollable. However, we can divide this case into two 121 subcases: (a) The control function u(t) is bounded. 
In this case suppose |u(t)| < M , then h(U(t)) = g(X(u(t)).u(t)) and therefore I Q? h(u(T))dTl < N Now choose where N 0 such that the pair (A(u), B(u)) is controllable for all us [0,u* J([CZ],[K1]). However in general the controllability of the perturbed system for u > 0 does not guarantee the controllability of the un- perturbed system. If we use a Jordan form transformation 3': = T(IJ)X with T(u) nonsingular and bounded for u 3 0 and small, the perturbed system (5.3.1) becomes [C2] >2 = T>~< + T(u)B(u)u (5.3.2) 126 where Jl(u) 0.....0 -l T(u)A(u)T (u) = Q J2(u)---0 0 0...Jk(u) and -8101). T(U)B(U) = 32m) -Bk(u)( where Ji(u), i = l,2,...,k, are Jordan blocks [C2]. The unperturbed system of (5.3.2) is also the Jordan form of the unperturbed system (because T(u) is continuous with respect to u). Now if we eliminate the weak connections, then if the eigenvalues of the Jordan blocks Ji(u) and Jj(u) differ only by a function of u, then Ji(0) and Jj(0) will have the same eigenvalues [C2]. If the last rows of the Bi(0) corresponding to Ji(0) having the same eigenvalues are linearly dependent then the unperturbed system is not controllable [C2]. 127 Definition: The perturbed system (5.3.1) is said to be weakly controllable if it loses its controllability when the weak connections are removed (notice that the unperturbed system of weakly controllable system can be structurally controllable and it regains its controllability by a small perturbation) [C2]. Definition: The perturbed system (5.3.1) is said to be strongly controllable if the unperturbed system is con- trollable [C2]. Therefore the perturbed system will lose its control- lability without weak connection if it is weakly control- lable. Theorem (5.3.A.l) - The linear, time—invariant per- turbed system given as Equation (5.3.1) is ill-posed if the pair (A(u), B(u)) or (A(O), B(O)) is uncontrollable pair. grog: — The proof is the direct result of the Theorem (5.2.A.l). If (A(u), B(u)) is uncontrollable pair then the perturbed and unperturbed systems are both uncontrol- lable and according to the Theorems (5.2.A.1) and (5.2.A.2) the problem is topologically ill-posed and unregularizable. On the other hand, of (A(u), B(u)) is controllable pair but (A(O), B(O)) is uncontrollable pair, then the problem is weakly controllable and the solution x(u) is independent of control u at u = 0, therefore the solution is not stable (in the sense of Tikhonov and of Hadamard) for all u, and 128 the problem is ill-posed in the sense of Tikhonov and of Hadamard (Theorem 5.2.A.l). 5.3.B. Linear, Time-Invariant Singularlnyerturbed System Let's consider the controllability of a singularly per- turbed system as u + 0 with respect to its slow and fast subsystems. What is meant by slow or fast modes is, if the eigenvalues of matrix A are in two groups i and ii where (l) Eigenvalues close to origin, resulting in slow modes of the system. (2) Eigenvalues far to the left, resulting in fast modes of the system. A linear time-invariant singularly perturbed system is modeled as y = A11(u)y + A12(u)z + Bl(u)u where i) y is n1 x 1 , ii) 2 is n2 x l , iii) u is m x l , iv) A22 is nOn singular. 129 This system possesses slow modes with n1 small eigen- values and fast modes with n2 large eigenvalues of magni- tude O(%) (assume that the transient of the fast modes is instantaneous) [C2]. 
Now let u = 0, then for slow modes we have y5 = A11Ys + A1225 + Blus (5.3.8.2) 0 = AZlys + A2225 + 82us Now let s = ys, then s = ASS + BsuS (5.3.8.3) A = A - A A'lA s 11 12 22 21 B = B - A A'lB s 1 12 22 2 Now for fast modes assume that the slow modes are con- stant during the fast transient period and the perturba- tions in A22(U) and 82(u) are small, the fast subsystem is Obtained from (5.3.8.2) as f + 8 u (5.3.8.4) “f = A22 2 f where f is the fast part in z and uf is a control of fast 130 variables only (uS is a control of slow variables). Now if u # 0, then the fast subsystem is controllable iff the pair (A22,82) is controllable. A Since the eigenvalues of A0 and —%£ are far apart for small u we can say: Definition: If A22 is non-singular and if the fast subsystems and slow subsystems are controllable then, there exist u* > 0 such that the singularly perturbed system is controllable for all ue(0,u*) (the controllability of the singularly perturbed system does not necessarily require the controllability of the fast and slow subsystems) [C2]. Definition: The singularly perturbed system is said to be weakly controllable if it loses its controllability as u + 0, and strongly controllable if it maintains its con- trollability as u + 0 [K1]. Definition: The singularly perturbed system is strongly controllableiff its fast and slow subsystems are control- lable ([C2],[Kl]). Theorem (5.3.8.2) - The linear, time-invariant singu- larly perturbed system given as (5.3.8.1), (5.3.8.2), (5.3.8.3) and (5.3.8.4) is ill-posed if either the pair (AS,BS) or (A22,82) is not controllable. Proof - If the pair (AS,BS) is not controllable, then the slow modes are not controllable and at least one state 131 variable corresponding to slow modes (si) is independent of the control us, and therefore the changes in si does not depend on changes inlnsand the problem is ill-posed (Theorem (5.3.A.1)). On the other hand if (A22,82) is not controllable (the fast modes are not controllable), then at least one state variable (fi) corresponded to the fast mode is independent of the control uf and the changes in fi do not follow the changes in u and the problem is ill-posed (Theorem f: 5.3.A.l). Therefore in the case of weakly controllable system the numerical computation of perturbed solution (near u = 0) or the numerical computation of fast or slow solutions are unstable (in the sense of Tikhonov or of Hadamard) and the problem is ill-posed (in the sense of Tikhonov or of Hada— mard). Remark. When we say si (or fi) is independent of control us (or uf), we mean the slow (or fast) state vari- able in original or transformed slow (or fast) subsystem. CHAPTER 6 ILL-POSEDNESS OF TIME-OPTIMAL REGULATOR PROBLEM 6.1 - Definition of Linear Time-Optimal Regulator Problem In this chapter we shall investigate the ill-posedness of problems in which the objective is to transfer a linear time-invariant system from an arbitrary initial state to a specified target set in minimum time. The minimum time required to reach the target value (or set) will be denoted by T*. Mathematically, our problem is to transfer a linear time- invariant system x(t) = Ax(t) + 8u(t) , where A and 8 are constant nxn and nxr matrices, from an arbitrary initial state to the final state xf = x(T) 132 133 and minimize the finctional J(u) Typically, the control variables may be constrained by requirements such as |ui(t)| i l, i=l,2,...,r, te[t0,T*]. We shall refer to this problem as the "stationary", linear regulator, minimum-time problem. 
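As a concrete preview of the bang-bang structure derived below via the minimum principle, the following sketch uses the classical double-integrator example (my own choice, not worked in the text at this point): it simulates the well-known time-optimal feedback and compares the resulting transfer time with the analytic minimum time.

```python
import numpy as np

# A quick sketch (double integrator, my own example) of the bang-bang structure of
# the stationary minimum-time regulator: for xdot1 = x2, xdot2 = u, |u| <= 1, the
# classical time-optimal feedback switches on the curve x1 = -0.5*x2*|x2|.  The
# simulation drives an initial state to a small neighborhood of the origin and
# records the elapsed time.
def u_opt(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)          # switching function
    if abs(s) > 1e-9:
        return -np.sign(s)
    return -np.sign(x2) if x2 != 0 else 0.0

x = np.array([2.0, 0.0])                 # initial state
dt, t = 1e-4, 0.0
while np.linalg.norm(x) > 1e-2 and t < 20.0:
    u = u_opt(x[0], x[1])
    x = x + dt * np.array([x[1], u])     # Euler step of the double integrator
    t += dt

# For x0 = (2, 0) the analytic minimum time is 2*sqrt(2) ~ 2.83
print(f"reached ||x|| < 1e-2 at t = {t:.3f}  (analytic T* = {2*np.sqrt(2):.3f})")
```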
According to theorems proved by Pontryagin, we have:

Theorem 1 - If all the eigenvalues of A have nonpositive real parts, then an optimal control exists [A2].

Theorem 2 - If an extremal control exists, then it is unique [A2].

Therefore, Theorems 1 and 2 guarantee "algebraic" well-posedness.

We let λ₁, λ₂, ..., λₙ denote the eigenvalues of the system matrix A, and we let b₁, b₂, ..., b_r denote the column vectors of the input matrix B, i.e.,

B = [b₁  b₂  ...  b_r].

Our approach will be to use the "minimum principle" to determine the optimal control law. The Hamiltonian is

H(x(t), p(t), u(t)) = 1 + pᵀ(t)Ax(t) + pᵀ(t)Bu(t),

where ᵀ denotes the transpose of the matrix. The optimal state x*(t) and the optimal costate p*(t) are solutions of the equations

ẋ*(t) = ∂H(x*(t), p*(t), u*(t))/∂p*(t) = Ax*(t) + Bu*(t),

ṗ*(t) = −∂H(x*(t), p*(t), u*(t))/∂x*(t) = −Aᵀp*(t),

with the boundary conditions

x*(t₀) = x₀,  x*(T*) = x_f.

The "minimum principle"

1 + p*ᵀ(t)Ax*(t) + p*ᵀ(t)Bu*(t) ≤ 1 + p*ᵀ(t)Ax*(t) + p*ᵀ(t)Bu(t)   (6.1.1)

holds for all admissible controls u(t) ∈ U and for t ∈ [t₀, T*]. Equation (6.1.1) yields, in turn, the relation

p*ᵀ(t)Bu*(t) ≤ p*ᵀ(t)Bu(t),

or

u*_j(t) = −1 if ⟨b_j, p*(t)⟩ > 0,
u*_j(t) = +1 if ⟨b_j, p*(t)⟩ < 0,
u*_j(t) = undetermined if ⟨b_j, p*(t)⟩ = 0,

that is,

u*_j(t) = −sgn{⟨b_j, p*(t)⟩},  j = 1, 2, ..., r.

The case where ⟨b_j, p*(t)⟩ = 0 on a nontrivial time interval is called the singular case. We study the linear time-invariant time-optimal regulator problems for the singular and non-singular cases separately.

6.2 - Ill-Posedness of Singular Linear Time-Invariant Time-Optimal Regulator Problem

In Section 6.1 we obtained the singularity condition for the linear time-invariant time-optimal regulator problem as

⟨b_j, p*(t)⟩ = 0.

Now we consider three cases:

Case 1 - We assume p*(t) = 0 in order to have ⟨b_j, p*(t)⟩ = 0. If p*(t) = 0, then

H(x*(t), p*(t), u*(t)) = 1.

Since the final time is free, the Hamiltonian must be zero, and this would yield a contradiction. Therefore p*(t) cannot be zero.

Case 2 - We assume b_j = 0 in order to have ⟨b_j, p*(t)⟩ = 0. In this case the system is completely independent of u_j, that is, u_j does not have any effect on the system.

Case 3 - We assume b_j ≠ 0 and p*(t) ≠ 0, but ⟨b_j, p*(t)⟩ = 0. According to the costate equation we have

ṗ*(t) = −Aᵀp*(t).

Without loss of generality we can assume t₀ = 0; then

p*(t) = e^{−Aᵀt} p*(0).

Now assume that, on some interval [T₁, T₂],

M_j(t) ≡ ⟨b_j, p*(t)⟩ = ⟨b_j, e^{−Aᵀt} p*(0)⟩ = ⟨e^{−At} b_j, p*(0)⟩ = 0.

Then all derivatives of M_j vanish there as well:

M_j(t) = Ṁ_j(t) = ... = M_j^{(n−1)}(t) = 0,

and, differentiating,

Ṁ_j(t) = −⟨e^{−At} A b_j, p*(0)⟩ = 0,
⋮
M_j^{(n−1)}(t) = (−1)^{n−1} ⟨e^{−At} A^{n−1} b_j, p*(0)⟩ = 0.

Therefore the following relations must be satisfied for all t ∈ [T₁, T₂]:

⟨e^{−At} b_j, p*(0)⟩ = 0
⟨e^{−At} A b_j, p*(0)⟩ = 0   (6.2.1)
⋮
⟨e^{−At} A^{n−1} b_j, p*(0)⟩ = 0.

Let E_j be the n×n matrix defined as

E_j = [b_j  A b_j  ...  A^{n−1} b_j];

then we can write equation (6.2.1) as

E_jᵀ e^{−Aᵀt} p*(0) = 0  for all t ∈ [T₁, T₂],

or, equivalently,

E_jᵀ (e^{−Aᵀt} p*(0)) = 0  for all t ∈ [T₁, T₂].

Since e^{−Aᵀt} is non-singular (e^{−Aᵀt} is the fundamental matrix) and p*(0) ≠ 0 (according to our discussion in Case 1), there exists a nonzero vector

p*(t) = e^{−Aᵀt} p*(0)

such that

E_jᵀ p*(t) = 0.

Therefore we conclude that the matrix E_j must be singular, that is,

det E_j = 0,

and thus we state the following theorem:

Theorem 3 - The linear time-invariant minimum-time regulator problem is "singular" if and only if, for some j, j = 1, 2, ..., r, the matrix E_j, given by

E_j = [b_j  A b_j  ...  A^{n−1} b_j],

is singular.

Proof - We have already proved the necessary condition: if the problem is singular, then det E_j = 0. Now let us prove the sufficient condition, that is, let us prove that if

det E_j = 0,  b_j ≠ 0,  p*(t) ≠ 0,

then the problem is singular. If det E_j = 0, then we have

rank E_j = rank[b_j  A b_j  ...  A^{n−1} b_j] < n.
Therefore the columns of the matrix E_j are linearly dependent, and there exists a nonzero vector V such that

Vᵀb_j = VᵀAb_j = VᵀA²b_j = ... = VᵀA^{n−1}b_j = 0.   (6.2.1.1)

By the Cayley-Hamilton theorem,

Aⁿ = c₁A^{n−1} + c₂A^{n−2} + ... + c_nI,

where c₁, c₂, ..., c_n depend upon a_ij, the elements of the matrix A. Therefore we can write

VᵀAⁿb_j = c₁VᵀA^{n−1}b_j + c₂VᵀA^{n−2}b_j + ... + c_nVᵀb_j.

Using equation (6.2.1.1) we obtain

VᵀAⁿb_j = 0.   (6.2.1.2)

We also have A^{n+1} = A·Aⁿ = c₁Aⁿ + c₂A^{n−1} + ... + c_nA, so, using equations (6.2.1.1) and (6.2.1.2), we get

VᵀA^{n+1}b_j = 0.

Therefore, by induction, we have the following general equation:

VᵀA^{n+k}b_j = 0  for k = 0, 1, 2, ...   (6.2.1.3)

Now let us consider the following relation:

Vᵀe^{−At}b_j = Vᵀ(I − At + A²t²/2! − ... + (−1)ⁿAⁿtⁿ/n! + ...)b_j = Vᵀ(Σ_{n=0}^∞ (−1)ⁿ Aⁿtⁿ/n!)b_j.

Using equations (6.2.1.1) and (6.2.1.3), we obtain

Vᵀ(Σ_{n=0}^∞ (−1)ⁿ Aⁿtⁿ/n!)b_j = Σ_{n=0}^∞ (−1)ⁿ (VᵀAⁿb_j) tⁿ/n! = 0.   (6.2.1.4)

However, we know that

Σ_{n=0}^∞ (−1)ⁿ Aⁿtⁿ/n! = e^{−At}.

Therefore we may rewrite equation (6.2.1.4) as

Vᵀe^{−At}b_j = 0,

or

⟨e^{−At}b_j, V⟩ = 0,

or

⟨b_j, e^{−Aᵀt}V⟩ = 0.   (6.2.1.5)

The vector V is a nonzero vector and, since the initial conditions of the costate equations can be chosen arbitrarily, we choose

V = p*(0),

where p*(t) is the solution of the adjoint equation

ṗ*(t) = −Aᵀp*(t),  p*(0) = p₀,

whose solution is

p*(t) = e^{−Aᵀt}p₀.

Therefore equation (6.2.1.5) becomes

⟨b_j, e^{−Aᵀt}p*(0)⟩ = ⟨b_j, p*(t)⟩ = 0,

which is the condition of singularity, and the problem is singular. Therefore we have proved that the linear time-invariant time-optimal regulator problem is singular if and only if

det E_j = 0.   (6.2.1.6)

Remark 1. If the problem is singular by the condition b_j = 0, then

det E_j = det[b_j  Ab_j  ...  A^{n−1}b_j] = det[0  0  ...  0] = 0,

and therefore condition (6.2.1.6) is the general condition of singularity.

Remark 2. If the system has only one input (a single-input system),

ẋ(t) = Ax(t) + bu(t),   (6.2.1.7)

where b is a constant n×1 column vector, then the condition of singularity (equation (6.2.1.6)) becomes

det E = det[b  Ab  ...  A^{n−1}b] = 0.   (6.2.1.8)

Equation (6.2.1.8) is the uncontrollability condition for system (6.2.1.7). That is, the single-input linear time-invariant time-optimal problem is singular if and only if it is uncontrollable.

Remark 3. For the general case of the linear time-invariant time-optimal regulator problem, where u is an r×1 vector, the singularity condition det E_j = 0 does not necessarily imply the uncontrollability condition

rank E = rank[B  AB  ...  A^{n−1}B] < n.

Therefore the multiple-input linear time-invariant time-optimal regulator problem can be singular while it is controllable (Example (6.4.1)).

Now let us present the following theorems concerning the ill-posedness of the singular linear time-invariant minimum-time regulator problem.

Theorem (6.2.1) - The singular linear time-invariant minimum-time regulator problem is ill-posed in the sense of Tikhonov and of Hadamard.

Proof - We investigate two cases:

Case 1 - Single-input system. In this case, as we mentioned in Remark 2, the system is uncontrollable, and therefore, according to Theorem (5.2.A.1), the problem is topologically ill-posed.

Case 2 - Multiple-input system. In this case, since the problem is singular, there exists at least one matrix E_j for which

det E_j = det[b_j  Ab_j  ...  A^{n−1}b_j] = 0,

and therefore at least one state variable is independent of the control u_j in the original, canonical, or combined form (as we discussed in Section (5.2.A); see also Example (6.4.2)). Therefore, according to Theorem (5.2.A.1), the problem is topologically ill-posed.
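The singularity test of Theorem 3 and the distinction drawn in Remarks 2 and 3 are easy to check numerically. The sketch below uses a hypothetical pair (A, B) of my own choosing: it is controllable (the Kalman matrix has full rank) and nevertheless singular with respect to its first input, since det E₁ = 0.

```python
import numpy as np

# Numerical check (my own example system) of the singularity test of Theorem 3 and
# of Remarks 2-3: the problem is singular iff det E_j = det[b_j, A b_j, ..., A^{n-1} b_j]
# vanishes for some input column b_j, while uncontrollability requires
# rank[B, AB, ..., A^{n-1}B] < n.  The pair below is controllable yet singular (j = 1).
def column_matrices(A, B):
    n = A.shape[0]
    powers = [np.linalg.matrix_power(A, k) for k in range(n)]
    E = [np.column_stack([powers[k] @ B[:, j] for k in range(n)]) for j in range(B.shape[1])]
    C = np.column_stack([powers[k] @ B for k in range(n)])      # Kalman controllability matrix
    return E, C

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.eye(2)

E, C = column_matrices(A, B)
for j, Ej in enumerate(E, start=1):
    print(f"det E_{j} = {np.linalg.det(Ej):+.1f}")
print("rank of [B AB ... A^(n-1)B] =", np.linalg.matrix_rank(C), "with n =", A.shape[0])
```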
147 Theorem (6.2.2) - Ill-posed singular linear time- invariant minimum-time regulator problem is unregularizable. Proof - Since the singular linear time-invariant mini- mum-time regulator problem is either uncontrollable (single input system) or independent of at least one control func- tion, uj, (multiple input system), then, according to Theorem (5.2.A.2) or Theorem (5.2.A.3) it is unregularizable and the solution is always unstable in the sense of Tikhonov or of Hadamard. Therefore, if the linear time-invariant minimum time optimal regulator problem is "singular", then it is also ill-posed (in the sense of Tikhonov and of Hadamard) and unregularizable (Example 6.4.3). 6.3 - Ill—Posedness of Non-Singular Linear Time Invariant Time-Optimal Regulator Problem We showed in Section (6.1) that, if the linear time- invariant time-optimal regulator prob1em is "non-singular" then we have n-l det E. = d t b. Ab. ... A b. 0 3 e [ j j 3] ¥ for all j=l,2,...,r rank Ej = n Therefore 148 rank E = rank [8 AB ... A 8] = since bi' Abi,...,An-lbi are linearly independent (i=l,2, ...,r). Therefore the "non-singular" linear time-invariant time-optimal regulator problem is always "controllable". Now consider the following theorem: Theorem (6.3.1) - The linear time-invariant minimum- time regulator problem is always ill-posed in the sense of Tikhonov and of Hadamard. Proof - We have two cases: (l) Singular (uncontrollable) problem which we have already proved, in Section 6.2, that is ill-posed and un- regularizable. (2) Non-singular (controllable) problem for which we have g(t) = Ax(t) + 8u(t) 149 i I, u(t)€UCL2 Let's suppose that the minimizing sequence(fifi}has the form [fi {.18 I I: I- + £3 niI .<_ 1. lugl _<_ 1 where u* is the optimal control. Since the problem is non- singular, then the Optimal control can only have two pos- sibilities (Section 6.1) u: = :1 for i=l,2,...,r. Therefore Since we choose an as a minimizing sequence, therefore we have (for fi ): n However an must satisfy the condition 150 llfin_|| = IIun, i1|l:l|u,,,|| +1. 1 l 1 Now we want to prove that, there exists at least one minimizing sequence fin for which x + x* n and J + J* n but ~ * un f u Without loss of generality we assume that the system is a single input system (u is scalar). Let's consider un as follows: +1, 2k 2 < t < (2k+1) E n — n un = (6.3.1.1) -1, (2k-l) 3 < t < 2k 1 n — — n and also assume fi = u + u* = u :1 . 151 The Optimal solution of linear time—invariant single input system x(t) = Ax(t) + bu(t) can be obtained from the Equation: x(t) = eAtxO + eAtflfe-ATbu(T)dT. (6.3.1.2) The final state is given as x(T) = xf The Optimal time T* can be obtained by solving the follow- ing equation for T*: AT* AT* e =x(T*)=e x + *— 0 T e AT [0 bu*(r)d1 (6.3.1.3) where u*(T) can be either 1 or -1 since the problem is "non- singular". Now if we apply an as the input control we get: At eAtfte-AT 2 (t) = e x + n 0 0 bun(T)dT . At the Final time t = T we have 152 - _ AT AT T -AT - xn(T) — e x0 + e 1") e bun(T)dT where lim xn(T) = xf . n-mo Therefore we have . ~ _ _ . AT . AT T -A1 - lim xn(T) — xf - lim e x0 + lim e &)e bun(T)dT n+oo n-mo n+oo or x = eATx + eAT[lim fTe-ATfi dT]b f 0 0 n n+oo If we substitute fin, then we get _ AT AT . T -AT * xf — e x0 + e [1im 4)e (un+u )dT]b n+oo or xf = eATx0 + eAT &?e-ATbu*(T)dr + eAT[lim 4?e-ATun(T)dT]b n+oo (6.3.1.4) 153 Now let's consider only the term lim 4? e-ATun(T)dT n+oo in Equation (6.3.1.4) for Let's substitute e-AT as m . i e.AT = Z (-1)1 (AR) , i=0 1° then we have: m . 
i 1im fTe'A'u (1)61 = limf T 2 (-1)1 15:1— 0 n 0._ i! n+m n+w 1—0 i! w . i . - z (-1)1 fi- 1im (FTlun(T)dT . = O n+oo Let's consider only the term lim (frlun(1)dT n—mo of Equation (6.3.1.5). Substitute un from (6.3.1.1) in (6.3.1.6) un(T)dT which un is given as (6.3.1.1). (6.3.1.5) (6.3.1.6) 154 HT/n i 2T/n_ 1im fTrlu (1)61 = lim[f d +1;r idT + . . 0 n 0 T/n n+oo n+oo nT/n . n+1 i J + ( 1) f(n-l)T/nT dT or _ i+l _ 2: 1+1 1 T i+l 1im fOT T lun (T)dT lim[l+l(% ) i+l(n ) + 1:1(n) n-mo n-mo _ n+1 l E: i+l _ n+1 l (n—l)T i+1 = '+1 1+1 - _ (T)1 l . 2 . _ n+1 n 1+1 - 17‘ [11m —-——1-;T - 11m (H) + 11m ( l) ((5’) n+oo (n) n-HJO n+oo n-l i+l _ (-;—) )1 - '+l - - (T)l . _ n+1 n 1+1 _ n _ 1 1+1 _ — —i:1—- 111T! ( l) ((3’) ('5 n) ) - n+oo '+l - (ml * . _ _ 1 1+1 = T (_l)llm (l (1 n) ) n+0) 155 0+1 . '—'_1:1—— il)i::(l 1+( 1) ';i:l + terms With degree of n in Denominator > degree of n in Numerator) = i+1 I=.£%%jf—(:1)(lim terms with n-k) for k 3.1 n+oo (T)i+l 1+1 (:1) (0) = 0 . Therefore 11m (frlun(1)dr = o n+oo and Equation (6.3.1.5) becomes lim Q? e-A n+oo Tun(T)dT = o . (6.3.1.7) Substituting (6.3.1.7) in (6.3.1.4), we obtain: x = eATx0 + e.AT &? e-ATbu*(T)dT (6.3.1.8) which has exactly the same form as Equation (6.3.1.3). 156 Therefore if we solve (6.3.1.8) for T then and thus an is a minimizing sequence for which the state in converges to its final value, xf, in minimum time T*. Now let's prove that an does not converge to its Optimal value u*, and therefore the non-singular (controllable) linear time-invariant minimum-time regulator problem is topologically ill-posed in the sense of Tikhonov. Let's measure Ilfin - u*l] in L2 metric: llfin - u*llL = IlunllL 2 where un is given as (6.3.1.1). Therefore _ T 2 _ T/n 2T/n T/n Ilunlle — f0 (un) dt — f0 dt + fT/n dt + f(n-l)T/ndt T _ IlunllL — fodt — T lim||u II = lim||fi -u*|| = T f 0 n+oo 1'1 L2 n+oo n Therefore, the minimizing sequence fin does not converge to its Optimal value u* as n+w for te[O,T], and the problem 157 is topologically ill-posed in the sense of Tikhonov. Therefore, the linear time-invariant minimum time regulator problem is always topologically ill-posed (at least one minimizing sequence does not converge to its Op- timal value), see Example (6.4.4). In previous Section (6.2) we showed that if the linear time—invariant minimum-time regulator problem is singular (uncontrollable) then it is ill-posed and unregularizable. That is, we cannot construct any minimizing sequence which converges to its optimal value, and the solution always remains "unstable" in the sense of Tikhonov and of Hada- mard. Now we want to prove that the linear time-invariant minimum time regulator problem, which is ill-posed, is regularizable if it is nonsingular. That is, we can al- ways construct the minimizing sequence which converges to its Optimal value even though the original linear time- invariant nonsingular (controllable) minimum-time regulator problem is ill-posed. In order to prove the regularizability of linear time-invariant non-singular (controllable) minimum-time regulator problem, let's first present the following definitions and background even though we have mentioned some in Chapter 4. Definition (6.3.1). Stabilizing Functionals. Let R(u) denote a continuous nonnegative functional defined on a subset U1 of U which is dense in U. Suppose that u* 158 belongs to the domain of definition of 9(u); where u* is the unique solution of the Optimal control problem. 
We also assume that, for d > 0 there exists a set of elements ueU such that 1 R(u) i d is a compact subset of U We shall refer to the functional 1. 0(u) possessing these properties as a "stabilizing func- tional". Definition (6.3.2). Ul is "s compactly continuously convexly embedded" in U if: (i) The balls Sr E {u; ugUl, ||u||:r} are compact in U. (1)}, (ii) For any two sequences {un {ué2)} of points in U such that (l),u(2)) + O as n+m , pF(un n we have (11(1) 0F n ’gn) + 0 as n+m 159 and OF(UAZ):€n) + 0 as n+m , Where an = 0.5(uél) 1 uéz)) . Definition (6.3.3). pl(ul,u2) majorizes the metric of the space U. The measurement pl (ul,u2) is said to majorize the metric of the space U if, for all ul, uzeUl 91(u1'u2) 3- 0F(u1'u2) where pF(ul,u2) = Max | ul(t) - u2(t)| . t€[tOIT] Definition (6.3.4). Regularization of controllable (non-singular) linear time-invariant minimum-time regulator problem. Consider the following linear time-invariant minimum- time regulator problem: 160 x(t) = Ax(t) + Bu(t),x(t0) = x 0 _ T J - f dt = T - t (6.3.4.1) t0 0 |ui(t)| i 1, ui(t)eL2 where xT(t) = [xl(t), x2(t), ... xn(t)] is an n-dimensional vector valued function defined on an interval t i t i T, x0 is a given vector, and u(t) = [ul(t) ... ur(t)] is an r-dimensional vector-valued function with range in an r- dimensional metric space U. We have proved that (Theorem 6.3.1) the non-singular (controllable) system (6.3.4.1) is topologically ill-posed. Now we want to apply Tikhonov's regularization technique in order to obtain the convergence (to its optimal) minimizing sequence. To apply the regularization method, it is sufficient to give an algorithm for constructing the minimizing sequence {un} which converges to an element u*, the unique optimal control of the system (6.3.4.1). Notice that for this problem the class of admissible con- trols, U is the space of functions of a single variable t 1! with the "uniform" metric and J is (obviously) a nonnegative continuous functional. To construct a minimizing sequence {un(t)} that con- verges to the function u*, consider the smoothing functional 161 Ba(u) = Tu - to + aQ(u) where R(u) is a "stabilizing" functional as we defined in Definition (6.3.1). Since a and R(u) are nonnegative and Tu > to, it is obvious that 8a(u) is also nonnegative. Therefore, it has a greatest lower bound on U which we call 83(u): Bg(u) = inf B“(u) neU Let's define Ul a subset of U that admits a metriza- tion pl(ul,u2) majorizing the metric of the Space U (Defi- nition (6.3.3)). where 30 is a fixed element of U1 and define B§(u) as B? (U) = Tu - to +00% (11,110) . We assume that {an} is a decreasing sequence of positive numbers that converges to zero. Let {ua (t)} denote a se- k quence of controls belonging to U1. Now consider two cases: Case 1 - UlCU where U is a "complete" m-dimensional 162 metric space, then: Theorem (6.3.2) - If the controllable (non-singular) linear time-invariant minimum-time regulator problem has a unique solution u*eUl, where u* = Eb or very close to no, i.eO ||u* - H < 6 OII where 6 is very small, then the sequence of functions [ua (t)] satisfying the condition k converges in the metric U to u*. Proof — We have (1k Gk Bl (“ak) = Tu ‘ to + ale(ua ) i B01 + “kc 0k k or _ *_ * Tu t +ak91(ua )gr t0+6knl(u )+akC . (6.3.2.1) ak 0 k 163 Since T > T* therefore T - T* = a > 0 uak uak k and we have for all k + a ) i a (u*) + a ak le(uak le kc or * ale(uak) : del(u ) + akC or (6.3.2.2) where d0 = constant independent of ck = 91(u*)+c. 
Therefore, all the elements of the sequence {ua }belong k to the compact set Ed where O Ud = (u; ueU 0 Ql(u) : do} (6.3.2.3) ll since 01 is "stabilizing functional therefore the set 01(ak) i do is compact. It is also obvious from (6.3.2.1), since a approaches k to zero as k + m then if we let k + m (6.3.2.1) becomes 164 lim[T k+oo (u )JElim[T*+ale(u*)+akc] (6.3.2.4) +0 0 nak k l Gk k+°° but we have 1imale(ua ) + 0 (ak+0 as k+w) I k+w k iimakfil(u*) + 0 (ak+0 as k+m) +m and limakc + 0 (ak+0 as k+w) k+oo Therefore (6.3.2.4) becomes lim T < 1im T* = T* k+oo Otk — k+oo and consequently 1im Tu + T* . k+oo (1k Therefore the sequence {ua } is a minimizing sequence. k Since we showed that {ua } belongs to the compact set k Ud (6.3.2.3), then it is "regularized" and consequently, 0 165 {u } + u . Ok 0 That is, since 91(ua ) is "stabilizing" function, then k 2 _ and the set 6d is also compact where 0 _ _ . 2 ... Udo — {u. ueUl, ol(u,u0) _<_ do} where and therefore Case 2 - U is Hilbert space and U is also "s-compactly l l continuously convexly embedded" in U as we defined in Defi- nition (6.3.2). Theorem (6.3.3) - If U1 has been defined as Case 2, then there exists an element ua(t)€Ul 166 that minimizes the functional 8%(u), where B‘i‘m) =T -t u 0 + a01(u) (21(11) = (€01.50) and 110 = u* or very close to u*, i.e., ||u* - E < 6 for very small 6 , 0II and converges to the Optimal value u*. Proof - Assume that {un} is a sequence minimizing a functional 8%(u) such that, the sequence {8$(un)} converges to 881 where a _ . a 801 — inf 8 (u) . ueUl In this case Bg(un) is Obviously a decreasing sequence, EU such that therefore for every n there exists u l 1 or u 0 T 1 - t + 001(ul) 3 Tun - t0 + 601(un) . (6.3.3.0) 167 Since Tu Z T*, where T* is an element minimizing the func- n tional J = T - to, then we can rewrite (6.3.3.0) as: * Tul + dQl(ul) Z T + an(un) or _. * Tul T 01(un) : 01(ul) +1 a T -T* '11 where —7r——— is a positive constant. Therefore, we have 01(un) i c (6.3.3.1) where the constant T -T* u1 C = Ql(ul) +' a is independent of n. If we substitute 01(un) in (6.3.3.1) we get (un,u0) 5. c 2 01 Since Ul is a Hilbert space then we have 168 2 - 2 01(unlu0) = Ilun - uoll ° Thus, - 2 Ilun - no!) < C I or [Inn - u0|| < /E or Hun _ E0H < C. and therefore (un}€Sr . However, we assumed that the balls Sr are compact in U thus {un} will converge to some limit point; denote this limit by EeU, and we show now that it converges strongly to an element u of 01' In order to prove this, let's show that u is "fundamental" in U1, i.e., for e > 0 there exists n(e) such that ||u - u n+p nII : e for n Z n(€) and p > 0 . 169 Let's suppose that this is not the case; therefore there exists so and numerical sequences {nk} and {mk}, where mk = nk + p such that k k Define g - O 5 (u - u ) k mk 11k and A = 0.5(u +u ) = u + g = u - g . (6.3.3.2) k mk nk nk k mk k . 6 However, unke{un}and umk€{un} of the functional 81(un),where {8%(un)} is a decreasing sequence; also, from (6.3.3.2), we have But 8%(u) is decreasing; therefore (considering (6.3.3.1)) 91(1k) — (21(un ) i 0 k and 170 m ) 3 O . Q (A ) 1 k k - Ql(u Thus, for sufficiently large k, we have: _ _ ' 91(Ak) 01(un ) 3 Ck k 91(Ak) - 01(umk) 3 8;; where 5*, efi + 0 as k+w . Now we have: — 2 — 2 Q1(Ak) : llxk - no!) = llunk + 5k - uoll _ _ _ 2 Define u — E = u , u - E = u . 
n 0 n m 0 m k k0 k k0 Therefore, we have: 2 2 91(Ak) = Ilunk + gkll llum - €k|| (6.3.3.3) 171 — 2 2 9 (u ) = IIu - u II = llu || 1 nk nk 0 nk0 2 2 k k 0 and (6.3.3.3) becomes: 2 2 IIun +€kll - IIun II z-e]; ko ko (6.3.3.4) 2 2 u ||umk - Ekll - llumk ll 3 6k 0 O or if we substitute the eXpansions of ||unk + gk||2 and 0 2 . . llumko - gkll , we obtain. 2 2 2 ||un II + 2(un 15k) + llikll - ||un || 2 -e ko ko ko (6.3.3.5) 2 2 2 II ||um || - 2 or -2||§ II2 > -(€'+€") k — k k H: ((2 < o 5(6'+e") k - ° k k ° However (€i+€fl) + O as k+m 173 therefore llakll2 + 0 as k+w or 2 ||0.5(um -un )II + 0 as k+m k k and consequently which contradicts the assumption according to which llikll = 0.5llumk-unkll 3 0.56 or Therefore {un} is a "fundamental" sequence in Ul that converges strongly to an element fi of U1. Since the metric in Ul majorizes the metric in U. Therefore we have where u* is the Optimal control of the time-optimal problem. 174 Therefore the minimizing sequence {un} converges strongly to an element fi z u* which is the optimal value of control function ueUl. Theorem (6.3.4) - The controllable (non-singular) linear time-invariant minimum-time regulator problem is always "regularizable". Proof - The proof is straight forward and it is a direct result of Theorems (6.3.2) and (6.3.3). Again we consider two cases: Case 1. The class of admissible controls Ul is a sub- set of U, where U is a "complete" m-dimensional metric space. In this case we have proved that (Theorem (6.3.2)) if the controllable linear time-invariant minimum-time regulator problem has a unique optimal solution u*, then the minimizing sequence which satisfies the condition c (6.3.4.1) converges to u* (Tikhonov's regularization). Notice in this case if we take {ua } as where 175 ||u; II + 0 as k+mr {u' }EU I k “k 1 then the sequence {ua } converges to optimal value u* as k k+m, and the condition (6.3.4) becomes a k _ _ _ _ , * Bl — Tu tO + akQ(uak) — Tu t0 + dk0(ua +u ) i a a k k k *_ 'k T t0 + de(u ) + akc . Since Tu > T* therefore O‘k _ * ' * * Tua T + akQ(uak+u ) i ak0(u ) + akc k or I * * de(u ak+u ) : de(u ) + akc Q(u& +u*) i Q(u*) + c = d . (6.3.4.2) k Since (u'a +u*) is a "stabilizing" functional and therefore k is a compact subset of U1, it is always possible to construct the sequence {ué } such that k ||u' || + O as k+w (convergence sequence) “k and u& satisfies the condition (6.3.4.2). For such ué , k k 176 then the sequence ua defined as u = u' + u* 0‘k 0'1: satisfies the condition (6.3.4.1) and converges to the Op- timal value u*. Case 2. The class of admissible controls Ul is a Hil- bert space and is also s-compactly continuously convexly embedded in U. In this case we have proved that (Theorem (6.3.3)) we can construct a sequence {ua} which minimizes the func- tional 8%(u) and converges to u*, the Optimal control of controllable linear time-invariant minimum-time regulator problem. Notice that since ua minimizes 8%(u) and con- verges to u* then we have: a = - 81(ua) Tua t0 + aQ(ua) Min 8%(ua) = T* - t0 + aQ(u*)(Q is nonnegative function). Therefore 0‘ = _. 1k- * 81(ua) Tua t0 + aQ(ua) + T tO + ag(u ) . 
Since 177 ua + u*, thus 9(ua) + Q(u*) and therefore _ — 'k Tua tO + 60(ua) + Tua to + a0(u ) + T* - t0 + dQ(u*) Therefore the sequence {ua} which minimizes 83(ua) and converges to u* (according to Theorem (6.3.3)) is a mini- mizing sequence for which Consequently for the controllable linear time-invariant minimum-time regulator problem, if the class of admissible controls is a Hilbert space and s compactly continuously convexly embedded in U, then the sequence {ua} which mini- mizes the functional a — .- 81(ua) — Tua t0 + 69(ua) converges to the optimal value u*. On the other hand, if 178 the class of admissible control is not a Hilbert space but is a complete m-dimensional metric space, then we can always construct a minimizing sequence, satisfying condi- tion (6.3.4.1), which converges to u*. Therefore the con- trollable linear time-invariant minimum-time regulator problem is tOpologically ill-posed (in the sense of Tik- honov) but "regularizable" (see Example 6.4.5)). 6.4. Examples Example (6.4.1). Consider the following linear time- invariant minimum-time regulator problem: rank E = 2. The problem is "controllable" . Now let's check El and E2 where 179 _ I El — [bl: Ab E1 is "singular" . 4 4 = [ ] + det E = 16 E2 is "non-singular". Therefore, it is possible to have "singular" linear time-invariant minimum time regulator problems (since E1 is singular) for which the system is controllable (E is full-rank). However, we should notice that, in this case we have xi = x1 + 4u2 x2 = x1 + x2 + 3u1 + 2u2 and x1 is completely independent of control input 111 (ill- posed and unregularizable (Section 6.2)). Example (6.4.2). Consider following linear time-in- variant minimum-time regulator problem: 180 l 0 4 2 E = [ :]+ rank E = 2 + "controllable" system 2 l 8 3 : 1 4 E1 = [bl: Abl] - [2 8] + det E1 = 0 + Singular : 0 2 E2 = [b2: Ab2] = [1 3] + det E2 = -2 + non-Singular . Therefore the problem is "singular" (det E1 = 0), but the system is "controllable". However, we claim that at least one state variable is independent of control ul. The state equations are: (1) x = 2x1 + 3x2 + 2u + u2 . (2) Multiply (l) by -2 and add to Equation (2): 181 x2 - 2xl = 2xl - x2 + u . (3) Now let's define and Equation (3) becomes x 3 = -x3 + u 2 where x3 is completely independent of control u1 (i11- posed and unregularizable (Section 6.2)). If we also use the "canonical coordinate transformation", we will obtain: eigenvalues of A = 4, -1 II A eigenvectors of A Therefore 1 -2 -l l 1 2 p = [ ] and p =— _ 2 l 5 [ 2 l] 9'1 x(t) = p'le(t) + p‘lBu or 1 1Apy(t) + p" Bu y(t) = p- 182 where _ 4 0 _ 5 2 0 -l O 1 Therefore 4 C y 5 2 u Y(t) =[ J[ 1]+[ ][1] O - y2 O 1 112 or m. 1.: (1- I .5 "< [_J + U1 C.‘ H + N I: where y2(t) is completely independent of control u (ill- 1 posed and unregularizable (Section 6.2)). Example (6.4.3). Consider the following linear time- invariant single-input minimum-time regulator problem: x = x1(t) x2 = u(t) J = font = T, |u(t)| 3 1 183 In this case 0 O E = [ ] + det E = 0 + E singular. l 0 Since the problem is single input, therefore for singular E the system is "uncontrollable". Now let's show that the singular interval exists. Consider the Hamiltonian: H = l + plxl + p2u . Using "the minimum principle" we get: ** ** ** 'k l + plx + p2u i l + plx + pzu l l or * * * p2u i p2u where USU and te[0,T]. Therefore, we have: 184 ' - 'k 1 p2 > 0 u* = unknown p; = 0 + singular interval. * +1 p2 < 0 For singular interval we have p5 = 0, therefore *=_ .*= *z pl pi and p2 0 + p2 0 *_ ‘1'. 
pl ‘ p10e - t x1 x1 xl(t) XlOe x* = u* + x*(t) = ftu*(T)dT + x 2 2 0 20' Now the Hamiltonian becomes: = * * * * = = H 1 + p1x1 + p2u 1 + p10e x e 1 1 plOXlO but therefore 1 + p10x10 = 0 2 p10 = ‘ i1; Therefore, there exists a singular interval and for that: 185 xi = xloet x5 = {fu*(1)dT + x20 for te[0,T*] (6.4.3.1) pi = - il—-e-5 [u*(t)] < l 10 p3 = 0 Remark. If the Optimal control is singular, it is singular throughout the interval of operation of the system. Now let's show that the above singular problem is ill- posed. We will consider three cases. Case_1 - Let's suppose for the time difference of 1/4 sec, we want AX < that is, if At = 1/4 sec then we want Ax < l/2. The above problem is singular and suppose the time interval is te[0,2], then we have (6.4.3.1): and 1 t+2r et) 1 _ _ t xl(t + z) - Xl(t) — x10(e — xloe (e - l) t Ax* E 1.3 xloe . 1 Let's suppose u* = get where 186 |6e2| < 1 + lal < 33 = 0.135 e (since |u*| 1 for all te[o,2], therefore if 1 the time changes l/4 second then the magnitude of the state variable xl changes with an amount larger than acceptable range. Since x1 is independent Of control ul, we cannot reduce the change in x Meanwhile the control will change 1. with an amount Au* = a(l.3)et + Max Au* : 9.6a tEEOIZJ Also we have 187 6x5 = 1.3 oet e Max Ax; : 9.6a NIH 9.66 < + a i 0.052 . Therefore if we choose a = 0.052, then for the time dif- ference of 1/4 second the magnitude of state variable x2 and of the control function u* will change with an amount less than 1/2 while the magnitude of the state variable x1(t) changes with a relatively large amount (larger than 1/2) of 1.3 et. Therefore the problem is ill-posed. * is independent of control 1 u*. Let's suppose due to some faults in the system the Case 2. The magnitude of x magnitude of xi jumps to some relatively large value, then xl keeps the value since it is not controllable and we can- not correct the magnitude of X1 by the control u*. How- ever this case cannot happen to x5 since if the magnitude of x2 changes to some large amount, then we can immediately correct that by choosing the appropriate value for a (con- trol). Now let's suppose we have more general form 3;. ll 2 Au(t) Then the singular solution is: 188 t x* = UK e 10 ‘ {fAu(T)dT + x X 1(- l 20 Assume u*(t) = aet where |u*| < l (singular case) + Ial < 0.135 for all te[0,2], then x* = &anerT = dAet - dA + x = dA(et-l) + X 2 20 20 ' Suppose due to some faults in the system x* + m then x* = aA(et-l) + X 1 2 10 can be controlled by choosing a very small. Case 3. Continuous dependence of the input and the state variables. Suppose u* + u* + s, then t * 'k = * x2 + x2 + &)gdt x2 + at and 189 Therefore x* remains the same and is independent of e. 1 Thus, if u* changes relation between x5 6 (ill-posedness in Example (6.4.4). by s then there exists a continuous and 6. However x*, is independent of the sense of Hadamard). Let's consider the controllable linear time-invariant single-input minimum-time problem which we will show it is non-singular, and then we proved that it is ill-posed (Section 6.3). The problem is x1 = x2(t) , x(0) = x0 x2 = u(t) where _ T _ J—fodt-T IU(t)| i 1 The Hamiltonian for this problem is H = 1 + plx2 + p2u(t) . The ”minimum principle": 1 + pix* + p5u* i 1 + pix* + pan 2 2 190 Therefore the optimal control is: +1 if p3 < 0 u* = -1 if p5 > 0 unknown if p3 = 0 + "singular interval" Now we show that the case of "singular interval" cannot happen. 
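Referring back to Example (6.4.3), the short simulation below makes the ill-posedness in the sense of Hadamard there concrete: perturbing the control by ε changes x₂ by at most εT but has no effect at all on x₁, which cannot be influenced by u. The integration scheme, the nominal control, and the value of ε are illustrative choices, not taken from the example.

```python
# Simulation sketch for the uncontrollable system of Example (6.4.3):
#     x1' = x1,   x2' = u,   t in [0, 2].
# Perturbing u by eps moves x2 continuously (by at most eps*T) while x1 is
# completely unaffected.  All numerical choices are illustrative.
import numpy as np

T, N = 2.0, 2000
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]
x10, x20 = 1.0, 0.0

def simulate(u):
    """Forward-Euler integration of x1' = x1, x2' = u(t)."""
    x1 = np.empty(N + 1); x2 = np.empty(N + 1)
    x1[0], x2[0] = x10, x20
    for k in range(N):
        x1[k + 1] = x1[k] + dt * x1[k]
        x2[k + 1] = x2[k] + dt * u[k]
    return x1, x2

alpha, eps = 0.05, 0.3
u_nom = alpha * np.exp(t)            # a singular-type control with |u| < 1 on [0, 2]
u_pert = u_nom + eps                 # perturbed control

x1_a, x2_a = simulate(u_nom)
x1_b, x2_b = simulate(u_pert)
print("max |delta x1| =", np.max(np.abs(x1_b - x1_a)))          # exactly 0
print("max |delta x2| =", np.max(np.abs(x2_b - x2_a)), " (eps*T =", eps * T, ")")
```

However large a fault in x₁ may be, no admissible control can correct it, which is why the singular (uncontrollable) problem is unregularizable.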
Suppose p; = 0 then: I33:—p1=—C+p5=—Ct+clo Since p5 = 0 (singular case) + -ct + cl = 0 for all t. Therefore, we must have: Therefore However, ll i—J + 0 ll H = 1 + p*x* + p§u* 1 ¥ 0 1 2 191 Therefore p3 = 0 violates the necessary condition that H(x*,p*,u*) = 0 for all te[O,T], and thus the "singular interval" does not exist. Therefore, the acceptable optimal control is: (+1 p* < 0 ‘p: = C u* = j where for all te[0,T]. -l p* > 0 (p3 = -ct + cl Now let's evaluate the state variables: >< II x N x . N ;; ll H- H + x NI: 7.; II I (1- + x N O l 2 * = *— *=— *=— u (t) :1, pl - c, p2 ct + Cl' xl :Zt + XZOt + xlo *= = — = x2 it + x10, H lict + cx20 + ct + cl 0 l + cx20 1 c1 = 0 + condition for H = 0 . Now let's show that the problem is ill-posed (Section 6.3). 192 Consider the following "minimizing sequence": where |/\ t (2k+1) “ 1-1 (2k—l) £113 < t E 2k slash-3 We have proved (Section 6.3) that the above sequence (fin) is a minimizing sequence for minimum time problem: . o 1 o X [ ] X + [ ] u o o 1 , At At ft -AT xn(t) = e x + e (3 e Bun(T)dT where l - 0 x (t) = eAtx + eAtfte-A18u*dr+eAtft . u d1 n 0 0 0 1 -T XT (T)=xf=eAT x0+eAT Q? eATB fldI+eAT&? [ ] u dT . un 193 Now consider the term T _ = T/n 2T/n nT/n .6 TundT lb 'T/n -+..u+ (n-l)T/n, Tu dT — = T/n _ 2T/n nT/n O TdT + '[I‘/n Td‘f +...._ f(n-l)T/n TdT 2 2 2 _ l _ T_ 4T _ T_ 2 _ _ T 2 _ _§( 2+___2 2+.. .+.(T (T n) )) n r1 n _ 1 T2 4T2 T2 2 2 T2 2T2 -§-(——2—+—T"—2‘+. oi(T -T -—'2"'—r1_-)) n I) n n 2 2 . T _ . 1 _ 2_ _ T_'_ 2T 1im QJ-Tundr — lim 2( 2 +...i ( —H—)) + 0 n+oo n+oo n n T _ T/n T 1 _ 2_'I_: :1: .. _T_ foundT - f0 +°°'if(n-1)T/nd1 n n + n +. (T T+n) lim 4? u d1 + n Therefore 194 Thus Therefore an is minimizing sequence, now let's evaluate fin in L2 metric: dt+0 O C llfinll = QfI6n+u*Izdt = f lunil llfin II= §(§)(1:1)2 + g(g)(-111)2 if n=even ||fin|| = §(E%1x111)2 + £(Eg1x-111)2 if n=odd If u* = +1, then llfinll = g(4) = 2T n=even ||fin|| = §(1-%)(4) n=odd limllfinlln=even or odd= 2T 6 +1 = u* n+oo If u* = -1, then 195 ~ - T - — ~ -1 1 _ 1 Hun” - §<4>-2T n-even llunll - 2(1+n)(4) — 2T(l+n) n=odd 1im1|un||n=even or odd= 2T 6 -1 = u* ’ Therefore ~ ||u f u* in L2 metric, nII and the problem is ill-posed in the sense of Tikhonov. Now consider the c metric: = Maxlil+u I = 2 ¢ :1 = u* c n ||fin||C % u* in c metric and therefore the problem is ill-posed in the sense of Tikhonov. Example (6.4.5). Let's consider the problem of Example (6.4.4), which is controllable (non-singular) linear time- invariant single input minimum-time regulator problem. We showed that (Example 6.4.4) the problem is topologically ill-posed, now let's show that the ill-posed problem is regularizable (Section 6.3). The problem is: 196 X1 = x2 . T x2 = u(t) |u| i,1 J = Q) dt (6.4.5.0) x(0) = x0 O'k Let's define 81 such that a a k k 81 (uak) i 801 + akc (6.4.5.1) “k 81 = Tu + de(uak) a k 0‘k = * 'k 801 T + ak0(u ) Therefore we have * * Tu + ak0(ua ) 3 Tu* + akQ(u ) + akc a k k Since T > T*, u “k therefore _ * * Tu T + ak0(ua ) : akQ(u ) + akc dk k 197 or a Q(ua ) i’d k Q(u*L + a c (6.4.5.1) k k k Now define Therefore (6.4.5.1) becomes * [T (u -u*)2dt < c O a — k Now choose u = u' + u* dk dk and choose _ l . _ -t i —t (where k # O and |% e-t|oo . “k Therefore the "regularized" sequence ua converges to the k Optimal value u* while the cost functional Ja converges k to J* and the state Xa also converges to its Optimal k value x* for the controllable linear time-invariant mini- mum-time regulator problem. CHAPTER 7 ILL-POSEDNESS OF THE FUEL-OPTIMAL REGULATOR PROBLEM 7.1. 
Ill-Posedness Of the Fuel—Optimal Regulator Problem for Singular and Non-Singular Linear Time-Invariant System In Chapter 6 we considered problems in which the Objec- tive was to transfer a system from an arbitrary initial state to a specific target set as quickly as possible. Let us now consider problems in which the required control effort, rather than time, is incorporated in the criterion of Optimality. Such problems arise frequently in aero- space applications, where Often there are limited control resources available for achieving desired objectives. There- fore we shall consider the problem of determining the fuel- Optimal control that transfers a given initial state x0 to a given terminal state xf. The fuel-optimal regulator problem for a linear time- invariant system, which we will discuss in this section, is the following: Given the dynamical system k(t) = AX(t) + Bu(t) (7.1.1) 205 206 l. The n vector x(t) is the state. 2. The system matrix A is an nxn constant matrix. 3. The gain matrix B is an nxr constant matrix. 4. The r vector u(t) is the control. Assume that the components ul(t), u2(t), ... , ur(t) of u(t) are constrained in magnitude by the relations |uj(t)| i l j=l,2,...,r for all t . (7.1.2) Also assume that at the initial time t0 = 0, the initial state of the system (7.1.1) is x(0) = x0 . (7.1.3) We are also given a terminal state x(T) = xf . (7.1.4) The problem is to determine thefuel-Optimal control u*(t) that transfers the system (7.1.1) from xO to xf in time T and that minimizes the fuel functional J(u) = J = &? IIMH |uj(t)|dt (7.1.5) 1 J r (The results will be the same for Z |uj(t)|2). i=1 207 b Let b ..., br denote the column vectors of the 1! 2' gain matrix B, i.e., B = [b b .. b J . (7.1.6) Our approach to determine the "singularity" and "non- singularity" conditions will be to use the "minimum prin- ciple" to determine the Optimal control law. The Hamiltonian is r H.u = z Iuj(t)l + + > i=1 where p(t) is the costate vector and is the solution of the linear homogeneous equation é(t)=-—=-Ap(t) . The "minimum principle" requires that r Z |u§(t)| + + i=1 r T g z |uj(t)| + + i=1 (7.1.7) for all admissible u(t)gU, and for all tg[0,T]. 208 Equation (7.1.7) yields, for |uj(t)| i l , u§(t) = 0 if ||<-1 u§(t) = -1 if >+1 0 i u3(t) 1 +1 if =-l -l i u§(t) i 0 if =+l Definition (7.1.1). For the fuel-Optimal problem to be singular, it is necessary that in the control interval [0,T], there be at least one subinterval [t ] such that 1't2 for some integer j, | = +1 for all te[t1,t2] (7.1.8) holds. Since the function is constant (+1) all its time derivatives must be zero. By repeated time differentiation of Equation (7.1.8) and by using the fact 209 that 6* = “ATP*(t) I we Obtain: §% = o + = o + = 0 or (Abjlp*(t)> = 0 then £1, * = 1 °* = _ T * = o dt 0 0 + (7.1.9) or = o n-l = 0 (Anbj,p*(t)> = 0 210 We pick the first n equations Of the set (7.1.9), and we re- write them in the following form: r -. 4— Ab , + 3 + Azb. + 3 p*(t) = 0 (7.1.10) + An-lb. + J + Anb + 3 .1 Now let's define the nxn matrix Ej as follows: -1 E. = b. Ab. A“ b. J [J J 3] Then Equation (7.1.10) reduces to the equation EgATp*(t) = O for all t€[tl,t2] . (7.1.11) However, according to the singularity condition, || = l, p*(t) cannot be zero; thus, for Equation (7.1.11) to hold, it is necessary that the matrix EE‘AT be a singular matrix. Therefore, we must have the rela- tion 211 T T det(EjA ) = (detA)(detEj) = 0 or det A O or det E. = 0 . 
Therefore, for the linear time-invariant fuel—Optimal regulator problem to be singular, it is necessary that (det A)(det Ej) = 0 for some j . Remark 1. For the "singular" linear time-invariant fuel-Optimal regulator problem if det Ej = 0, then Ej is singular. Then we have two cases. Case 1. Single-input system. In this case if det Ej = det E ll 0 ‘ then the system is "uncontrollable". Case 2. Multiple-input system. In this case, as we discussed in Chapter 6, det Ej = 0 does not provide the "uncontrollability" Of the system and the system can be "controllable" even though det Ej = 0. However, in this case, as we mentioned in Chapter 6, at least one state variable is independent of control uj. 212 Remark 2. The condition (det A)(det Ej) = o is a necessary condition for existence of the singular interval. It is possible to have (det A)(det Ej) = 0 for non-singular linear time-invariant fuel-Optimal regu- lator problem. Therefore, it is possible that the linear time-invariant fuel-Optimal regulator problem be 1. "Controllable" and "singular" (not possible in time-Optimal problem), or 2. "Uncontrollable" (or det Ej = 0 for some j) and "non-singular" (not possible in time-Optimal problem). Thus, in order to discuss the ill-posedness Of the linear time-invariant fuel-optimal regulator problem, we must divide the problem into two categories "controllable" and "uncontrollable" rather than "singular" and "non-singular", since the "controllable" fuel-Optimal problem can have "singular" and/or "non-singular" intervals and the "un- controllable" (or det Ej = 0 for some j) fuel-Optimal problem can also have "singular" and/or "non-singular" 213 intervals (Example (7.3.l)). However, in the case Of time- Optimal problem (as we discussed in Chapter 6) the "sin- gularity" also represented the "uncontrollability" or singularity of matrix Ej for some j, and "non-singularity" represented the "controllability" of the system. Theorem (7.1.1) - The linear time-invariant "singular" or "non-singular" fuel-optimal regulator problem is given as (7.1.1) — (7.1.6). If detE.=dtb. Ab. ...A .=0 3 e E 3 3 b3] for some j, then the problem is tOpOlogically ill-posed in the sense of Hadamard and Of Tikhonov. Proof - We investigate two cases: Case 1. Single input system. In this case as we mentioned in Remark 1, the system is "uncontrollable", and therefore according to Theorem (5.2.A.l) the problem is topologically ill-posed. Case 2. Multiple input system. In this case we have det Ej = 0 for some j Therefore at least one state variable is independent Of 214 control uj in original, canonical, or combinational form of state equations (as we discussed in (5.2.A)) and therefore according to Theorem (5.2.A) the Problem is topologically ill-posed. Theorem (7.1.2) — Assume that the ill—posed linear time- invariant singular or non-singular fuel-Optimal regulator problem, given as (7.1.1) - (7.1.6), is either "uncontrol- lable" or det Ej = 0 for some j . Then the ill-posed problem is "unregularizable". £392: - Since the problem is either "uncontrollable" or at least one state variable is independent of control uj (det Ej = 0), then according to Theorem (5.2.A.2) or Theorem (5.2.A.3) the problem is also unregularizable and the solu- tion of state equation is always unstable in the sense Of Tikhonov or of Hadamard. Therefore, if the linear time-invariant singular or non-singular fuel-Optimal regulator problem is "uncontrol- lable" or det Ej = 0 for some j , then the problem is topologically ill-posed and unregulariz- able. 
215 Theorem (7.1.3) - The controllable linear time-invariant fuel-optimal regulator problem, given as (7.1.1) - (7.1.6)), is always topologically well—posed in the sense of Tikhonov. Proof - TO prove that the controllable system (7.1.1) - (7.1.6) is always well—posed, consider the following mini- mizing sequence for the jth component of control u jn = u; + ujn (7.1.3.1) where I i 1, Iu§I : l, Iu. Ifijm Jnl — Notice that, ujn is the control sequence depending on n, and any minimizing sequence, fijn’ can be of the form of Equation (7.1.3.1). Now let's substitute (7.1.3.1) in (7.1.5): r T T T Z Iujnldt = &)|uln|dt + &)|u2n|dt +...+&)|urn|dt = r rn T T + ulnldt + 4)Iu§ + uznldt +...+ &)Iu* + u Idt T T T T T i [O Iuildt + f0 |u§|dt+...+ f0 lugldt +f0 Iulnldt+...+fo Iurnldt 216 The Optimal cost functional is: T r T T T t = * = t * * J f0 jEl|uj| f0 lulldt + [O |u2|dt +...+ f0 |ur|dt Since fijn is minimizing sequence therefore J~ + J* as n+w u n or T T T T f0 IuIIdt +...+ f0 lugldt + f0 Iulnldt +...+ f0 lurnIdt Iflufldt +...+ Iflugldt as n+w . (7.1.3.2) Since Iu.n| Z 0 and the integral Of the positive function over the positive interval is always positive, we have T . &)Iujnldt 3 0 for j=l,2,...,r Therefore in order to have Jfi + J* as n+m or 217 T T T T f0 luildt +...+ ‘6 |ult|dt + f0 lulnldt +...+ f0 lurnldt + (fluildt +...+ Iflugldt as n+m we must have &?|ujnldt +.0 as n+w or |u. +0 for j=l,2,...,r as n+m . (7.1.3.3) Jnl Now consider the L2 measure Of the norm of the minimiz- ing sequence fin. We have = Ilujn+ ugllg llujnll + llugll forj=1,2,....r ||u. Izdt , T jnIIL2 _ I1Iujn where ujn is continuous or piecewise continuous function. Now we have 218 . _ . T 11m||ujn|| - 11m I1Iuinl dt n+oo n+oo T . = )0 11m|uin|2dt n+oo but according to (7.1.3.3) we have Iujnl +0 as n+w or |u. I2 + 0 as n+m jn therefore T . 2 4) 11m|uin| dt + 0 n+oo and Therefore, for the minimizing sequence fijn llujnll Ilujn||+0 as n +w. we get Ilfijn - ugll + 0 as n+w 219 or IIfijnIIL2 * IIU3‘IIL for j=l,2,...,r as n+w , and the minimizing sequence converges to its Optimal value u*. Now consider the c-metric measure of the norm of fin 7' — * * IthC-H%n+%H:H%nI+H%H ||ujn|| = Max |ujn(t)| tCEOIT] limIIu. II = lim Maqu. (t)] = Max liqu. (t)] 3n 3n jn n+w n+w te[O,T] te[0,T]n+w but according to (7.1.3.3) we have luinl +0 as n+w therefore lim| Iujnl Ic + 0 n+oo or limllfi. 3 + u? for j=l 2 ... r n+oo II jHC ’ ’ ' nllc 220 and the minimizing sequence fin converges to its Optimal value u*. Since any minimizing sequence can be represented as the form Of (7.1.3.1), then according to the proof given above, any minimizing sequence converges to the Optimal value u* and the controllable linear time-invariant fuel—Optimal regulator problem is always topologically well-posed in the sense of Tikhonov (see Examples (7.3.2) and (7.3.3)). 7.2. Ill-Posedness of More General Form of Fueljgptimal Regulator Problem Let us assume that the state equations of a system are of the form Mt) = g(x = 2 |uj +1 3 J 3 u? = o if -1 p*T(t)bj(x*(t)) < +1 singular 0 < u; < Mj if p*T(t)bj(x*(t)) == -1 interval 1 o if p*T(t)bj(x*(t)) = +1 Now consider the following theorem: Theorem (7.2.1) - The controllable fuel-optimal regu- lator problem, given as (7.2.1) - (7.2.3), is topologically well-posed in the sense Of Tikhonov. Proof - The proof is very much similar to the one of Theorem (7.1.3). 
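Before the proof is completed, the following computational sketch collects, for reference, the two ingredients used in this chapter: the per-input matrices E_j = [b_j, Ab_j, ..., A^(n-1)b_j] of Section 7.1 together with the necessary singularity condition det(A)·det(E_j) = 0, and the dead-zone ("bang-off-bang") control law obtained above from the minimum principle. The code is illustrative only; the matrices are those of Example (6.4.1), and the sample values of ⟨b_j, p*⟩ are arbitrary.

```python
# Illustrative sketch of (i) the per-input matrices E_j and the necessary
# singularity condition det(A) det(E_j) = 0 of Section 7.1, and (ii) the
# dead-zone fuel-optimal control law of Section 7.2.  Matrices are those of
# Example (6.4.1); the sample costate values q_j = <b_j, p*(t)> are arbitrary.
import numpy as np

def controllability_matrices(A, B):
    """Full controllability matrix E and the single-input matrices E_j."""
    n = A.shape[0]
    powers = [np.linalg.matrix_power(A, k) for k in range(n)]
    E = np.hstack([P @ B for P in powers])
    E_j = [np.column_stack([P @ B[:, j] for P in powers]) for j in range(B.shape[1])]
    return E, E_j

def fuel_optimal_control(q):
    """Dead-zone law: u_j = 0 when |q_j| < 1, u_j = -sign(q_j) when |q_j| > 1;
    |q_j| = 1 is the boundary on which a singular interval may occur."""
    q = np.asarray(q, dtype=float)
    return np.where(np.abs(q) < 1.0, 0.0, -np.sign(q))

A = np.array([[1.0, 0.0], [1.0, 1.0]])
B = np.array([[0.0, 4.0], [3.0, 2.0]])
E, E_j = controllability_matrices(A, B)
print("rank E =", np.linalg.matrix_rank(E), "(controllable iff equal to n)")
for j, Ej in enumerate(E_j, start=1):
    necessary = abs(np.linalg.det(A) * np.linalg.det(Ej)) < 1e-12
    print(f"det E_{j} = {np.linalg.det(Ej):+.1f}  ->  singularity condition "
          + ("holds" if necessary else "fails") + f" for u_{j}")
print("u* for q =", [-1.5, -0.3, 0.7, 2.0], ":", fuel_optimal_control([-1.5, -0.3, 0.7, 2.0]))
```

For these matrices the first input satisfies the necessary singularity condition while the pair (A, B) remains controllable, which is exactly the situation discussed in Remark 1.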
Once more we can represent any minimizing sequence by the form ~ = * ujn uj + ujrl (7.2.5) 223 where u; = aj = constant, -M 5 aj i M j=l,2,...,r ~ T r T T Jn - f0 2 Ifijn|dt = [0 |u1n|dt +...+ f0 lurnldt = . Idt < _ T T '— b IOLl + u1nldt +°°°+ I)|ar + urn — T T T T {)|al|at +...+ I,|ar|dt + I)|uln|dt +...+ I)|urn|dt J* = Tylalldt +...+ {Flarldt Since fi'n is minimizing sequence, therefor + J* as n+m Lu :1: and T T T T [0 lalldt +...+ f0 larldt + [0 |uln|dt +...+ f0 lurnldt 1 (Flaildt +...+ &?|ar|dt . as n+w (7.2.6) 224 Since fT 0IujnIdt: 0 therefore for (7.2.6.) to satisfy we must have In. |+0 as n+oo jn Now we have ~ T 2 llujnllL - Hujn-ajll — 70 Iujnl at IIujnIIL2 + 0 as n+oo or ~ _ 2 00 II 3nIIL2 + IIOLjIIL — Taj as n+ or IIMH H 1 _ 2 m IIunIIL2 * IIu*IIL2 - T. aj as n+ 3 In c-metric we have: Maxlu. | jn tEEOIT] 225 Therefore ||u = llfi--a jnllc (Since as we showed previously |u. + 0 as n+w) JnI and thus, or C. ||~n|lc + ||u*llc as ...... Since any minimizing sequence can be represented as the form of (7.2.5), then according to the above proof, any minimizing sequence converges to the Optimal value u* and the controllable fuel—Optimal regulator problem, given as (7.2.1) - (7.2.3), is topologically well-posed in the sense Of Tikhonov. 7.3. Examples Example (7.3.1). In this example, we want to show that the condition det (A) det (Ej) = 0 (7.3.1.1) 226 which we discussed in Section 7.1, is a necessary but not sufficient condition; that is, the condition (7.3.1.1) can be satisfied while the fuel-optimal problem does not have a singular interval. Consider the following fuel- Optimal regulator problem with the single input: X '-3 II where 1 0 l l A=[ I: “[1 l 0 l 1 In this problem A and E are both singular since det A = 0 and det E = 0 . The Hamiltonian is: H = Iu(t)I + pl(xl+u) + p2(x1+u) 227 and the costate equations are: - 8H * _ _, __ = _ (7.3.1.2) p* = - 32L = O + p* = constant = c 2 8x2 2 2 The minimum principle must satisfy: Iu*(t)| + pix: + pgxi + (pi+p3)u* i |u(t)| + pix: + paxi + (pi+p§)u or |u*(t)I + (pi+p§)u* i |u(t)| + (pi+p3)u , Therefore the Optimal control is: *z— ° ** u 1 if pl+p2>1 u* = +1 if pi+p§<-1 *=._ ' - ** u if 1 p1+p2.LI‘_l u*(t) = 0 if -l1 u*(t) = +1 p*(t)<-l 0 -1f f(u*)dt a -t O }< o t ft. or (ua is a minimizing sequence) k J(uak) _>_ J(u*) and (8.1.1.2) becomes: J(uak) - J(u*) + ale(uak) : ale(u*) + dkB or del(ua ) IA * ale(u ) + dkB and consequently we get: S21(uak) _<_ 91 (11*) + B - | O H E. » I: u 0v + ID II o, + ‘m II D. or (8.1.1.3) Therefore, all elements of the sequence {ua } belongs to the k compact set 254 Udo = {u: ueUl, Ql(u) i do} . Also, it follows from (8.1.1.2) that, 0 : J(uak) - J(u*) : [J(uak) - J(u*)] + ale(uak) 5 ale(u*) + akB , since the stabilizing functional 91 (ua ) is nonnegative, k and therefore 0 i J(uak) - J(u*) i ale(u*) + akB. (8.1.1.4) Since OR is a decreasing sequence therefore we have: and (8.1.1.4) becomes: . _ * . * . 0 i 11m(J(uak) J(u )) i 11m ale(u ) + l1mak8 , k+oo k+oo k+oo since R u*) is a "stabilizing" functional, is bounded and l( B is a finite number, we have 0 < 1im (J(u ) - J(u*)) < O _ a _ k+m k 255 or J(ua ) + J(u*) as k + w and thus, the sequence {ua } is a minimizing sequence. k Since {ua } belongs to the compact set fido for which R 2 - 9 (1.1 ) = D (u Iu ) < d I l dk l ak 0 — 0 it is regularized and consequently : * m {ua } + u0 u as k + . 
Therefore, for the "controllable" optimal-control problem described by (8.1.1) and (8.1.2), a minimizing se- quence {ua } which satisfies condition (8.1.1.1) converges k in the metric of a complete m-dimensional subspace UlCU to u*. Theorem (8.1.2) - If Ul (with metric p1(u1,u2)) is a Hilbert space and if Ul is s-compactly and continuously convexly embedded in U, then for the "controllable" Optimal- control problem given as (8.1.1) and (8.1.2) there exists an element ua(t)eU, which minimizes the functional M%(u), where t 143m) = J(u) + (191(u) =ftf 2 - f(u(t))dt + (u,u ) , O 0“31 o 256 and converges to the Optimal value u* as a + 0. Proof - Assume that {um} is a sequence minimizing the functional M$(u) such that the sequence {M$(un)} converges to MS, a _ . a MO — inf Ml(u) ueUl Without loss Of generality we can assume that {M§(un)} is a decreasing sequence, so for every n, there exists uleUl such that Ma(u ) > Ma(u ) = J(u ) + an (u ) l l — l n n l n or g) (u) < imam )-J(u )3 (8.1.2.1) l n — a l 1 n ' Since f(u) is a nonnegative functional and tf f(u*)dt 3 o t f(u )dt > f o n '— O or J(un) Z J(u*) : 0 , 257 where u*,eUl is the Optimal control minimizing the func- tional J(u), then we can write (8.1.2.1) as: 81mm) : §[M§(ul) - J(u*)] = Constant = c , (8.1.2.2) where C is independent of n. Now we have: 9 (u ) = 02(u G ) = [In —G l n l ' 0 n 0 (where u is a fixed element of U1 very close to u*.) 0 Therefore, we can write: llun-fioll : llunll + Ilfioll : c and IIunIIjiC' (91 is bounded and 60 is also finite). Therefore {un}€Sr; but we assumed that the balls Sr are compact in U and thus the sequence {un} is also compact in U and will converge to some limit point; we label this point ECU. Let us show that E converges strongly to an element fieU 1' TO this end, we show that un is "fundamental" in U1, that is, for every 5 > 0, there exists an n(e) such that 258 the inequality holds for n 3 n(e) and p > 0 (un and un+p are in a neigh- borhood of u). Assume that this is not the case. Then, there exist an e and numerical sequences {nk} and {mk}, 0 where Now define Yk = 0.5 (um -un ). Then, = u - Y . (8.1.2.3) k “k “k k k Since un and u are elements Of a minimizing se- k k quence {un} of the functional M$(u) and since the sequence {M%(un)} is decreasing and we also have 259 or (1 Cl Ml()\k) - Ml(un ) 3 o. k (I (1 Ml()k) Ml(umk) 3 0. then for sufficiently large k we have: J(xk) + anl() (J(un ) + aalmn )) k k | v I m k) (8.1.2.4) J(Ak) + dQl(Ak) - (J(umk) + an(umk)) Z 'Efi where Si and efi approach zero as k + w. By virtue Of the continuous convexity of the embedding of U1 in U, lim J(un ) = lim J(um ) = lim J(Ak) . k-mo k k+oo k k+oo Therefore, we have: J(un ) = J(um ) = J(Ak) for all k :_N, where N is a large fixed number. Then (8.1.2.4) becomes (for large k): 260 J(Ak) + an(Ak) - J(Ak) - an(unk) 36k J(Ak) + dQl(Ak) - J(Ak) - an(um ) 3 e" k or Q (A ) - n (u ) > - El—-= - ' 1 k 1 nk — a 61k (8.1.2.5) E k _ " 01(Ak) - 91(umk) 3 —E— - Elk where l H m Elk + 0 and Elk + 0 as k + . Let us substitute for 01 as (using (8.1.2.3)): _ 2 - _ _- 2 - 2 - 2 =||u -u +YII =||u ‘U’YII nk 0 k Ink 0 k 2 2 =||u II +201 :Y)+||7|| nk0 nk0 k k 2 2 =||u II -2 0, k2(t) > 0 . However, in most cases taking R(u) as R(u) = fJIIu(t)II2 dt or 9(u) = |Iu-u*I|2 En will work as the "stabilizing" functional. 8.2. Singular Lineartguadratic Problem Consideration In this section, we will study the ill-posedness Of singular linear time-invariant-quadratic problems. 265 Consider the following problem: minimize J = % Q? 
xtpxdt (8.2.1) subject to x(t) = AX(t) + bu(t), x(0) = x0 , (8.2.2) where l. (A,b) is a controllable pair, 2. p is symmetric, 3. p, A, b are constant matrices, 4. x is an n-vector. The controllability assumption is imposed so that the prob- lem is a reasonable control problem, i.e., if (A,b) is not a controllable pair, then as we discussed in Chapter 5, the cost function could be completely independent Of the control action and independent Of the path, since u does not appear explicitly in (8.2.1) and the problem will be ill-posed and unregularizable (Chapter 5). Definition ([P3],[P4]) - Assume that the problem defined by Equations (8.2.1) and (8.2.2) contains at least one Optimal singular subarc. Let dzq/dt2q[3H/3u], where H is the Hamiltonian, be the lowest-order total derivative Of 266 Hu = BH/Bu in which u appears explicitly. Then, the opt imal control problem is said to be a "singular problem of order q . If u never appears explicitly in the differentiation process, then the Optimal control problem is called an " finite-order singular problem". In linear quadratic controller design, the matrix p in- is usually assumed to be positive semidefinite and in this case, it can be shown that ([P3],[P4]), if a singular Opti- mal exists, then the order of the singular problem is less than or equal to the dimension of the state space, i.e., q 3 n. If p is not a positive semidefinite matrix, then it can be shown ([P3],[P4] that the order of the problem is either less than or equal to n or equal to infinity, i.e. q 3 n or q = w. Furthermore, it has been shown ([P3], [P4]), that the Optimal control for the singular linear quadratic problem of equations (8.2.1) and (8.2.2) is m- order if there exists a symmetric matrix P which satisfi p=AF+FA, (8. Fb = 0 . (8. es 2.3) 2.4) Now let's find the singular interval for the problem: H = prx + AT(Ax+bu) 267 the "minimum-principle": T x* px* + 1*T(Ax*+bu*) 3 x*Tpx* + 1*T(Ax*+bu) or )‘* bu* < A* bu and if x*Tb = 0, then the singular interval exists. Now let's consider the case Of m-Order singular problems, for which we have then the cost functional becomes: _ 1 T T _ 1 T T T _ J — 3 A) x pxdt — 2 f6 x (A F+FA)xdt — = % Q? (xTATFx+xTFAx)dt . (8.2.5) However: x = Ax + bu or 268 V O also we have xT = xTAT + uTbT or O A xTF = xTATF + uTbTF = XTATF where II o O H 2"? v8 ll 0‘ '11 l 0‘ tr] II o Pb (F is symmetric). Therefore (8.2.5) becomes: J = % Q?[(i)TFx + xTijdt _ 1 T <1 T _ 1 T T ‘2oat‘xm'2XFXIo or _ l T T _ _ J - §{(x (T)Fx(T)) - (x (O)Fx(o)] — constant — C . Therefore J is constant and depends only on the initial and 269 final states, and if the initial and final states are fixed, then J is independent Of control u. In other words, the system is controllable therefore u effects the states but J is independent Of states and u as long as the initial and final states are given. Also if we have a singular linear— quadratic problem given as 8.2.1 - 8.2.4, then, because J is independent Of u, there exists an infinite number of controls which will produce the same cost functionals (algebraic ill-posedness). Theorem (8.2.1) - The 00-order singular linear quadratic problem given as 8.2.1 - 8.2.4 is ill-posed. £5993 - We have shown that for the problem given as 8.2.1 - 8.2.4 the cost functional is independent of con- trol function u. Therefore, the changes in u have no ef- fect on J, i.e., for arbitrarily large changes in u we measure no changes in J, and therefore the problem is ill- posed in the sense of Hadamard. 
We have also mentioned that the solution of the problem is not unique, and there- fore the problem is algebraically ill-posed. To show the ill-posedness in the sense of Tikhonov, we consider the following minimizing sequence, where 270 T T +1 2k H 3 t 3 (2k+1) 3 un = -1 (2k-1)3> (9.1.7) According to "minimum principle": H(p*,x*,u*,t,e) = minH(p*,x*,u,t,e) . (9.1.8) ueU Let the control function be expressed by means of 9.1.8 as a function u = F(p,x,t,e) . Let the function g in (9.1.1) be represented by the regular expansion g(x,u,t) = go(x,u,t) + egl(x,u,t) + 82... (9.1.9) The degenerate case arises if 90 does not depend on u, i.e., g(xlurtre) = 90(X(t)rt) + 691(X1u,t) + 62... (9.1.10) 278 In this case the system for e = 0 is not controllable and the Optimal control for e = 0 cannot be found. This system is called "weakly" controllable system which we have discussed in Chapter 5. The functions x(t) and p(t) can be presented in the form x = x°(t) + exl(t) + ... , p(t) = p0(t) + pl(t) + ... From (9.1.1) and (9.1.2) we have 0 %fif = 90(Xopt): XO(O) = X0 , (9-1-11) 0 3F(XO(T)) p (T) = - 8X . (9.1.12) Therefore the Hamiltonian becomes: H = (p0.go(xo.t)) + e[(p0.gl(x0,u,t) + H1] + C(82) where Hl denotes a term which does not depend on u. The minimization Of H according to (9.1.8) is in the first ap- proximation equivalent to minimization of the expression (p0(t),gl(xo(t),u,t) . min utu . (9.1.13) 279 For the zeroth approximation, the state variables and the adjoint vector are determined as the solutions of the initial problem (9.1.11) and (9.1.12), respectively. After this, the control u1 in the first approximation can be found by means of the condition (9.1.13). It will be interesting to study the accuracy estimate of this method [M2], and also apply the regularization technique and in- vestigate the convergence Of the sequence {un} for each iteration according to the following steps: integrating of the system for x(t) with some control function {un}; integrating of the adjoint system for p(t) with the Obtained xn(t) and the same un(t) from t = t to t = T; Obtaining 0 a new control sequence ui(t) which minimizes the Hamil- tonian with respect to u for the Obtained xn(t) and pn(t); estimate the accuracy by evaluating the difference between the minimal value J* and its value J: correSponding tO the control u; to make sure to have 1 * OilJn-JI:0(€) for all admissible controls. Notice that here we have two types Of iterations, one is the iteration of numerical technique and the second is the iteration steps of the minimizing sequence {un} according to the changes of positive integer n from zero to w [M2]. It is important to study the topological restrictions 280 on the functions 9 and ¢ on the metric spaces x and y and on the variable 5 since 8 is an artificial variable and it is not related to any physical properties Of the system (in some practical cases a can be related to some small parameters which have been ignored originally). 9.2. Ill-Posed Infinite-Order Singular Linear-Quadratic Problem with a Fixed Final State In Section 8.2 we studied the 00-Order singular linear- quadratic problem for which the final state was either free or fixed. We showed that, in the case of free final state, if the problem is ill-posed then it is regularizable. However, if the final state is fixed then the ill-posed problem is not regularizable since the cost functional J is constant and independent of control variable u and state variable x(t). 
We can modify this problem by in- troducing an artificial function of u such as g(u) to relate the cost functional to control variable u as follows: J = %'Q?[prx + g(u)]dt (9.2.1) subjected to k(t) = AX(t) + bu(t),x(o) = xo,x fixed (9.2.2) f = with the same conditions given as (8.2.2). 281 Let's find the singular interval: H = prx + g(u) + AT(Ax+bu) . Let ~ 901) = IUI then according to minimum principle we have €0Iu*| + x*Tbu* 3 €0Iu| + x*Tbu . The singular interval exists for We also have 3 = -§- f0T(prx+ Og(u))dt = % IOTE-agE-(xTFx) + 809(u)1 1 T dt + 5 Q) €0g(u)dt (been shown in Section 8.2). Therefore 8 O T _2— f0 Ill(t) Idt ~ J = constant + 282 ~ 8 J* = MinJ = const + 7? Min 4?|u(t)|dt . (9.7.3) uEU we can also let §(u) = u(t) then the singular interval exists for A b = -EO and we Obtain: ~ E0 T J* = MinJ = const + 7? Min Q) u(t)dt . (9.2.4) Now we can apply the regularization technique to system (9.2.2) with the new cost functional given as (9.2.3) and we can study the condition which should be imposed on e and g(u) in order to have the stable solution (in the sense Of Tikhonov) with a minimum difference between the cost functional 3* corresponding to g(u) and J* cor- responding to the Optimal control u*: o 3 [3* - J*I 3 0(t) 283 9.3. Convexity Studies We can study the convexity property of the control function in more detail in order to find the connection between the convexity of the control function un(t) and the convergence Of un(t) to u* due to convexity [T5]. The two most important cases that can be studied for non-convex but continuous and bounded control are: l. The minimum does not exist, and 2. The minimum exists, but the minimizing sequence does not converge to the minimum value. We can also study the algebraic ill-posedness Of some classes Of controllable Optimal problems since, if the problem is not algebraically well-posed then we cannot apply any approximation technique to find the stable (in the sense of Tikhonov or of Hadamard) solution. We can investigate the connection between existence, uniqueness (algebraic well-posedness) and convexity Of the control variable. 9.4. Simulations and Other Techniques We can apply the iterative numerical techniques such as, "the method of steepest descent", "variation Of ex— tremals", "quasi-linearization" and "gradient projection" along with the regularization method to solve some practical 284 problems such as the multiple-input, multiple-output, linear, time-invariant, controllable, minimum-time problem and compare the approximate solution with the theoretical (actual) solution and estimate the computational error. It would be interesting to study the application of other approximation techniques such as, "the selection method", "the method of quasi-solutions", "the method of replacement Of the original equation with an equation close to it," and "the method of quasi-inversion" (which we have seen their application to mathematical ill-posed problems) to ill-posed Optimal control problems [T5]. We can extend the results that we Obtained in Chapter 6, 7 and 8 to more general forms of Optimal control problems. However, in some general cases the applicability of the regularization technique is doubtful due to the complex relationship between the state variable, the control vari- able and the cost functional. There are also methods in which the minimizing series un(t) is explicitly constructed. 
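One explicit construction of such a minimizing sequence is the gradient projection iteration mentioned just below. The following minimal sketch applies it to a toy bound-constrained quadratic functional; the problem data, step size, and iteration count are illustrative assumptions and not part of the dissertation's development.

```python
# Minimal sketch of the gradient projection iteration
#     u_{k+1} = P_U( u_k - s * grad f(u_k) ),   U = { u : |u_i| <= 1 },
# applied to a toy strongly convex quadratic f.  All data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m = 50
Q = rng.standard_normal((m, m))
Q = Q.T @ Q + 0.1 * np.eye(m)                 # symmetric positive definite
g = rng.standard_normal(m)

f = lambda u: 0.5 * u @ Q @ u + g @ u
grad = lambda u: Q @ u + g
project = lambda u: np.clip(u, -1.0, 1.0)     # projection onto the box U

s = 1.0 / np.linalg.norm(Q, 2)                # step 1/L, L = largest eigenvalue of Q
u = np.zeros(m)
for k in range(2001):
    u = project(u - s * grad(u))
    if k % 500 == 0:
        print(f"iteration {k:5d}   f(u_k) = {f(u):.6f}")
```

For an ill-posed functional the same iteration would normally be applied to the smoothed functional J(u) + αΩ(u) rather than to J itself.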
For example, for minimization of a differentiable functional f(x) in a Hilbert space, the "gradient method" or "the method of conjugate gradients" may be used. The convergence of these methods has been proven for certain incorrect problems. So far as suitable methods for incorrect variational problems with bounds ("the gradient projection method," "the Ritz conditional gradient method") are concerned, only particular results are known ([L6],[L7]). It would be of great practical and theoretical interest to obtain further results concerning the strong convergence of these minimization methods for ill-posed problems with bound constraints.

APPENDIX

LIST OF MATHEMATICAL DEFINITIONS

BANACH SPACE - A complete normed linear space.

BOUNDARY POINT - A point x is a boundary point of s if every neighborhood of x contains points of both s and its complement.

BOUNDED LINEAR OPERATOR - A is a bounded linear operator on D_A if there exists a constant C such that ||Au|| ≤ C||u|| for all u ∈ D_A, where ||A|| = least upper bound of ||Au||/||u||, ||u|| ≠ 0.

BOUNDED SEQUENCE - {x_q} is a bounded sequence if |x_q| ≤ r (r finite).

BOUNDED SET - s is contained in some ball about the origin.

c-METRIC SPACE - The space c is defined to be the set of measurable functions f such that ||f||_c = Max |f(t)| < ∞.

METRIC SPACE - Let s be a set. Associate with each pair of elements f, g ∈ s a non-negative number d(f,g) such that, for all f, g, h ∈ s: d(f,g) = d(g,f); d(f,g) ≤ d(f,h) + d(h,g); and d(f,g) = 0 if and only if f = g. Then d is called a metric on s, and s is called a metric space.

NORMED LINEAR SPACE - For x1, x2 ∈ s (s a linear space): ||x1 + x2|| ≤ ||x1|| + ||x2||; ||ax|| = |a| ||x||.

OPEN SET - s contains none of its boundary points.

RELATIVELY COMPACT SET - Let s be a subset of a Banach space B; then s is relatively compact if from every sequence {x_n} in s a subsequence can be chosen which converges in B (the limit need not belong to s).

WEAK CONVERGENCE - Let B be a Banach space. A sequence (f_n) in B is said to be weakly convergent if there is an element f (the weak limit) in B such that lim f*(f_n) = f*(f) for all f* ∈ B* (B* is the dual space).

BIBLIOGRAPHY

[A1]. Arsenin, V. Y., and Ivanov, V. V., "On Solution of Some Integral Equations of the First Kind, Convolution Type, by Regularization Method," USSR Journal of Computational Mathematics and Mathematical Physics, Vol. 8, No. 2, 1968.
[A2]. Athans, M., and Falb, P. L., "Optimal Control," McGraw-Hill Company, 1966.
[A3]. Audley, D. R., and Lee, D. A., "Ill-Posed and Well-Posed Problems in System Identification," IEEE Transactions on Automatic Control, Vol. AC-19, No. 6, 1974.
[A4]. Audley, D. R., and Lee, D. A., "Considerations Related to Ill-Posed and Well-Posed Problems in System Identification," USAF Aerospace Res. Lab., 1973.
[B1]. Bakushinskiy, A. B., "A General Procedure for Constructing Regularizing Algorithms for a Linear Ill-Posed Equation in a Hilbert Space," Journal of Computational Mathematics and Mathematical Physics, Vol. 7, No. 3, 1968.
[B2]. Bell, D. J., and Jacobson, D. H., "Singular Optimal Control Problems," Academic Press, 1975.
[B3]. Bellman, R., Kalaba, R., and Lockett, J., "Dynamic Programming and Ill-Conditioned Linear Systems," J. Math. Anal. Appl., Vol. 10, 1965.
[B4]. Bidaut, M. F., "Existence Theorems for Usual and Approximate Solutions of Optimal Control Problems," J. Optim. Theory Appl., 15, 1975.
[B5]. Bryson, A. E., "Applied Optimal Control," Blaisdell Publishing Company, 1969.
[C1]. Ching, L. T., "Structural Controllability," IEEE Transactions on Automatic Control, Vol. 19, No. 3, 1974.
[C2]. Chow, J. H., "Preservation of Controllability in Linear Time-Invariant Perturbed System," INT. J. Control, Vol. 25, No. 5, 1977.
[C3]. Clements, D. J., "Singular Optimal Control," Springer-Verlag, 1978.
[G1]. Gabsov, R., and Kirillova, L. S., "Singular Optimal Control," Mathematical Concepts and Methods in Science, Vol. 2, 1978.
[G2]. Goncharskiy, A. V., "A Regularizing Algorithm for Ill-Posed Problems with Approximately Given Operator," Journal of Computational Mathematics and Mathematical Physics, 12, 6, 1972.
[H1]. Hestenes, M. R., "Calculus of Variations and Optimal Control Theory," John Wiley and Sons, Inc., 1966.
[H2]. Hestenes, M. R., "Optimization Theory in Finite Dimensional Space," John Wiley and Sons, 1975.
[H3]. Hutson, V., and Pym, J. S., "Application of Functional Analysis and Operator Theory," Academic Press, 1980.
[I1]. Ivanov, V. K., "Integral Equation of the First Kind and an Approximate Solution for the Inverse Problem of Potential," Dokl. AN SSSR, 143, No. 4, 1962.
[I2]. Ivanov, V. K., "Ill-Posed Problems in Topological Space," Siberian Mathematical Journal, Vol. 10, No. 5, 1969.
[I3]. Ivanov, V. K., "Linear Incorrect Problems," Dokl. AN SSSR, 145, No. 2, 1962.
[I4]. Ivanov, V. K., "Incorrectly Formulated Problems," Matem. Sb., 61, No. 2, 1963.
[I5]. Ivanov, V. K., "Uniform Regularization of Unstable Problems," Matem. Sb., 7, No. 3, 1966.
[I6]. Ivanov, V. V., "A General Approximate Method of Solving Linear Problems," Dokl. AN SSSR, 143, No. 3, 1962.
[J1]. Johnson, C. D., and Gibson, J. E., "Singular Solutions in Problems of Optimal Control," IEEE Transactions on Automatic Control, January 1975.
[K1]. Kokotovic, P. V., "Controllability and Time-Optimal Control of Systems with Slow and Fast Modes," IEEE Trans., 20, 1975.
[K2]. Kokotovic, P. V., "Singular Perturbation of Linear Regulators: Basic Theorems," IEEE Trans., Vol. AC-17, No. 1, 1972.
[K3]. Krein, S. G., "Classes of Correctness of Some Boundary Problems," Dokl. AN SSSR, 114, No. 6, 1957.
[K4]. Kryanev, A. V., "Solution of Ill-Posed Problems by the Method of Successive Approximations," Dokl. AN SSSR, 210, 1, 1974.
[K5]. Kryanev, A. V., "An Iterational Method of Solving Ill-Posed Problems," Dokl. AN SSSR, 14, 1, 1973.
[K6]. Kyrillova, L. S., and Piontkovskii, A. A., "Incorrect Problems in Optimal Control Theory (Survey)," Automatic and Remote Control, No. 10, 1968.
[L1]. Lavrentyev, M. M., "The Inverse Problem in Potential Theory," Dokl. AN SSSR, 106, 3, 1956.
[L2]. Lavrentyev, M. M., "Formulation of Some Incorrect Problems of Mathematical Physics," Matem. Sb., 7, No. 3, 1966.
[L3]. Lavrentyev, M. M., "The Accuracy of Solution of Systems of Linear Equations," Matem. Sb., 34, No. 2, 1954.
[L4]. Lavrentyev, M. M., "Integral Equations of the First Kind," AN SSSR, 127, No. 1, 1959.
[L5]. Lavrentyev, M. M., "On Some Ill-Posed Problems of Mathematical Physics," Siberian Division, Acad. Sci. of USSR, 1962.
[L6]. Levitin, E. S., "Convergence of Minimizing Series in Problems of the Conditional Extremum," Dokl. AN SSSR, 168, No. 5, 1966.
[L7]. Levitin, E. S., "Constrained Minimization Methods," USSR Computational Math. and Math. Physics, No. 5, 1966.
[L8]. Leones, "Advanced In Control Systems."
[L9]. Lee, E. B., and Markus, L., "Foundations of Optimal Control Theory," John Wiley and Sons, Inc., 1967.
[M1]. Mayne, D. Q., and Murdoch, P., "Modal Control of Linear Time-Invariant Systems," INT. J. Control, Vol. 11, No. 2, 1970.
[M2]. Moiseev, N. N., and Chernousko, F. L., "Asymptotic Methods in the Theory of Optimal Control," IEEE Transactions on Automatic Control, Vol. AC-26, No. 5, 1981.
[M3]. Morozov, V. A., "Linear and Non-Linear Ill-Posed Problems," Contributions to Science and Technology, Mathematical Analysis, 11, Moscow, VINITI Press, 1973.
[M4]. Morozov, V. A., "Solution of Functional Equations by the Regularization Method," Dokl. AN SSSR, 167, No. 3, 1966.
[N1]. Nashed, M. Z., "Ill-Posed Problems in Systems Analysis and Identification," Proc. 1979 Conference on Information Sciences and Systems, The Johns Hopkins University, Baltimore, 1979.
[P1]. Phillips, D. L., "A Technique for the Numerical Solution of Certain Integral Equations of the First Kind," Journal of the Association for Computing Machinery, 9, No. 1, 1962.
[P2]. Polyak, B. T., "Theorem on the Existence and Convergence of Minimizing Series for Extremal Problems in the Presence of Bounds," Dokl. AN SSSR, 166, No. 2, 1966.
[P3]. Powers, W. F., and Brotins, R., "The Infinite-Order Singular Problem," Optimal Control Applications and Methods, Vol. 1, 1980.
[P4]. Powers, W. F., "On the Order of Singular Optimal Control Problems," Journal of Optimization Theory and Applications, Vol. 32, No. 4, 1980.
[R1]. Replogie, J., and Holcomb, B. D., "The Use of Mathematical Programming for Solving Singular and Poorly Conditioned Systems of Equations," J. Math. Anal. Appl., Vol. 20, 1967.
[R2]. Rutman, R. S., "Ill-Posed Inverse Problems of Control Theory," Dept. of Elec. Eng., Southeastern Mass. University, North Dartmouth, Mass., 1977.
[S1]. Seidman, T. I., "Nonconvergence Results for the Application of Least-Squares Estimation to Ill-Posed Problems," Journal of Optimization Theory and Applications, Vol. 30, No. 4, 1980.
[S2]. Seidman, T. I., "Time-Invariance of the Reachable Set for Linear Control Problems," Journal of Mathematical Analysis and Applications, 72, 17-20, 1979.
[S3]. Stakgold, I., "Green's Functions and Boundary Value Problems," Wiley, 1979.
[T1]. Tikhonov, A. N., "Methods of Solving Optimal Control Problems," Journal of Computational Math and Math Physics, 7, 2, 1967.
[T2]. Tikhonov, A. N., "Methods of Solving the Inverse Antenna Theory Problem," Computational Math and Programming, XIV, Moscow State University Press, 1970.
[T3]. Tikhonov, A. N., "Some Problems of Optimal Control and Stable Methods for Solving Them," Dokl. AN SSSR, 163, No. 3, 1965.
[T4]. Tikhonov, A. N., "On Solution of Ill-Posed Problems and Methods of Regularization," Soviet Math. Dokl., Vol. 151, No. 3, 1963.
[T5]. Tikhonov, A. N., and Arsenin, V. Y., "Solutions of Ill-Posed Problems," Winston-Wiley, New York, 1977.
[T6]. Tikhonov, A. N., "Regularization of Incorrectly Formulated Problems," Dokl. AN SSSR, 153, No. 1, 1963.
[T7]. Tikhonov, A. N., "Stability of Inverse Problems," Dokl. AN SSSR, 39, No. 5, 1944.
[T8]. Tikhonov, A. N., "Methods of Regularizing Optimal Control Problems," Dokl. AN SSSR, 162, No. 4, 1965.
[T9]. Tikhonov, A. N., "Stability of the Problem of Optimizing Functionals," J. Math and Math Physics, 6, No. 4, 1966.
[T10]. Tikhonov, A. N., "Approximate Solution of Fredholm's Integral of the First Kind," J. Math and Math Physics, 4, No. 3, 1964.
[T11]. Tsypkin, Y. Z., "Algorithms of Optimization with a Priori Uncertainty," Proc. 7th International Congress of IFAC, 1978.
[T12]. Twomey, S., "On the Numerical Solution of Fredholm Integral Equations of the First Kind," J. Aplc. Comp. Math., No. 9, 1963.
[V1]. Vincent, T. L., and Goh, B. S., "Terminality, Normality, and Transversality Conditions," Journal of Optimization Theory and Applications, Vol. 9, No. 1, 1972.
[Z1]. Zolezzi, T., "Some Topics in the Mathematical Theory of Optimal Control," J. Control Theory and Topics in Functional Analysis, Vol. II, 1976.
[Z2]. Zolezzi, T., "On Convergence of Minima," Unione Math. Italiana Bollettino, Series 4, No. 8, 1973.
[Z3]. Zolezzi, T., "Characterizations of Some Variational Perturbations of the Abstract Linear-Quadratic Problem," SIAM J. Control and Optimization, Vol. 16, No. 1, 1978.
[Z4]. Zolezzi, T., "A Characterization of Well-Posed Optimal Control Systems," Appl. Math. Optim., 4, 1978.