This is to certify that the dissertation entitled A SEPARATION PRINCIPLE FOR THE CONTROL OF A CLASS OF NONLINEAR SYSTEMS presented by Ahmad Nazir Atassi has been accepted towards fulfillment of the requirements for the Ph.D. degree in Electrical Engineering. Major professor. Date: May 3, 1999.

A SEPARATION PRINCIPLE FOR THE CONTROL OF A CLASS OF NONLINEAR SYSTEMS

By

AHMAD NAZIR ATASSI

A DISSERTATION
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
Department of Electrical and Computer Engineering
1999

ABSTRACT

A SEPARATION PRINCIPLE FOR THE CONTROL OF A CLASS OF NONLINEAR SYSTEMS

By AHMAD NAZIR ATASSI

It is shown that the performance of a globally bounded partial state feedback control of a certain class of nonlinear systems can be recovered by a sufficiently fast high-gain observer. The performance recovery includes recovery of asymptotic stability, the region of attraction, and trajectories. We deal with stability with respect to an equilibrium point and stability with respect to a compact, positively invariant set.

High-gain observers have been used in the design of output feedback controllers due to their ability to robustly estimate the unmeasured states while asymptotically attenuating disturbances. The available techniques for the design of high-gain observers can be classified into three groups: pole-placement algorithms, Riccati equation-based algorithms, and Lyapunov equation-based algorithms. In this work, we show that the abovementioned separation results hold for all these observer design techniques.

To my mother, Hanaa, whose life and death are an inspiration
To my father, Fawaz, who showed me the way
To my uncle, Wa'el, my mathematics professor
To my grandmother, Thurayya, the mother of all
To my wife, Aivy, whose love gives my life a meaning
To Jeanne and Mary who repainted my memory
To myself for never giving up

ACKNOWLEDGMENTS

I would like to thank my advisor, Professor Hassan Khalil, for his invaluable help and for sharing his vast knowledge with me. I also would like to thank my beloved wife, Aivy, for being my pillar of support and my shoulder of comfort. Finally, I thank my brothers, Sammar and Faraj, for their continuous support and encouragement.

TABLE OF CONTENTS

LIST OF FIGURES  viii

1 Introduction  1

2 A Separation Principle for the Stabilization of a Class of Nonlinear Systems  11
2.1 Introduction  11
2.2 The Class of Systems  13
2.3 Partial State Feedback Control  17
2.4 High-Gain Observer  18
2.5 Performance Recovery  20
2.5.1 Boundedness  21
2.5.2 Ultimate Boundedness  27
2.5.3 Trajectory Convergence  29
2.5.4 Recovery of asymptotic stability of the origin  31
2.6 Examples  41
2.6.1 Example 1  41
2.6.2 Example 2 - Inverted Pendulum  42
2.6.3 Example 3 - VTOL aircraft  45
2.7 Conclusion  48

3 A Separation Principle for the Control of a Class of Nonlinear Systems  60
3.1 Introduction  60
3.2 Definitions and Converse Lyapunov Results  61
3.3 Problem Formulation  66
3.4 Performance Recovery - Semiglobal Separation Results  68
3.4.1 Boundedness  70
3.4.2 Ultimate Boundedness  72
3.4.3 Trajectory Convergence  73
3.4.4 Uniform Asymptotic Stability  75
3.5 Regional Separation Results  82
3.6 Conclusion  85

4 A Separation Principle for the Control of a Class of Nonlinear Systems - Examples  86
4.1 Introduction  86
4.2 Robust Control I - Finite-time Convergence to a Set [10]  88
4.3 Robust Control II - Finite-time Convergence to a Set [50, Section 3]  94
4.4 Tracking [28]  103
4.5 Servomechanism [26, 37, 38]  108
4.6 Servomechanism [21]  118
4.7 Adaptive Control [27, 2, 1]  125
4.8 Conclusion  134

5 Separation Results for the Control of Nonlinear Systems Using Different High-gain Observer Designs  135
5.1 Introduction  135
5.2 High-gain Observers - A Comparative Study  137
5.2.1 Pole Placement/Time-structure Assignment  138
5.2.2 Riccati Equation-Based Algorithms  138
5.2.3 Lyapunov Equation-Based Algorithm  144
5.3 More Separation Results  145
5.4 Conclusion  147

6 A Smooth Converse Lyapunov Theorem for Robust Stability  149
6.1 Introduction  149
6.2 Definitions and the Main Results  150
6.2.1 Strong Stability  150
6.2.2 Uniform Asymptotic Stability  153
6.3 Proof of Theorem 6.1  154
6.3.1 Case of a Complete System  155
6.3.2 Case of a Forward Complete System  164
6.3.3 Smoothing of Lyapunov Functions  165
6.4 Proofs  166
6.4.1 Proof of Lemma 6.1  166
6.4.2 Proof of Theorem 6.2  170

7 Conclusions  180

A Technical Lemmas  185

B Cheap Control and H-infinity Disturbance Attenuation  190

C Stability Results  194
C.1 Proof of Theorem 3.1  194
C.2 Proof of Theorem 3.3  196

BIBLIOGRAPHY  201

LIST OF FIGURES

2.1 Recovery of the region of attraction: ε* = 0 (solid), ε* = 0.007 (dashed), ε* = 0.057 (dash-dotted), and ε* = 0.082 (dotted)  50
2.2 Peaking of the velocity: x2 (solid), x̂2 (dashed)  51
2.3 Trajectory convergence: state feedback (solid); output feedback with ε = 0.0015 (dashed), ε = 0.001 (dash-dotted), and ε = 0.0001 (dotted)  52
2.4 Effect of nonlinearity in the observer - without uncertainty: state feedback (solid), with linear observer (dashed), with nonlinear observer (dotted)  53
2.5 Effect of nonlinearity in the observer - with uncertainty: state feedback (solid), with linear observer (dashed), with nonlinear observer (dotted)  54
2.6 Trajectory convergence - without uncertainty: state feedback (solid); output feedback with ε = 0.02 (dashed), ε = 0.008 (dash-dotted), and ε = 0.002 (dotted)  55
2.7 Trajectory convergence - with uncertainty: state feedback (solid); output feedback with ε = 0.02 (dashed), ε = 0.008 (dash-dotted), and ε = 0.002 (dotted)  56
2.8 High-gain vs. nominal observer - without uncertainty: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)  57
2.9 High-gain vs. nominal observer - with uncertainty: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)  58
2.10 High-gain vs. nominal observer - importance of saturation: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)  59

CHAPTER 1

Introduction

A separation principle is the property of a control system that allows the design of a dynamic feedback controller in two separate steps. In the first step we design a state feedback controller that achieves some desired properties, such as asymptotic convergence to an equilibrium point or asymptotic tracking of reference signals by certain outputs. This design assumes that all the state variables are available for measurement. In the second step we design a state estimator (observer) using measurements of some outputs (functions of the state variables). The state is then replaced by its estimate in the state feedback controller to produce the dynamic output feedback controller. The separation property facilitates the design of feedback controllers when only output measurements can be used. The literature provides several separation principles formulated for different classes of systems. Hereafter we give a quick survey of the main ones.

For the case of linear time-invariant control systems

ẋ = Ax + Bu
y = Cx

if the system is stabilizable and detectable, then a dynamic output feedback control can be designed such that the closed-loop system has the origin as a globally asymptotically stable equilibrium point. The control input takes the form u = -Kx̂, where K is designed to stabilize the closed-loop system under state feedback and x̂ is the estimate of the state x provided by the observer

x̂' = Ax̂ + Bu + H(y - Cx̂)

where H is the observer gain. This observer (Luenberger observer) replicates the dynamics of the system and is driven by the output estimation error. The state estimation error x(t) - x̂(t) approaches zero asymptotically. This separation principle means that the poles of the system and those of the observer can be assigned separately, and that the poles of the closed-loop system are the union of the state feedback poles and the observer poles. Thus, if global stabilization is achieved by some linear state feedback u = -Kx, the corresponding output feedback u = -Kx̂ will globally recover this property.
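To make the linear case concrete, the following is a minimal numerical sketch (not taken from the dissertation). The plant matrices and the pole locations are illustrative assumptions; the point is only to verify that the closed-loop eigenvalues under the observer-based controller are the union of the state-feedback poles and the observer poles.

```python
# Minimal numerical illustration of the linear separation principle.
# The plant (A, B, C) and the chosen pole locations are illustrative
# assumptions, not taken from the dissertation.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [2.0, 0.0]])      # open-loop unstable second-order plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix          # state feedback u = -K x
H = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T    # observer gain

# Closed-loop dynamics in the coordinates (x, e), with e = x - xhat:
#   xdot = (A - B K) x + B K e
#   edot = (A - H C) e
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - H @ C]])

print(np.sort_complex(np.linalg.eigvals(Acl)))
# eigenvalues = {-1, -2} union {-5, -6}: controller and observer poles
# can indeed be assigned separately.
```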
We understand from the above discussion that the statement of a separation principle is a statement of recovery, by an output feedback controller, of performance achieved by a state feedback controller. In the case of linear systems the recovery includes recovery of asymptotic stability of an equilibrium point and asymptotic tracking of a reference. However, we do not achieve closeness, in some sense, of trajectories under output feedback to trajectories under state feedback. This point will be illustrated later on.

In what follows we always assume that the origin is an equilibrium point of the system considered and that all the vector fields and state functions mentioned are at least continuous in some appropriate regions of interest. We also abbreviate asymptotic stability (or asymptotically stable) by AS, local or locally by L, and global or globally by G, so we may have, for example, LAS or GAS.

Vidyasagar in [53] formulated a separation principle for a general nonlinear MIMO (multiple-input multiple-output) control system

ẋ = f(t, x, u)
y = r(t, x)

This principle states that if f(t, ., .) is uniformly locally Lipschitz and if the system is stabilizable and weakly detectable, then the closed-loop system composed of the stabilizing state feedback and a weak detector has the origin as a LAS equilibrium point. Stabilizability implies the existence of a state feedback law u = h(t, x) that renders the origin of the system LAS. On the other hand, weak detectability implies the existence of a dynamical system (weak detector), driven by the input u and the output y, whose state is not very far from the state of the original system for initial conditions close to the origin. This result is local, i.e. valid only in a neighborhood of the origin, and not constructive since it does not suggest a construction of the detector. The author also stated a global version of his separation principle for the case of exponential stabilizability. This global version required, in addition to the above conditions, that the state estimation error decay exponentially and that the vector fields involved be globally Lipschitz. It is noteworthy that the formulation of a separation principle requires the system to have some stabilizability and detectability properties. In the examples hereafter we highlight this idea.

It is useful for what follows to define a feedback linearizable system. It is well known [20] that if the nonlinear system

ẋ = f(x) + g(x)u
y = h(x)

has a vector relative degree (r1, ..., rp), then it can be transformed into the form

ξ' = Aξ + B[f1(ξ, z) + g1(ξ, z)u]
ż = f2(ξ, z, u)
y = Cξ

where g1(., .) is invertible in the domain of interest and y is the only measured output. If the sum of the relative degrees equals the dimension of the system, it is called fully linearizable; otherwise it is called input-output linearizable. The first part of the system is a chain of integrators driven by a nonlinear function of the states and the inputs. The second part is called the zero dynamics.

Khalil and Esfandiari in [11] used a separation approach for a fully linearizable MIMO system with a high-gain observer that estimates the output and its derivatives. This observer is a chain of integrators of the form [1]

ŷ1' = ŷ2 + (a1/ε)(y - ŷ1)
ŷ2' = ŷ3 + (a2/ε^2)(y - ŷ1)
...
ŷn' = (an/ε^n)(y - ŷ1) + f0(ξ̂, ẑ) + g0(ξ̂, ẑ)u

driven by the output estimation error and by nonlinearities that reflect our partial knowledge of the system at hand.

[1] For convenience, we give the observer equations for SISO systems.
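As an illustration of this observer structure, the following sketch simulates a second-order (n = 2) version of the above for a hypothetical plant. The plant nonlinearity, feedback gains, saturation level, and initial conditions are assumptions made only for the example; the observer copies the known input term and sets the unknown part of the nonlinearity to zero.

```python
# Sketch of the SISO high-gain observer described above, for a
# hypothetical second-order plant with partially unknown dynamics.
# Plant, gains, and initial conditions are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2 = 2.0, 1.0          # s^2 + a1 s + a2 is Hurwitz (double pole at -1)
eps = 0.01                 # observer parameter; smaller -> faster observer

def closed_loop(t, w):
    x1, x2, xh1, xh2 = w
    u = -np.clip(2.0 * xh1 + 2.0 * xh2, -5.0, 5.0)   # globally bounded (saturated) feedback
    e1 = x1 - xh1                                    # output estimation error, y = x1
    return [x2,
            np.sin(x1) + u,                          # true dynamics; sin(x1) unknown to the observer
            xh2 + (a1 / eps) * e1,
            u + (a2 / eps**2) * e1]                  # known input term kept, unknown term set to 0

sol = solve_ivp(closed_loop, [0.0, 5.0], [1.0, 0.0, 0.0, 0.0], max_step=1e-4)
print("final estimation errors:",
      sol.y[0, -1] - sol.y[2, -1], sol.y[1, -1] - sol.y[3, -1])
# For small eps the estimates converge quickly, but xh2 exhibits an O(1/eps)
# transient peak; saturating u keeps this peak from being injected into the plant.
```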
The observer gain can be regulated through the parameter ε in such a way that the observer is fast enough that the state under output feedback stays close to the state under state feedback, and thus the stabilizing property of the controller is not lost. This observer introduces peaking into the state variables, see [11], which requires the disabling (saturation) of the controller outside some region of interest. The disabling period is made short by an appropriate choice of the observer gain. Notice that the stabilizability and detectability properties are immediately satisfied due to the particular structure of the system studied.

Tornambe in [52] proposed a local separation principle for a class of input-output linearizable non-minimum phase (unstable zero dynamics) SISO nonlinear control systems. The system should be observable in the region of interest, meaning that the state can be written as a function of the input, the output, and their derivatives. Considering the derivatives of the input and the output as state variables transforms the system into a double chain of integrators that can be linearized by state feedback. The fact that some of the states of the transformed system form a chain of integrators composed of an output and its derivatives allows the use of a high-gain observer, similar to the one mentioned above, to estimate these states. Two differences exist between the two observers: one is that Tornambe's observer did not use any knowledge of the nonlinearities of the system, and the other is that it did not deal with the peaking issue, which makes the region of attraction of the system shrink as the observer gain grows.

Teel and Praly in [49] proposed an interesting and constructive global separation principle for a wide class of autonomous SISO (single-input single-output) nonlinear control systems. It is a combination and extension of the ideas used in Esfandiari and Khalil [11] and Tornambe [52]. They stated that a globally stabilizable and uniformly completely observable (UCO) system can be semi-globally stabilized by dynamic output feedback. The UCO property implies that the state of the system can be written as a function of the input, the output, and a finite (not necessarily equal) number of their derivatives. This function is called the observability map. Semi-global stabilizability means that given any compact set in the state space, a dynamic output feedback controller can be designed such that this compact set is contained in the region of attraction of the closed-loop system. The observer used is the high-gain observer proposed by Esfandiari and Khalil in [11]. One major limitation of this separation principle is that the exact knowledge required of the observability map implies a lack of robustness of the controller.

The last three results dealt with systems that can be transformed into a linear system using a function of the input, the output, and their derivatives, and an appropriate state feedback. For the sake of completeness we mention two more separation results that deal with systems that are affine in the inputs and whose unforced dynamics (for zero input) are Lyapunov stable (i.e. dissipative). The first [14] is a global separation principle for a class of SISO bilinear systems of the form

ẋ = Ax + u(Bx + b)
y = Cx

that are stabilizable and whose unforced dynamics are observable (the unforced dynamics are linear).
Structural conditions were provided to check the stabilizability and observability properties and two globally stabilizing feedback controllers were proposed. The two observers replicate the dynamics of the system and are driven by the output estimation error; one requires the input to be small and the other requires the input to be persistent. The second [33] is a global separation result for a class of MIMO systems of the form m i: = ALE—l- Z ”(313)212- i: y = C3: that are stabilizable by bounded inputs ( a control input was suggested) and whose unforced dynamics are observable (linear dynamics). Also, structural conditions were given to check the stabilizability of the system. The observer replicates the dynamics of the systems and is driven by the output estimation error. This [result was shown to generalize the one for the bilinear system. The above separation results were mainly concerned with AS to an equilibrium point. The following separation results are for cases of adaptive control and ser- vomechanism where not all of the states approach the origin and the system depends on time-varying external signals (reference signals, disturbances, etc.). In [27, 2, 1] Khalil and Aloliwi considered a 8180 minimum phase nonlinear system which can be globally represented by an I/ O model of the form P P y(")=fo(-)+ z f.(.)6.+(go(.>+ z g.(.)0.-)u(m>+d(t,a,.) 2': 1 2': 1 where the different nonlinearities depend on the input, the output, and their derivatives, 0 is an unknown vector of parameters that belong to a compact set, and d(.) is a locally Lipschitz disturbance. Adaptive state feedback controllers were designed so that the output can asymptotically track a bounded reference signal with bounded derivatives. Later, a linear high-gain observer was used to semi-globally recover what was achieved under state feedback. In some of the scenarios studied the parameter estimation error 0 = 0 — 0 and some of the states were only proven to be bounded. In other scenarios the tracking error as well as the parameter estimation error were only ultimately bounded or were small in the mean-square sense. In [26] Khalil used the internal model principle to regionally (in a region of interest) solve the servomechanism problem for a class of SISO uncertain systems that can be transformed into a normal form with no zero dynamics. The proposed dynamic state feedback controller achieved asymptotic convergence of the tracking error but the overall state approached an invariant zero-error manifold. A similar result was developed in [38] for a class of SISO systems represented by an I/O equation which depends nonlinearly on a vector of time varying disturbances. In [21] Isidori has shown that his previously proposed solution for the general structurally stable regulation problem, see [22], can be coupled with the idea of high- gain observer suggested by Khalil in [11] to solve a problem of robust semiglobal output regulation in the presence of parameter uncertainties ranging over compact sets. The class of systems considered is of the form é = Z(M)Z+P0($1,w,#) 9': = Fx+Gu+P(z,z,w,u) e = H27 - q(w,u) where (F, G, H) is a chain of integrators, P(.) is a lower triangular vector of nonlinearities, Z ()1) is Hurwitz in a compact set, and w is generated by a linear neutrally stable exosystem. It is noteworthy that other separation results can be formulated for a specific application as is the case for polymerization reactors discussed in [54]. 
The purpose of this survey of literature is to show the classes of systems covered by different separation results and the observers used to achieve the goal of successful output feedback. Having done that we can exactly situate our version among the others and point out its merits compared to them. In a concise way we can say the following about our work: (1) class of systems: it includes I / O linearizable systems, fully linearizable systems, and observable systems (that can be represented by a higher order ODE in the input and the output); (2) observer: the robust high-gain observer of [11]; (3) state feedback: any globally bounded state feedback that stabilizes the system with respect to an equilibrium point or a compact positively invariant set. No Lyapunov function associated with the state feedback is needed; . (4) recovery properties: it recovers the AS property, trajectories, as well as the region of attraction (as opposed to local or global); (5) it unifies and generalizes the cases of [49, 52, 11, 2, 1, 27, 26, 37, 38] which encompass a wide class of systems and a wide class of control techniques and objectives. The goal of this work is to formulate, in a generic way, a separation principle for a wide class of nonlinear systems. This principle is based on the idea of fast estimation of the outputs and their derivatives cristalized in the high-gain observer concept. The resulting output feedback controller will be shown to recover a wide range of performance measures achieved by the state feedback controller. It is meant by ”a generic way” that the statement of the separation principle does not depend 9 on a specific state feedback nor on the knowledge of a Lyapunov function associated with this state feedback. This work can be divided into three major sections. The first formulates a sepa- ration principle for the case where the state feedback controller achieves asymptotic stability to an equilibrium point and is given in Chapter 2. The second formulates a separation principle for the case of asymptotic stability to a compact, positively in- variant set and is given in Chapters 3 and 4. The third shows that different observer design techniques yield separation results similar to those of Chapters 2 and 3, and is given in Chapter 5. Chapter 6 gives converse Lyapunov results for stability with respect to sets. 10 CHAPTER 2 A Separation Principle for the Stabilization of a Class of Nonlinear Systems 2.1 Introduction A few years ago, Esfandiari and Khalil introduced in [11] a new technique in the design of robust output feedback control for input-output linearizable systems [11]. The basic ingredients of this technique are (1) A high-gain observer that robustly estimates the derivatives of the output; (2) A globally bounded state feedback control, usually obtained by saturating a continuous state feedback function outside a compact region of interest, that meets the design objectives. The global boundedness of the control protects the state of the plant from peaking when the high-gain observer estimates are used instead of the true states. This technique has been the impetus for several results we have obtained over the past few years. It was used in [11] and [30] to achieve stabilization and semiglobal 11 stabilization of fully-linearizable systems, in [26] to design robust servomechanisms for fully linearizable systems, in [37] and [38] to extend the results of [26] to systems having nontrivial zero dynamics. 
It was used also in adaptive control [27], variable structure control [41], and speed control of induction motors [31]. As the results of [11] became known, other researchers adopted the technique in their work. Teel and Praly [49, 50] and Lin and Saberi [35] used it in a few papers to achieve semiglobal stabilization. Jankovic [23] used it in an adaptive control problem. Isidori [21] used it to unify his pioneering work on servomechanisms [22] with Khalil's work [26]. Jiang, Hill, and Guo [24] used a reduced-order high-gain observer to achieve semiglobal stabilization for a nonlinear benchmark example.

In most of these papers the controller is designed in two steps. First, a globally bounded state feedback control is designed to meet the design objective. Second, a high-gain observer, designed to be fast enough, recovers the performance achieved under state feedback. This recovery is shown using asymptotic analysis of a singularly perturbed closed-loop system. Our goal in the current chapter is to develop this recovery property in a generic form that can be applied to any globally bounded stabilizing state feedback control. In particular, we want a separation theorem that is independent of the state feedback design and is derived under the least restrictive assumptions. To increase the utility of such a theorem, we want to allow for model uncertainty. Finally, we want to demonstrate that the performance recovery achieved with the high-gain observer is more than just asymptotic stability recovery: it includes recovery of the region of attraction and of the trajectories achieved under state feedback. These features of the theorem distinguish our work from the interesting separation theorem proved by Teel and Praly [49], where it is shown that global stabilizability by state feedback and observability imply semiglobal stabilizability by output feedback. The result of [49] assumes perfect knowledge of the model and shows only recovery of asymptotic stability and the semiglobal stabilization property.

The rest of the chapter is organized as follows. Section 2.2 introduces the class of systems with which the chapter is concerned. Section 2.3 states the requirements on the state feedback control. Section 2.4 introduces the high-gain observer used to estimate the states. Section 2.5 discusses and proves the recovery of performance by output feedback. Section 2.6 illustrates the previous results through simulations.

2.2 The Class of Systems

We consider a multivariable nonlinear system represented by

ẋ = Ax + Bφ(x, z, u)   (2.1)
ż = ψ(x, z, u)         (2.2)
y = Cx                 (2.3)
ζ = q(x, z)            (2.4)

where u ∈ U ⊆ R^m is the control input, y ∈ Y ⊆ R^p and ζ ∈ R^s are measured outputs, and x ∈ X ⊆ R^r and z ∈ Z ⊆ R^ℓ constitute the state vector. The r × r matrix A, the r × p matrix B, and the p × r matrix C are given by

A = block diag[A1, ..., Ap],  B = block diag[B1, ..., Bp],  C = block diag[C1, ..., Cp]

where, for 1 ≤ i ≤ p, Ai is the ri × ri matrix with ones on the superdiagonal and zeros elsewhere, Bi = [0 ... 0 1]^T is ri × 1, Ci = [1 0 ... 0] is 1 × ri, and r = r1 + ... + rp; that is, (A, B, C) represent p chains of integrators. This system satisfies the following assumption:

Assumption 2.1 The functions φ : X × Z × U → R^p and ψ : X × Z × U → R^ℓ are locally Lipschitz in their arguments over the domain of interest. In addition, φ(0, 0, 0) = 0, ψ(0, 0, 0) = 0, and q(0, 0) = 0.

Assumption 2.1 guarantees that the origin is an equilibrium point of the open-loop system. The main source of the system (2.1)-(2.4) is the normal form of a nonlinear system having a vector relative degree (r1, ..., rp).
It is well known [20] that if the nonlinear system has a vector relative degree (r1, ..., rp), then it can be transformed into the form

ξ' = Aξ + B[f1(ξ, z) + g1(ξ, z)u]
ż = f2(ξ, z, u)
y = Cξ

In this case g1(., .) is nonsingular in the domain of interest, y is the only measured output, and equation (2.4) is dropped.

Another source, where equation (2.4) is relevant, arises when the dynamics are extended by augmenting a series of integrators at the input side [27, 49, 52]. Reference [27] considers a single-input single-output system modeled by the nth-order differential equation

y^(n) = f0(.) + g0(.) μ^(n-ρ)

where μ is the input, y is the output, and f0 and g0 are functions of y, y^(1), ..., y^(n-1), μ, ..., μ^(n-ρ-1). Augmenting (n - ρ) integrators at the input side, denoting their states by z_i = μ^(i-1), setting u = μ^(n-ρ) as the control input of the augmented system, and taking x_i = y^(i-1), results in a system of the form (2.1)-(2.4) with r = n and ℓ = n - ρ. In this case all the components of z are measured; hence q(x, z) = z in (2.4).

Another example of the use of extended dynamics can be found in [49]. Reference [49] considers a single-input single-output nonlinear system where complete uniform observability guarantees that the state χ can be expressed as χ = h(y, ..., y^(n_y), μ, ..., μ^(n_u)), where μ is the input, y is the output, and h(.) is a known function. Furthermore, y^(n_y + 1) = a(χ, μ, ..., μ^(m_u)), where a is a known function. The dynamics are extended by adding l_u = max{n_u, m_u} integrators at the input side. Taking x_i = y^(i-1), z_i = μ^(i-1) for 1 ≤ i ≤ l_u, and u = μ^(l_u), the system can be represented as

ẋ = Ax + B a(h(x, z), z)
ż = A0 z + B0 u
y = Cx

where (A, B, C) and (A0, B0) represent chains of integrators of appropriate lengths determined by n_y and l_u, respectively. In this case φ is independent of u and all the components of z are measured; hence q(x, z) = z in (2.4).

The model (2.1)-(2.4) may also arise in models of mechanical and electromechanical systems where displacement variables are measured while their derivatives (velocities, accelerations, etc.) are not measured. Examples of such models can be found in [31, 15, 36, 19, 51, 24]. A model of the induction motor [31] can be represented in the form (2.1)-(2.4) with x = [δ, δ', δ'']^T, where δ = θ - θ_ref is the rotor position error, and z comprises the rotor flux and stator current. The measured variables y and ζ are the rotor position error and the stator current, respectively. Examples of models that can be put in the normal form are the models given in [15] and [36] for the inverted pendulum-on-a-cart system. These models, taking the cart displacement as the measured output, have relative degree two but are non-minimum phase. In [19] and [51], the models given for the ball and beam system fit the form (2.1)-(2.4). These systems cannot be represented in the normal form because, taking the ball's position as one of the measured outputs, the relative degree is not well defined. A last example of systems fitting the model (2.1)-(2.4) is the model of the benchmark rotational/translational actuator given in [24], where the system has a well defined relative degree with respect to the cart's position, but only locally. The design of the globally stabilizing state feedback controller of [24] does not transform the system into the normal form.

2.3 Partial State Feedback Control

Our goal is to design a feedback control to stabilize the origin of the closed-loop system using only the measured outputs y and ζ.
We follow a two-step approach to this design problem. We first design a partial state feedback control that uses measurements of x and ζ. Then we use a high-gain observer to estimate x from y. We allow the state feedback control to be dynamic, which is the case, for example, in the adaptive control of [27], the speed control of induction motors of [31] which uses a flux observer, and the stabilizing control of [11] which includes a zero-dynamics observer. The state feedback control is assumed to be of the form

ϑ' = Γ(ϑ, x, ζ)   (2.5)
u = γ(ϑ, x, ζ)    (2.6)

A non-dynamic state feedback control u = γ(x, ζ) will be viewed as a special case of (2.5)-(2.6) by dropping equation (2.5). We allow any state feedback design that satisfies the following three properties:

Assumption 2.2
(1) Γ and γ are locally Lipschitz functions in their arguments over the domain of interest, Γ(0, 0, 0) = 0 and γ(0, 0, 0) = 0;
(2) Γ and γ are globally bounded functions of x;
(3) The origin (x = 0, z = 0, ϑ = 0) is an asymptotically stable equilibrium point of the closed-loop system.

The state feedback control may have additional properties such as conformity to certain performance measures on the trajectories, and/or robustness to a certain class of uncertainties. The global boundedness requirement is typically achieved by saturation of Γ(.) and γ(.), or saturation of their x-input, outside a compact region of interest. The boundedness of γ(.) may also follow from a design under control constraints.

2.4 High-Gain Observer

To implement the control (2.5)-(2.6) we use

ϑ' = Γ(ϑ, x̂, ζ)   (2.7)
u = γ(ϑ, x̂, ζ)    (2.8)

where the state estimate x̂ is generated by the high-gain observer

x̂' = Ax̂ + Bφ0(x̂, ζ, u) + H(y - Cx̂)   (2.9)

The observer gain H is chosen as

H = block diag[H1, ..., Hp],  Hi = [α_1^i/ε, α_2^i/ε^2, ..., α_{ri}^i/ε^{ri}]^T  (ri × 1)   (2.10)

where ε is a positive constant to be specified and the positive constants α_j^i are chosen such that the roots of

s^{ri} + α_1^i s^{ri-1} + ... + α_{ri-1}^i s + α_{ri}^i = 0

are in the open left-half plane, for all i = 1, ..., p. The function φ0(x, ζ, u) is a nominal model of φ(x, z, u). If φ is a known function of x, ζ, and u, we can take φ0 = φ. On the other hand, if such a nominal model is not available, we can take φ0 = 0, which results in a linear observer. The function φ0 is required to satisfy the following assumption:

Assumption 2.3 φ0(x, ζ, γ(ϑ, x, ζ)) is locally Lipschitz in its arguments over the domain of interest and globally bounded in x. [1] Moreover, φ0(0, 0, 0) = 0.

[1] The need for this global boundedness property will be made clear in footnote 3 in Section 2.5.1. Moreover, global boundedness of φ0 can always be achieved by saturation outside a compact set of interest in the subspace of x.

The high-gain observer suggested above is a full-order one. It is possible to design a reduced-order high-gain observer of order (r - p) [45]. We start by partitioning the state vector x as x = [y^T, v^T]^T, where y = [y1, ..., yp]^T, and rewriting the system (2.1)-(2.2) as

ẏ = A12 v
v̇ = A22 v + B22 φ(y, v, z, u)

where A22, B22, and A12 have the same structure as A, B, and C, respectively, but with all the ri's replaced by (ri - 1)'s. Then the reduced-order observer is

ẇ = (A22 - L A12)(w + Ly) + B22 φ0(y, v̂, ζ, u)
v̂ = w + Ly

where

L = block diag[L1, ..., Lp],  Li = [β_1^i/ε, β_2^i/ε^2, ..., β_{ri-1}^i/ε^{ri-1}]^T  ((ri - 1) × 1)

The positive constants β_j^i are chosen such that the roots of

s^{ri-1} + β_1^i s^{ri-2} + ... + β_{ri-2}^i s + β_{ri-1}^i = 0

are in the open left-half plane, for all i = 1, ..., p.
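For concreteness, the following is a minimal numerical sketch (not from the dissertation) of how the full-order gain Hi in (2.10) can be computed for a single chain of r integrators: pick the α's from any Hurwitz polynomial, here generated from illustrative pole locations, then scale by powers of 1/ε.

```python
# Sketch: computing the high-gain observer gain H_i in (2.10) for one
# chain of r integrators. The desired (eps = 1) pole locations are an
# illustrative assumption; any Hurwitz choice of alphas works.
import numpy as np

def hgo_gain(poles, eps):
    """alpha_j from the monic polynomial s^r + a1 s^(r-1) + ... + ar with the
    given roots, then H = [a1/eps, a2/eps^2, ..., ar/eps^r]."""
    alphas = np.poly(poles)[1:]            # coefficients a1, ..., ar
    r = len(alphas)
    return alphas / eps ** np.arange(1, r + 1)

r = 3
eps = 0.01
H = hgo_gain([-1.0, -2.0, -3.0], eps)

# Check: for the chain of integrators, A - H C has its eigenvalues at
# (chosen poles)/eps, i.e. they move into the far left-half plane as eps -> 0.
A = np.diag(np.ones(r - 1), k=1)           # r x r chain of integrators
c = np.array([1.0, 0.0, 0.0])
print(H)
print(np.linalg.eigvals(A - np.outer(H, c)) * eps)   # approximately the chosen poles
```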
Remark 2.1 (1 } In the case where one or more of the p channels that compose the system (2.1)— (2.3) have relative degrees equal to one, then, there is no need to estimate their states; they can be used as they are in the output feedback controller. This will not modify the forthcoming analysis. (2) We can use the measured y in the output feedback controller instead of its estimate g = C5: even when we construct a full-order high-gain observer. This, also, will not change the forthcoming analysis. 2.5 Performance Recovery In this section we show that the output feedback controller (2.7)—(2.9) recovers the performance of the state feedback controller (2.5)—(2.6) for sufficiently small 6. The performance recovery manifests itself in three points. First, the origin (a: = 0, z = 20 0, 29 = 0, it = 0) of the closed-loop system under output feedback is asymptotically stable. Second, The output feedback controller recovers the region of attraction of the state feedback controller in the sense that if ’R is the region of attraction under state feedback, then for any compact set S in the interior of 72 and any compact set Q Q RT, the set S x Q is included in the region of attraction under output feedback control. Third, the trajectory of (I, z,19) under output feedback approaches the trajectory under state feedback as e —-> 0. For the clarity of the proof we will follow the way used in [49] which establishes asymptotic stability of the origin in three steps. The first step is to show boundedness of trajectories, second, ultimate boundedness of these trajectories, and third, local asymptotic stability. This allows us to deal with asymptotic stability as a local property that will require some additional assumptions on the nonlinearities of the closed-loop system, stated as a condition on the modeling error, which is not the case for boundedness and ultimate boundedness of trajectories. Thus, the proof will be divided into four sections. First we prove recovery of boundedness of trajectories; second, we prove recovery of ultimate boundedness of trajectories; third, we prove trajectory convergence; and fourth, we prove recovery of asymptotic stability of the origin. In the latter case, we distinguish between asymptotic and exponential stability in order to impose less conservative restrictions on the modeling error. 2.5.1 Boundedness Let us first, for the purpose of analysis, replace the observer dynamics by the equiv- alent dynamics of the scaled estimation error _Iv-%j nzj—W 21 for 1 S i S p and 1 g j 3 72" Hence, we have :i: = :1: -— D(e)n where 77 : [77111W1771r11177p1117lprplT D(e) = block diag[Dl,...,Dp] D,- = diag[eri — 1,...,1]rz. x r, The closed-loop system can be represented by j: = A2: + Bd(.r, 2, 7(19, 2: — man, 0) (2.11) 2 = to, 2,709,513 - Den, 4)) (2.12) .9 = no, a: - 0(6),), 4) (2.13) a; = A077 + eBg(:z:, 2,19, D(e)n) (2.14) where 9($,z,19,D(€)77) = d(x,z,'r(19,i,C))-¢o(i,C,7(19,i,C)) and TAO = D'1(e)(A — HC)D(e) is an r x r Hurwitz matrix. The initial states are (a:(0),z(0),19(0)) = (:ro,zo,t90) E S, and 12(0) 2 230 E Q, where S is any compact set in the interior of ’R and Q is any compact subset of RT; thus, we have 77(0) 2 D—1(e)(.vo — 530) = no. Remark 2.2 In the case of a reduced-order high-gain observer, the scaled error will be ”a —2,-,- Ti - 1 —j 772:7“:6 for 1 g i g p and 1 Sj S (r,- — 1). We also get the same structure of (211)—(214) but with 7'; replaced by (Ti — 1), :1: replaced by (y, v), and it replaced by (y, a) where A v = v — D(e)n. 
The results of this paper will be the same for this reduced-order 22 observer. The system (2.11)—(2.14) is a standard singularly perturbed one, and 77 = 0 is the unique solution of (2.14) when 6 = 0. The reduced system, obtained by substituting 77 = 0 in (2.11)~(2.14), is nothing but the closed—loop system under state feedback. For simplicity we write the system (2.11)-~(2.13) as x=sa¢mn> 05) where x = [$T, zT,19T]T and x(0) = [51%, 231,193]? Then, the reduced system is given by X=hflfl> mm) The boundary-layer system, obtained by applying to (2.14) the change of time vari- able 7' = 5 then setting 6 = 0, is given by 1’2_ (17' — A07) (2.17) To fix the notation, let (x(t,e),n(t,e)) denote the trajectory of the system (2.11)— (2.14) starting from (x(0),n(0)). The recovery of boundedness of trajectories is summarized in the following theorem: Theorem 2.1 Let Assumptions 2.1-2.3 hold; then, there exists e’i‘ > 0 such that for every 0 < e 5 31‘, the trajectories (x, 7)) of the system (211)—(214) starting in S x Q are bounded for allt Z 0. Proof: The recovery of boundedness can be shown in two steps. First, we show the positive invariance of an appropriately chosen set A. This set is arbitrarily small in the direction of the error variable 1). Second, we show that, any closed-loop trajectory, starting in the compact set 8 x Q, enters the positively invariant set A in finite time. 23 We know that the origin of (2.16) is asymptotically stable with a region of attrac- tion ’R. Then, the converse Lyapunov theorem of Kurzweil [32, Theorem 7] 2 assures the existence of a C1 Lyapunov function V(x) and three positive definite functions U1(x), U2(x), and Ug(x), all defined and continuous on R, such that: U1(X) S V(X) S U2(X) (2.18) lim U1(x) = 00 (2.19) X—) %fr(x,0) s —U3(x) (2.20) for all X E ’R. The properness of V(x) in R guarantees that with any finite c > maxX e S V(x), the set D = {x E ’R : V(x) S c} is a compact subset of ’R. and Sis in the interior of (2. For the boundary-layer system we define the Lyapunov function W(17) = nTPon, where P0 is the positive definite solution of the Lyapunov equation POAO + AgPO = —I. This function satisfies Amin(P0)||n||2 .<. W07) .<. Amax(PO)||n||2 (2.21) aw 2 — < — 2.22 87} A077 _ H77” ( ) Let A = Q x {W(n) 3 p62}. Due to Assumptions 2.1, 2.2, and 2.3 (i.e., continuity of the nonlinearities and global boundedness of P(.) and y(.) in x) we have, for all x E 9 (continuous functions are bounded on compact sets) and all n 6 RT (global 2This theorem is built around a stability notion called Strong Stability in an open set. The proof of Theorem 12 of [32] shows that asymptotic stability implies strong stability in the region of attraction which is an open invariant set. Thus, we can apply Theorem 7. 24 boundedness of the controller), llfr(x, D(€)n)ll S k1 (2-23) “900 D(6)77)|| S k2 (2.24) where k1 and k2 are positive constants independent of 6. Moreover, for any 0 < E < 1, there is L1, independent of e, such that, for all (x,17) E A and every 0 < 6 g '6', we have llfr(x, D(€)n) - fr(x,0)|| S L1|ln|| (2-25) In the rest of the paper we always consider 6 _<_ E. We start by showing that there exist positive constants p and £1 (dependent on p) such that the compact set A is positively invariant for every 0 < 6 g 61. 
This can be done by verifying that - 6V V S EEC-Mao) + 5’93 (226) for all (x, n) 6 W00 = C} X {W(n) S peg}, and - 1 w s -;||n||2 + 2|ln|l||P0||llB|lk2 (2.27) for all (x, n) E Q X {W(n) = p62}, where lC3 = L1L2\/P/’\min(P0): ||P0|l = Amag;(P0), and L2 is an upper bound for ”-68%“ over 9. Taking p = 16k§||P0||3 and 61 = fl/k3, where 3 = minX E 30 U3(X), it can be shown that, for every 0 < e S 61, we have V g 0 (2.28) for an (m) 6 {Von = c} x W07) 5 p9}, and W s 0 (2.29) 25 for all (x,17) E {V(X) g c} x {W(7]) 2: p62}. From (2.28) and (2.29) we conclude that the set A is positively invariant. Now, we consider the initial state (x(0),:i:(0)) E S x Q. It can be verified that the corresponding initial error 77(0) satisfies ”77(0)” 3 k/€(7'ma2: _ 1) for some non- negative constant k dependent on S and Q, where 7'me = max {7‘1, ..., rp}. Since the vector field fr(., .) is continuous, we can write t we) — x<0) = [O fr(x(r,6),D(€)n(T,6))dT (2.30) Then, using (2.23) and the fact that x(0) is in the interior of D, we have ||X(t, 6) - X(0)|| S kit (2-31) as long as x(t,e) E 9. Thus, there exists a finite time T0, independent of e, such that x(t, 6) E Q for all t E [0, T0]. During this time interval we have 3 - 1 W S — 2—Cllnll2, for W07) 2 p62 Therefore, W(n(t,e>) s gm: _ 1) exp (-01t/e) (2.32) where 01 = 1/2||P0|| and 02 = k2||P0||. Choose 62 > 0 small enough that def 6 02 _ _1 __ < _ , T(€) 1 n( 21” am) To (2 33) for all 0 < e g 52. We note that 62 exists since the left-hand side of the preceding inequality tends to zero as 6 tends to zero. It follows that W(n(T(e), 6)) S p62, for 3 Here we use an inequality similar to (2.27), obtained using (2.24), which is valid for (x,17) E Q x RT. Inequality (2.24) requires global boundedness of 450 in 2:. 26 every 0 < 6 S 62. Taking 6’1‘ 2 min (6,61, 62) guarantees that, for every 0 < 6 S 61‘, the trajectory (x(t, 6), 77(t, 6)) enters A during the interval [0, T(6)] and remains there for all t Z T (6) Thus, the trajectory is bounded for all t Z T(6). On the other hand, for t E [0, T(6)], the trajectory is bounded by virtue of inequalities (2.31) and (2.32).<1 Remark 2.3 The constant 67; depends on the sets 8 and Q. 2.5.2 Ultimate Boundedness Next, we show that trajectories of the system (2.11)—(2.14), starting in S x Q, come arbitrarily close to the origin as time progresses. This is summarized in the following Theorem: Theorem 2.2 Under the conditions of Theorem 2.1, given any 6 > 0, there exist 65 = 65(6) > 0 and T1 = T1(§) such that, for every 0 < 6 S 65, we have llx(t,6)ll + ||n(t,€)|| S E, Vt 2 T1 (234) Proof. From the proof of Theorem 2.1 we know that, for every 0 < 6 S 6?, the trajectory of the closed—loop system, starting from (x(0),:i:(0)) E S x Q, is inside the set A for all t Z T(6), where A is 0(6) in the direction of the variable 17. Take 63 = min{6’1",§\/x\mm(P0/p)}. Then, 63 2 63(5) S (Y and for every 0 < 6 S 63 we have l|n(t, e)” s 5/2, w 2 mg) d=f Tc) (2.35) In what follows we continue working with the Lyapunov function defined in the proof of Theorem 2.1. It can be shown that, for all (x, 77) E A, we have V g -U3(x) + k36 (2.36) 27 where k3- — L1\/p/’\min(P0) maxX E 9(‘9 0x H) Thus, we conclude that - 1 d f v s — §U3(x), forx i {x : U3(X) g 21.36 2 2(6)} (2.37) Since U3(x) is positive definite and continuous, the set {X : U3(X) S [.1.(€)} is a compact set for sufficrently small 6. Let c0(6) = maxU3(X) S ”(€){V(x x;)} c0(6) is nondecreasing and lim,E _, 0 c0(6) = 0. 
Consider the compact set {X : V(X ) S 60(6 6.)} We have {X : U3(x) S 72(6)} C {x : V(X) S c0(6)}. Choose 64 = 64(5) S 61 small enough such that, for all 6 S 64, the set {X : U3(x) S p(6)} is compact, the set {X : V(x) S c0(6)} is in the interior of Q, and {X : V(X) S 00(6)} C {X3 IIXII S {/2} (2-38) Then, for all x E 0 but X ¢ {x : V(X) S c0(6)}, we have an inequality similar to (2.37). Thus, we conclude that the set {X : V(X) S c0(6)} x {17 : W(17) S p62} is positively invariant and every trajectory in Q x {17 : W(17) S p62} reaches {x : V(X) S c0(6)} x {17 : W(77) S p62} in finite time. In other words, given (2.38), there exists a finite time T = T(5) such that, for every 0 < 6 S 64 le(t, all g 6/2. Vt 2 T (239) Take 65 = 65(6) 2 min (63,64) and T1 2 T1 (5) = max (T, T), then (2.34) follows from (2.35) and (2.39).<1 In what follows we use the results of Theorem 2.2. Although it is understood that different values of 5 give different values of 6’5, we use the same notation for simplicity. 28 2.5.3 Trajectory Convergence Let Xr(t) be the solution of (2.16) starting from x(0). The following theorem shows that x(t, 6) converges to Xr (t) as 6 ——> 0, uniformly in t, for all t 2 0. Theorem 2.3 Under the conditions of Theorem 2.1, given any 5 > 0, there exists (5 > 0 such that, for every 0 < 6 S 63‘ we have ||X(t, 6) - Xr(t)|| S 6, W 2 0 (2-40) Proof. We divide the interval [0,oo) into three intervals [0,T(6)], [T(6),T2], and [T 2,00), where both T(6) and T2 are to be determined later, and show (2.40) for each interval. This approach gives more insight into the factors that come into play in each of these intervals. 0 From Theorem 2.2 we know that there exists a finite time T2 2 T(6), indepen- dent of 6, such that, for every 0 < 6 S 6’2‘, we have le(t, 6)“ s é/2, Vt 2 T2 (2.41) From the asymptotic stability of the origin of the reduced system we know that there exists a finite time T2, independent of 6, such that ler(t)ll S 6/2, Vt 2 T2 (2.42) Take T2 = max{T2, T2}. Then, using the triangular inequality along with (2.41) and (2.42), we conclude that, for every 0 < 6 S 6’5, we have llx(t, 6) - Xr(t)|l S E, Vt 2 T2 (2.43) 29 0 From the proof of Theorem 2.1 we know that llX(ta€) - x(0)|l S W during the interval [0, T(6)]. Similarly, it can be shown that ||Xr(t) - x(0)|l S kit during the same interval. Hence, le(t, 6) — Xr(t)|| S 2k1T(6), Vt E [0, T(6)] (2.44) Since T(6) —> 0 as 6 —) 0, there exists 0 < 65 S 6’5 such that, for every 0 < 6 S 65, we have lleta‘E) - Xr(t)|| S E, W E [0,T(e)] (2-45) 0 Over the interval [T(6), T2], the trajectory x(t, 6) satisfies X = f7~(x, D(6)77(t, 6)), ‘with initial condition X(T(6), 6) Over the same time interval, the trajectory Xr (t) satisfies )2 = fr(X, 0), with initial condition XT(T(€)) From (2.44), we know that leme), e) — more)“ 5 2k1T déf 6(e) 30 where (5 (6) —-> 0 as 6 —> 0+. By continuous dependence of the solutions of differential equations on parameters over compact time intervals [28, Theorem 2.5], we conclude that ||X(t, 6) - Xr(t)ll S 6(6) exp[L(T2 - T(6))l + %\/p/Amme{exp[L(T2 — T(em — 1} s [6(a)+%fi/Amin(Po)eJexpiL(T2—T(e))] (2.46) where L is the Lipschitz constant of f7~(.,0) on 0. 
Thus, given (2.46), there exists 0 < 66 S 6’5 such that for every 0 < 6 S 66 we have ||X(t,6) — Xr(t)|| S E, W E [T(6),T2] (2-47) Take 65 = min(65, 66), then, using (2.43), (2.45), and (2.47) we conclude (2.40).<1 2.5.4 Recovery of asymptotic stability of the origin We treat first the case when there is no modeling error; then we proceed to the more general case when modeling error is present. In order to avoid very restrictive conditions on the modeling error, we separate the case where the origin of (2.16) is asymptotically stable from the case where it is exponentially stable. At this stage of the chapter we place ourselves in a small ball of radius 5 > 0 around the origin (x, 17) = (0, 0); the value of 5 will be determined later on. Theorem 2.2 guarantees that trajectories of the system (2.11)-—(2.14), starting in S x Q, enter this ball after a finite time and stay thereafter. Case 1: We deal with the case where the origin of (2.16) is asymptotically stable, and there is no modeling error; i.e., we perfectly know 43 in (2.1)—(2.4) and use it as 31 450. We know [32, Theorem 7] that there exists a Cl Lyapunov function V and a positive definite function U3, both defined on a ball B(0, r1) (_1 R for some r1 > 0, such that for all x E B(0, r1) 6V afdxfi) S -U3(x) (2-48) Choose 5 < r1; then given Assumptions 2.1, 2.2, and 2.3, we can show that, for all (Xfll) E BUM) X {”77” S 5} = A1, we have ||9(X,D(6)n)|| S L4.||77ll (2.49) Consider the composite function 17(X, 17) = V(X)+(W(T]))1/2 and choose 0 < 6: S 6’5 such that 1/(4EZ‘NIPOH) — L2L3 -— [IPOIIL4/(/Amin(P0) > 0, where L2 is an upper ov 73? set. Then, we conclude that, for every 0 < 6 S 6:: and for all (x, 77) E A1, we have bound for over A1 and L3 is a Lipschitz constant of f1~(., .) in 7'] over the same 4 1 v s -U3(X) — —-——l|nll (2.50) 4e IIP0|| Thus, the origin of system (2.11)—(2.14) is asymptotically stable. We summarize the above conclusion in the following theorem: Theorem 2.4 Let Assumptions 2.1—2.3 hold and assume that (150 2 ¢- Then, there exists 6: > 0 such that, for every 0 < 6 S 631‘, the origin of the system (2.11)--(2.14) is asymptotically stable. Remark 2.4 Theorem 2.4 covers the results obtained by Teel and Praly in [49]. It also gives a nonlinear generalization of the linear separation principle. During the proof of cases 2 and 3, we will use the following fact which is a special case of Young’s Inequality [18, Theorem 156]: 32 Fact: Vx,y E R+, Vp > 1, V60 > 0 we have 1 _P_. 131/3 -f6p+ (60)p0y” 1 0 where p0 = p—l—T' Case 2: We deal with the case where the origin of (2.16) is exponentially stable, whether or not we know (15. In this case, there exists a C1 Lyapunov function V2(x) [28, Theorem 3.13] defined over B(0,r2) Q ’R, for some r2 > 0, and four positive constants a1, a2, a3, and 624 such that, for all X E B(0, r2) we have «11”an < v2 0 is to be determined, as a Lyapunov function candidate for the system (2.11)—(2.14). Choose 5 < r2; then, given Assumptions 2.1, 2.2, and 2.3, we have, for all (x,17) E B(0, 5) x {”77” S 5} = A2, “906 D(6)77)|| S lelxll + lelnll (254) Using (2.52), (2.53), (2.54), and Young’s inequality, it can be shown that for all (x,77) E A2, we have ; (1 fl v s «231W — 521m“? — blllxn’ — b2||n||2 (2.55) 33 where bl = 613/2 - a4L7/60 - _BL5HP0lla 02 = U/(QC) — (1412760 - (4L5 + 2L6)HHP0H, L7 is a Lipschitz constant of fr(., .) in 77 over A2, and 60 > 0. 
Now, choose 6 small enough and 60 large enough such that b1 > 0, then, it can be shown that there exists 0 < 63 S 6’5 such that, for every 0 < 6 S 63, we have t/ s — min (a3/2,fi/(26)) [IIxH2 +l|nll21 (2.56) Thus, we can conclude that the origin of (2.11)—(2.14) is exponentially stable. The foregoing result is summarized in the following theorem: Theorem 2.5 Let Assumptions 2.1—2.3 hold and suppose the vector field fr(X,0) is continuously differentiable around the origin. Moreover, assume the origin of the closed-loop system under state feedback is exponentially stable. Then, there exists 6’5" > 0 such that, for every 0 < 6 S 63, the origin of the system (2.11)—(2.14) is exponentially stable. Case 3: We deal with the case where the origin of (2.16) is asymptotically, but not exponentially, stable. We start with an example that shows the need for some conditions on the modeling error in order for the output feedback controller to recover asymptotic stability of the origin. In addition, it gives an idea about how to formulate these conditions. Example: Consider the system .131 = .132 (2.57) .62 = f(x)+u (2.58) y = 1131 (2.59) 34 Suppose we know a nominal model f0(x) = "‘11 of f (x), and that the state feedback u = —x2 globally stabilizes system (2.57)w(2.59) for the actual function f. To implement the output feedback controller we use the high-gain observer , 2 - x1 = x2+;(x1—x1), . 1 . $2 = —x1+u+ :2-(131—1‘1) By passing to the error coordinates and applying the output feedback u = —§:2, we get the closed-loop system 931 = x2 (2.60) 62 = f($) - $2 + 722 (2-61) . 2 1 771 = 7771 + :72 (2.62) . 1 772 = f(x) + x1 — (2 + 6)771 (2.63) Suppose now that the actual nonlinearity f (x) = —x:13. By linearization of the system (2.60)—(2.63) around the origin, we notice that, for any 6 E (0,1), the linearized system has a positive eigenvalue, thus, the origin of the output feedback system is unstable. Clearly the Lyapunov analysis we performed in the exponentially stable case fails in this example. To see the source of the problem, let us note that the state feedback control u = —x2 stabilizes the origin of (2.57)—(2.59) when f = —x:1’, and the Lyapunov function V(x) = %x% + x1332 + x3 + $511 satisfies V = —x‘11 — mg (2.64) 35 Now, notice that, around the origin, we have If (x) — f0(x)[ ~ [231]. This modeling error is bigger than the absolute value of the derivative of the Lyapunov function V(x) along trajectories of (2.57)—(2.59). This observation motivates the upper bound on the modeling error which will be stated in Assumption 2.4. Realizing that the modeling error does not cause a problem in the exponentially stable case, we want to focus our attention on the part of the dynamics that is asymptotically, but not exponentially, stable. Towards that end we use the center manifold Theorem [28]. We need the vector field fr(X,0) in (2.16) to be twice continuously differentiable. Let us write the system (2.11)—(2.14) in the form 56 = fr(X,0)+A1(X,77) (2-65) 677 = A077 + 6BA2(X, 77) + 68600 (2.66) where Albee) = fr(x,D(6)n)-fr(x,0) A2007?) = l¢o($,C,7(t9,x,C))-¢o(i,C,7(69,i,C))l+ [65(33, 2, 7(19, in O) - 45(66, 3,709,130” 5(x) = ¢($:Z,’7(l9,$aC))—¢0($,C,’Y(l9,$,C)) Since the origin of (2.16) is asymptotically, but not exponentially, stable, Theorem 3.13 of [28] shows that %%(O, 0) has all its eigenvalues with either zero or negative real parts. 
Thus, there is a change of variables [67", 2T]T = TX (2.67) 36 such that (2.16) can be written as x 2 A15: + 91(x,2) Nl' = A25 + 92(57, 2) where A1 has all its eigenvalues with zero real parts and A2 is Hurwitz. In addition, Theorem 4.1 of [28] guarantees the existence of a continuously differentiable center manifold 2 :2 h(x), for all Hill S (11, for some (11 > 0. Shifting the center manifold to the origin via the change of variable a) = Z — h(x) (2.68) puts (2.16) in the form if: = A12 + 91(2, 77(2)) + N1(x,w) (2.69) d) = Azw + N2(i‘,w) (2.70) where llNz'(i,w)ll S Aillwll, Vll(f=,w)ll S al2 (2-71) for some d2 > 0. The positive constants A,, i = 1, 2, can be made arbitrarily small by choosing d2 small enough. By inserting all these changes into the system (2.11)—(2.14) we end up with 1'1“: — 90(27) + N1(2‘:,w) + M1(x,w, 77) (2.72) d) 2 142(1) + N2(:T:, w) + M267, w, 77) (2.73) 677 = A077 + 6BM3(:Z‘, w, 77) + 6861(22) + 6862(x, w) (2.74) 37 where 61(1) d__ef 6(x)|w ___ 0 is the projection of the modeling error 6(.) onto the center manifold, 62(x,w) d.—‘if 6(x) — 600),, = 0, [M1T,M2TJT = T231, M3 = A2, and 90(23) = A157: + 91(2, h(x)). Corollary 4.2 of [28] shows that the origin of the reduced system 57 = 90(5?) (2-75) is asymptotically stable. The condition on the modeling error, needed to establish the asymptotic stability of the origin of (2.11)-—(2.14), can be stated as follows: Assumption 2.4 There exists a 01 function V3(:'i:) defined on B5;(0, r3), a ball around :7: = 0 contained in the projection of 0 onto the subspace of 2:, that satisfies, for all :7: E Bi“), 73), 590(6) 5 -05(||i||) (276) “61(6)“ 3 coagunu) (2.77) [%’a s clagmrzn) (278) for some positive constants a, b < 1 such that a + b = 1, where (15 is a class K: function defined on [0, r3], c0 2 0, and c1 > 0. The existence of a Lyapunov function satisfying (2.76) is guaranteed by the converse Lyapunov theorem [28, Theorem 3.14], but what we need here is for (2.77) and (2.78) to be satisfied as well. Remark 2.5 When the reduced system (2. 75) is one-dimensional, we can take V3(:‘E) = - I62 g0(y)dy. Then 9523 = —g0(i') and Assumption 2.4 is satisfied, with a = b = 1/2, if |61(:r)| S d3 [g0(x)[ for some d3 > 0; i.e., [(510%)] cannot approach the origin faster than [g0(§:)|. 38 The recovery of asymptotic stability can now be stated as follows: Theorem 2.6 Let Assumptions 2.1—2.4 hold. Let the origin of the closed—loop sys- tem under state feedback be asymptotically, but not exponentially, stable, and let the vector field fr(X,0) be twice continuously difierentiable around the origin. Then, there exists (6 > 0 such that, for all 0 < 6 S 6’6, the origin of (211)—(214) is asymptotically stable. Proof. Consider the Lyapunov function candidate V(x,w, 77) = V3(2’:) + (wTPgw)0 + (W(77))0, where P2 is the positive definite solution of the Lyapunov equation P2A2 + A5132 = —1, and o =-. 1/(2a) > 1/2. Let 5 < min (d1, d2, 73) and let 6 S £5, then according to Theorem 2.2 there exists a finite time T3 after which we have llx(t, 6)” + I]77(t, 6)” S min (5/(2||T|[(1 + L)),5) where L is a Lipschitz constant of h(.) over Bi“), r3). 
Then, using (2.67) and (2.68) we can show that llflt, 6)“ + llw(t,6)ll (1+ L)lli‘(t,6)ll + |l2(t,6)ll |/\ < 2<1+ LineTv, e), 2TTn |/\ 2(1+ L)||T|||lx(t,6)ll |/\ (E, VtZT3 Due to Assumptions 2.1, 2.2 and 2.3 we have, for all (22,722, 77) E BE(O,5) X Bw(0, 5) x {Hull S E} =43, ||52(i,w)|| S lelwll (2-79) M1(i,w,77) S L9||T||ll77|| (2.80) M2(fi,w,n) IIM3(i,w,n)ll S L10||77|| (2.81) 39 Using (2.71), (2.76)—(2.78), and (2.79)—(2.81), we can show that, for all (in), 77) E A3, we have V s -05(llill) + polluwnagumn + p1lln||ag(llill) - pawn?“ + _ P p3A2llw||20 + p4||n|lllwll20 1— éllnllQ" + pawn?” + p7a‘5‘(llill)llnll20 -1+ psllwllllnll2" *1 (2.82) where Po = 01, P1 =61||T||L9, P2 =071 p3 = 2072HP2II, p4=2072||P2||||THL9, 95:073 705 = 20‘74llpollL1o, p7 = 2074||P0|l00, pg = 2074||P0||L8 are positive constants, and where X W ’71 = ()‘min(P2))U ‘1 71 = ”1’2”" ’1 2'. P 0 _1 ___ A . P U "1 72 ”2” 1) 17021 72 (mm( 21)) )if1/2So<1 73 = (Amin(P0))U _ 73 = ”Poll" _ 74 = upon" '1 , 74 = ammo)” -1 , The positive constants A1 and A2, defined by (2.71), can be made arbitrarily small by choosing 5 small enough. Next, we separate the five cross-product terms in (2.82) by repeatedly using Young’s Inequality with p = 20 and 60 = q1, Q2, q3, q4, and q5, respectively. Con- sequently, given that a = 1/(2a) and a + b = 1, we have all the terms in ”77“ or in ”call with the power 20 and all the terms in a5(||:'1':||) with the power 1. Now, choose A1, A2 and q1, q2, Q3, q4, and q5 such that —1/2 + pOA1(q1)p1 + 701(612)p1 + P7/614 < 0 and —p2/2 + p0/\1/611+ 03A2 + 734013)”1 + 708/615 < 0, where 40 p1 = 1/(20 — 1). Then, it can be shown that there exists 0 < 6’6 S 6’5 such that, for every 0 < 6 S 6’5 we have —p5/(26) + p1/Q3 + p6 + 707(q4)p1 + p8(q5)p1 < 0 which implies that for every 0 < 6 S 6’5 and for all (in), 77) E A3, we have - 1 _ P2 20 as 20 < —— — — , — —— . v _ 205(H33ll) , lel ,6 llnll (2 83) Thus, the origin of the system (2.11)~——(2.14) is asymptotically stable.<1 Remark 2.6 Theorems 2.4, 2.5 and 2. 6, along with Theorems 2.1 and 2.2, show the recovery of the region of attraction. 2.6 Examples We apply our technique to different systems in order to illustrate the theoretical results and go beyond the theory to demonstrate some reasonable intuitions. We also take advantage of these examples to show that the results obtained in this chapter apply not only to input-output linearizable systems as in the previous work [11, 27, 26, 30, 31, 38, 37, 41] but to any kind of system that fits the model (2.1)—(2.4). 2.6.1 Example 1 We consider a second order system having an exponentially unstable mode, together with a bounded linear controller that achieves a finite region of attraction. The system is 5131 = 1:2 (2.84) x2 = 21131 + 10 tanh (u) (2.85) where the control is u = —x1 - x2. 41 We consider a full-order high-gain linear observer (i.e., 450 = 0) with (11 = 02 = 1. In this example we show how the output feedback controller recovers the region of attraction achieved under state feedback. Figure 2.1 shows the region of attraction under state feedback control, in addition to three compact subsets that are recovered using the high-gain observer. In each case the compact subset is specified, then a design parameter 6* is found through multiple simulations at different points of the subset such that for every 6 S 6* the output feedback controller is able to recover the given subset; i.e., it is a part of the region of attraction of the new closed-loop system. 
The bound 6* is tight in the sense that for 6 > 6* there is a part of the given set that is not included in the region of attraction. The bounds 6*’s for these subsets are 0.082, 0.057, and 0.007, respectively, starting from the smallest subset. Notice that the bigger the subset the smaller the bound 6*. In all cases we take 52(0) = 0. 2.6.2 Example 2 - Inverted Pendulum We consider the inverted pendulum-on-a-cart problem given in [15]. The system Consists of an inverted pendulum mounted on a cart free to move on a horizontal plane. The equations of motion are given by: i'l = :62 (2.86) , 1 . 2 . x 2: —mgsrn x cos x +mlx Sln x — 2 M+msin2($3)l ( 3) ( 3) 4 ( 3) bxg + u] (2.87) (E3 = 274 (2.88) , 1 . 2 . x = M+m gsm .73 —mlx Sln it cos x + 4 [(M+msin2(x3))[( ) ( 3) 4 ( 3) ( 3) bx2 cos (x3) — ucos (x3)] (2.89) 42 where M is the mass of the cart, m is the mass of the ball attached to the free end of the pendulum, l is the length of the pendulum, g is the gravitational acceleration, b is the coefficient of viscous friction opposing the cart’s motion, x1 is the cart’s displacement, x2 is the cart’s velocity, x3 is the the angle that the pendulum makes with the vertical, and x4 is the pendulum’s angular velocity. The values of the different parameters of the model are M = 1.378Kg, m = 0.051Kg, g = 9.81m/sec2, l = 0.325m, and b = 12.98Kg/sec. The nominal value of b is b0. It is shown in [15] that the state feedback control u = [mg sin (:63) cos (x3) — mlxi sin (x3) + boxg + (M + msin2 (x3))v] (2.90) v = —900x2 + 900[3.22x1 + 12273 + 7.44(lx4 + x2 cos (x3))] (2.91) 4 stabilizes the origin . Let the measured outputs be (x1, 233). We use a full order high-gain observer to estimate all the state variables, and use these estimates in the stabilizing control. In order to avoid peaking in the state variables, induced by peaking of the observer variables, we saturate the control input such that our region of interest is included in the system’s region of attraction; we use u =2 100 tanh ( / 100). The nominal function p0 is made globally bounded by saturating x2 and $4 at 15 and 20, respectively. We design a full-order high-gain observer with multiple poles at —1/6. Figure 2.2 shows how the velocity estimate peaks due to the difference in the initial conditions between the position and its estimate. This is done for a linear —‘ 4This is a special case of the tracking control of [15] when the desired trajectory is taken to be zero. 43 observer and the following choice of initial conditions and design parameters: x(0) = (1,0, —0.5,0), 3(0) 2 0, b0 : 12.98, e = 0.001 Figure 2.3 shows how our output feedback controller recovers the trajectories achieved under state feedback. We use a linear high-gain observer with three values of 6. This is done for the following choice of initial conditions and design parameters: x(0) = (1,—4,0.7,—9),:8(0) = (0,0,0,0),bO=12.98, 6 = 0.0015, 0.001, 0.0001 Intuitively we expect that a nonlinear observer that includes a model of the system’s nonlinearities would outperform a linear one when the model is accurate. Figures 2.4 and 2.5 show that this intuition is justified. In Figure 2.4 we compare between a linear observer and a nonlinear one with no modeling error; i.e., the nonlinearity is perfectly known; thus b0 = b. In Figure 2.5 we do it with modeling error induced by a nominal value b0 = 15.58 (20% error). 
In these two simulations we use the following initial conditions and design parameters: 23(0) = (1,0,0.8, 0), 2(0) = (0,0,0, 0), e = 0.01 It is clear that when b0 2 b the nonlinear observer outperforms the linear one. But as the model uncertainty increases, the performance of the nonlinear observer degrades towards that of the linear observer. 44 2.6.3 Example 3 - VTOL aircraft We consider the simplified PVTOL (Planar Vertical Take off and Landing) aircraft modeled in [39] by :81 = x2, x2 = —u1 sin(x5)+uu2cos(x5) x3 = x4, x4 = ul cos(;1:5)+pu28in(x5)—g (i5 = (176, 236 = Au? where x1, x3, and x5 are the horizontal coordinate, the vertical coordinate, and the inclination of the aircraft, respectively. We also consider, like [39], the linearizing dynamic feedback 737 = x8, x8 = —V1 sin (x5) + V2 cos (x5) + x7xg “1 = x7 + gxg 1 u2 = E(_V1 cos (x5) — V2 sin (x5) — 21138176) The equilibrium point of the closed-loop system under state feedback is x = (231,0,123,0,0,0,g,0), u = (9,0), and 17 = (0,0). The linearizing effect of this dy- namic controller can be seen by applying the change of variables X1 : $1—§Sin(x5), X2 = .752 — Lil-1‘6 COS ($5) X3 = —x7 sin (x5), X4 = —x8 sin (x5) — x7226 cos (x5) X5 2 (173+ ‘ECOS (1'5), X6 : $4 — §$6 sin ($5) x7 = x7cos(x5)—g, X8 = x8cos(x5)—x7x68in(x5) t0 obtain X4 2 V1 and X8 = V2. In the new coordinates the equilibrium point is X .1— (51,0,§;3 + p/A,0, 0,0, 0,0). We choose the equilibrium point to be at xeq = 45 (2,0, 2,0, 0, 0, g, 0). To stabilize the aircraft at this equilibrium point, or to make it track this constant trajectory, we define the change of variables 521 2 X1 — 2, X3 = 3:3 - 2 — p/A, with the rest of the X variables unchanged. Then we take the control kph + 19222 + (63923 + k424 (2.92) 1’1 112 = [$121 + (C2522 + [€323 + (€424 (2.93) where k1 to k4 are chosen to stabilize the origin )2 = 0. For the purpose of simulations we take 9, p and /\ to be 1, 1, and 0.5, and k1, k2, k3, and k4 to be —24, -—50, —35, and —10 Now suppose we only measure the position variables :31, x3, and x5, and set y1 = $1, y2 = 2:3, and y3 = 2:5. We want to design an observer to estimate the velocity variables 2:2, $4, and 2:6. Noting that the nonlinear functions sin (235) and cos ($5) depend only on the measured output variable 2:5, the system takes the form 2': = A2: + B¢(u,y), y = C1: If the nonlinear function ¢(u, y) is exactly known, we can design an observer that yields linear error dynamics [20, Section 4.9]. In particular, the full-order observer 3%: = A5: + B¢(u, y) + L(y — C53) results in the error equation é: (A—LC)e Where 6 = a: — :i'. Such observer design does not need the high-gain observer theory presented in this paper. For the purpose of comparison with our method, we design 46 a reduced-order observer that yields linear error dynamics with eigenvalues located at -—1, assuming perfect knowledge of the parameters p and A. We will refer to this observer as the nominal reduced-order observer. On the other hand, when there is uncertainty in modeling the nonlinearity ¢(u, y), the error dynamics will no longer be linear. This case is covered by our high-gain observer which is designed to be robust with respect to uncertainties in d). We design a reduced-order high-gain observer with eigenvalues located at —1/6. We saturate the dynamic feedback controller as follows: 2'37 = 20 tanh (./20), 2'38 = 200 tanh(./200), “1 = 40 tanh (./40), and 112 = 200 tanh (./200). 
These bounds have been figured out from extensive simulations done to see the maximal values that the state trajectories would take when the initial state is in a region of interest around xeq. The model nonlinearities are naturally globally bounded, so we do not need to saturate the nominal nonlinearities in the observer. To illustrate the capability of our output feedback controller to recover the state trajectories, we perform simulations with different values of the design parameter ε. Figure 2.6 shows that as ε approaches 0 the trajectories under output feedback approach the trajectories under state feedback. To illustrate the robustness of the output feedback controller, we perform simulations with a 15% error in λ. Figure 2.7 shows that our design does its job in recovering stability and trajectories for ε small enough. The above simulations are done with the following choice of initial conditions and parameters: x(0) = (1, 0, 1, 0, 0, 0, 1, 0), x̂(0) = (0, 0, 0), ε = 0.02, 0.008, 0.002.

Figures 2.8 and 2.9 show that our observer outperforms the nominal observer in terms of convergence rate and trajectory recovery. One may think that by increasing the gain of the nominal observer we can obtain performance as good as that of the high-gain observer. This is not true, as Figure 2.10 shows. In the nominal observer, the input and the nonlinearity are not globally bounded. Therefore, peaking in the estimate passes to the state through the input. For this reason we saturate our dynamic feedback. The above simulations are done with the following choice of initial conditions and parameters: x(0) = (1, 0, 1, 0, 0, 0, 1, 0), x̂(0) = (0, 0, 0), ε = 0.01, 0.002, nominal observer gain = 100.

Remark 2.7 Of course, the trajectories recovered are the entire state x1 to x8, but Figures 2.6 to 2.10 only show a few of them.

2.7 Conclusion

We presented and proved a separation principle for a certain class of nonlinear systems. An output feedback controller using a sufficiently fast high-gain observer recovers the performance achieved under a state feedback controller. This includes boundedness, ultimate boundedness, convergence of trajectories, and exponential stability of the origin. We also found that we can recover asymptotic stability of the origin when the modeling error is zero, but, when this error is not zero, we need to impose some additional conditions. It is worthwhile to note that our results can only show semiglobal stabilization under output feedback even when the state feedback control achieves global stabilization. Global separation results are more challenging, as the discussions of [13] reveal. We performed simulations on various types of systems and controllers to illustrate the results obtained in this paper. These simulations showed the advantage of including a model of the system's nonlinearity in the observer, when a good model is available. Furthermore, they demonstrated the effectiveness of the combination of high-gain observers and saturation in recovering asymptotic stability and trajectories achieved under state feedback.

Figure 2.1. Recovery of region of attraction: ε* = 0 (solid), ε* = 0.007 (dashed), ε* = 0.057 (dash-dotted), and ε* = 0.082 (dotted)

Figure 2.2.
Peaking of the velocity: x2 (solid), x̂2 (dashed)

Figure 2.3. Trajectory convergence: state feedback (solid); output feedback with ε = 0.0015 (dashed), ε = 0.001 (dash-dotted), and ε = 0.0001 (dotted)

Figure 2.4. Effect of nonlinearity in the observer - without uncertainty: state feedback (solid), with linear observer (dashed), with nonlinear observer (dotted)

Figure 2.5. Effect of nonlinearity in the observer - with uncertainty: state feedback (solid), with linear observer (dashed), with nonlinear observer (dotted)

Figure 2.6. Trajectory convergence - without uncertainty: state feedback (solid); output feedback with ε = 0.02 (dashed), ε = 0.008 (dash-dotted), and ε = 0.002 (dotted)

Figure 2.7. Trajectory convergence - with uncertainty: state feedback (solid); output feedback with ε = 0.02 (dashed), ε = 0.008 (dash-dotted), and ε = 0.002 (dotted)

Figure 2.8. High-gain vs. nominal observer - without uncertainty: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)

Figure 2.9. High-gain vs. nominal observer - with uncertainty: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)

Figure 2.10. High-gain vs. nominal observer - importance of saturation: state feedback (solid), nominal observer (dashed); high-gain observer with ε = 0.01 (dash-dotted) and ε = 0.002 (dotted)

CHAPTER 3

A Separation Principle for the Control of a Class of Nonlinear Systems

3.1 Introduction

In Chapter 2 we introduce separation results for the stabilization of a class of systems having a chain or more of integrators in their structure. Therein, we consider state feedback controllers that make the origin of the closed-loop system an asymptotically stable equilibrium point. In this chapter we are interested in the output feedback implementation of controllers that achieve boundedness of trajectories under state feedback control but not necessarily with convergence to an equilibrium point. Such a situation can be encountered in adaptive tracking and regulation [2, 27], where only the tracking error or both the tracking error and the parameter error converge to zero. Another example is the convergence to a zero-error manifold, as in the servomechanism problem discussed in [26, 37, 38, 21]. Additional examples can be found in stabilization problems in the presence of disturbances, as in [50, 10], where only finite-time convergence to a set can be achieved. In all these cases, it can be shown that the trajectories approach an attractive, positively invariant, compact set. In this chapter, we consider a class of systems similar to the one considered in Chapter 2 and characterize the performance of the state feedback controller as rendering a certain compact set positively invariant and asymptotically attractive. Furthermore, as in Chapter 2, we require the control law to be globally bounded and implement it using a high-gain observer. Then, we recover the same set of performance measures that was recovered in Chapter 2.
It includes recovery of the region of asymptotic stability of the attractive set (i.e., recovery of arbitrary compact sub- sets of this region), as well as the convergence of trajectories under output feedback control to those under state feedback control as the observer gain approaches infinity. We start with semiglobal separation results using results form [34]. Then, we give similar separation results for a possibly finite region of attraction. For this task we adapt the results of [34] to this case because [34] deals only with global convergence to a set. In order to illustrate the theory developed hereafter, we present in the next chapter several examples taken form [10, 50, 38, 21, 2]. This chapter is organized as follows: Section 3.2 states some definitions and recalls results from [34], Section 3.3 formulates the problem. Section 3.4 discusses the set of performance measures to be recovered by output feedback and proves this recovery in a semiglobal setting. Section 3.5 adapts the results of [34] and Section 3.4 to the finite region of attraction case. 3.2 Definitions and Converse Lyapunov Results Consider the system 61 where for each t E R, :1:(t) E R” and d(t) E D, and where D is a compact subset of Rd. The map f : R” x D -> R” is assumed to satisfy the following properties: 0 f is continuous in its arguments. o f is locally Lipschitz in x uniformly in d. This means that for each compact subset K of R” there is some constant c such that ||f(1‘,d) - f(z,d)|| S Clll‘ - 2|! for all :r,z E K and all (1 E D. Let MD be the set of all piecewise continuous functions from R to D. For each d 6 MD, we denote by a:(t,a:0;d) the solution at time t of (3.1) with 22(0) = 2:0. This solution exists and is defined on some maximal interval (T + . $0,d’Tx0,d) With — + _OOSTmo,d 0, there exists a constant 61 = 61(5) such that |x(t,x0,d)|A g 'e for all d E MD, whenever IxOIA S 01(6) and t Z 0 (3.2) 2. Uniform Attraction: there is an oz > 0 and, for any 6 > 0, there is T = T(6) > 0, such that for every (1 E MD: |x(t,x0, dllA < 6 whenever [xOIA < a and t 2 T (3.3) Moreover, the system (3.1) is Uniformly Globally Asymptotically Stable (UGAS) with respect to A, if (3.2) holds with a class (Coo function 6(6) and (3.3) holds for any r > 0 with T = T(e,r). Finally, the system (3.1) is Uniformly Exponentially Stable (UES) with respect to A, if there are three positive constants r, k, and 7 such that the solution x(t,€,t0,d), starting from 5 at time to under the input d(t), exists for all t 2 t0 and satisfies lx(t,$0,t0, dllA S 1967“— t0)l$0|A, W Z 1*0 for all x0 6 {5: IEIA S r} dgf {20 and all d E MD- 63 Remark 3.1 Proposition 2.5 of [34] shows the equivalence between UGAS and bounding |x(t,x0; dllA with a class IC£ function. Herein, we state the definition of a Lyapunov function for (3.1) with respect to the compact, positively invariant set A. Definition 3.2 A Lyapunov function for the system {3.1) in the open set R with respect to a compact, positively invariant set A (_I R is a function V : R —) R> 0 such that V is smooth on R/A and satisfies the following properties: 1. There exist two class [C functions 0‘1 and a2 such that for any as E R, 0105M) S V(f) S a2(|€|A) (3-4) 2. There exists a continuous, positive definite function (13 such that for any g E R/A, and any d E MD, L f dvu) s -a3(|€|A) (3.5) A smooth Lyapunov function is one which is smooth on all of R. In the case R = R”, we require 01 and 02 to be class ICOO. In some situations the Lyapunov function candidate is time-dependent. 
In this case we need a Lyapunov stability theorem for invariant sets where the Lyapunov function could depend on time. Theorem 3.1 Let A Q R be a compact, positively invariant subset of R" for the system (3.1). Then, (3.1) is UAS with respect to A if there exist a C’1 positive defi- nite, with respect to A, function V(t, {) : [0, 00) x U —> R, where U is a neighborhood of A, two IC-functions 01,02, and a continuous positive definite function 03 such 64 that, for allt Z 0, we have 0106M) S V(té) S a205M) (3-6) for any 5 E U, and 8V , E- + Lde (5) S -a3(|€|A) (3-7) for any 5 E U/A and any (1 E D. In addition, ifU = R" and al is class (Coo, then {3.1) is uniformly globally asymptotically stable with respect to A. Proof. see Appendix C.1.<1 Remark 3.2 Theorem 3.1 along with [34, Proposition 2.5] constitute a powerful machinery for showing U GAS. For example, we can show UGAS using two Lyapunov functions, the first one shows ultimate boundedness and the second one shows local UA S. For GUAS, we state the converse Lyapunov theorem given in [34] as Theorem 2.9. Theorem 3.2 Let A C R” be a compact, positively invariant set for the system (3.1). Then, (3.1) is uniformly globally asymptotically stable with respect to A if and only if there exists a smooth Lyapunov function V(x) with respect to A. Moreover, we give a converse Lyapunov theorem for UES. Theorem 3.3 Assume that the system (3.1) is uniformly exponentially stable with respect to the compact, positively invariant set A. Then, there exists a function V(t, x), defined and continuous an R2 0 x (20, where {20 = {leA 3 r0, r0 > 0} 65 and contains A, such that lxlA s vex) s klxIA (3.8) |V(t,$)-l“'(t,i‘)| S MILE—ill (3.9) . V(t + h, x) — V(t, x) 1 < —AV t, , Vd e M 3.10 h 310 h ‘ ( x) D ( ) for all £5,573 6 $20, and allt _>_ to, where L and A < *y are positive constants and 5: = x(t + h, x, t0;d). Proof. see Appendix C.2.<1 3.3 Problem Formulation In many state feedback controller designs, the trajectories of the closed-loop system do not converge to an equilibrium point. For example, in the servomechanism cases discussed in [26, 38] the objective is to make the tracking error converge to zero while keeping the states of the zero dynamics bounded. The same objective is achieved in [2, 27] using an adaptive controller. Furthermore, in robust control, as in [50, 10], we often achieve ultimate boundedness. In this section we formulate these design objectives as steering the trajectories towards a compact, positively invariant set. Of course, there are problems which can neither be cast as stabilization of an equilibrium point nor as stabilization of a compact, positively invariant set. These problems, such as [2] (when we only have partial persistence of excitation) and [29], will be the subject of future research. The class of systems considered can be represented by the multi-input multi- output nonlinear model it = Ax+Bq§(x,z,d(t),u) (3.11) 66 Z = u'.!(x,z,d(t),u) (3.12) y = Cx (3.13) C = q(r. z,d(t)) (3.14) where u E Rm is the control input, C E R3 and y 6 RP are measured outputs, x 6 RT and z E R6 constitute the state vector, and d(t) 6 Rd is a vector of signals that belongs to MD- The matrices A, B, and C, represent p chains of integrators as in Section 2.2. 
The state feedback control is assumed to be in the form {9 = F(19,x, (,d(t)) (3.15) u 2 7(29, x, C,d(t)) (3.16) We allow any state feedback design that satisfies: Assumption 3.1 (1 ) I‘ and 'y are locally Lipschitz functions in 19, x, and C uniformly in d over the domain of interest; (2) I‘ and '7 are globally bounded functions of x; (3) The closed—loop system is uniformly globally asymptotically stable with respect to the compact positively invariant set A. The system (3.11)—(3.14) with the controller (3.15)—(3.16) satisfies: Assumption 3.2 The functions q, (f) and w are locally Lipschitz in x, z, and u uni- formly in d over the domain of interest. Moreover, d(x, z,d,7(19,x,(,d)) is zero in A uniformly in d. To implement the control (3.15)—(3.16) we use the state estimates, it, generated 67 by the high-gain observer in = A5: + B¢O(e, c, d(t), u) + H(y — Cr) (3.17) The observer gain H is chosen as in 2.10. The function ¢0(x,C,d(t),u) is a nominal model of d(x,z,d(t),u) which is re- quired to satisfy the following assumption: Assumption 3.3 (to is a locally Lipschitz function in x,(, and u uniformly in d over the domain of interest. Furthermore, it is globally bounded in x and zero in A, uniformly in d. 3.4 Performance Recovery - Semiglobal Separa- tion Results The main objective of this section is to show that the suggested output feedback implementation of the control law recovers the performance of the state feedback controller (3.15)—(3.16) for sufficiently small 6. The performance recovery manifests itself in three points. First, the compact set A x {x — a": = 0} is a positively invariant set of the closed-loop system under output feedback and the closed-loop system is asymptotically stable with respect to A x {x - x = 0}. Second, the output feedback controller achieves semiglobal stabilization; that is, for any compact set S which contains A, and any compact set Q g RT, the set 8 x Q is included in the region of attraction under output feedback control. Third, the trajectory of (x, 2,19) under output feedback approaches the trajectory under state feedback as e —+ 0. The analysis is done in three steps. First, we show the recovery of boundedness of trajectories. Second, we show the recovery of ultimate boundedness of these trajectories. Third, we show the recovery of local asymptotic stability with respect 68 to A. This allows us to deal with asymptotic stability as a local prOperty that could require some additional assumptions on the modeling error. Let us first, for the purpose of analysis, replace the observer dynamics by the equivalent dynamics of the scaled estimation error (the scaling is similar to that of Section 2.5.1). Then, as in Chapter 2, we have :1“: = x — D(e)n. Hence, the closed-loop system can be represented by :1; = Ax+B¢(x,z,d(t),y(.)) (3.18) é==w@&fl®mWfl-Dmmcflm) mm) .9 = I‘(19,x—D(c)n,C,d(t)) (3.20) 67') = A077+eBg(x,z,i9,D(e)77,d(t)) (3.21) where g(.) = d(x,z,d(t),7(.)) — ¢0(x,C,d(t),y(.)) and A0 is a constant Hurwitz matrix. The system (3.18)-(3.21) is a standard singularly perturbed one, and 77 = 0 is the unique solution of (3.21) when 6 = 0. Similar to Section 2.5.1, the reduced system is the closed-loop system under state feedback. For simplicity we write the system (3.18)—(3.20) as X = fr(x, d(t), B(fln) (3-22) where x = [xT, 2T,i9T]T and x(0) = [933,253,19ng- Then, the reduced system is given by X = fr(x, d(t), 0) (3-23) The boundary-layer system is dn 69 where r = f. 
Let (x(t,e),n(t,e)) denote the trajectory of the system (3.18)—(3.21) starting from (x(0), 71(0)). We know that (3.23) is uniformly globally asymptotically stable with respect to the compact positively invariant set A. Then, Theorem 3.2 ensures the existence of a smooth Lyapunov function V(x) in addition to two class Koo functions a1, a2 and a continuous, positive definite function 03 such that, for all x E R", where n = r+€, we have: V(x) = 0 4:» x E A (3.25) a1(|><|¢4) S V(x) S 02(leA) (326) For the boundary-layer system we define the Lyapunov function W(n) = nTPOn, where P0 is the positive definite solution of the Lyapunov equation POAO + AgPO = -I. This function satisfies inequalities similar to (2.21)—(2.22). 3.4. 1 Boundedness Let the initial states (x0, 20, 190) E S, and x0 6 Q, where S is any compact set in R" which contains A and Q is any compact subset of RT. The recovery of boundedness of trajectories is given in the following theorem: Theorem 3.4 Let Assumptions 3.1—3.3 hold; then, there exists 6’] > 0 such that for every 0 < e S 6’], the trajectories (x(t, e), 77(t, 6)) of the system (3.18)—(3.21) starting in S x Q are bounded for allt 2 0 and all d E MD- Proof. As in Section 2.5.1, we show the positive invariance of an appropriately chosen set A, then we show that, any closed-loop trajectory, starting in the compact set S x Q, enters the positively invariant set A in finite time. 70 The properness of V(x) guarantees that the set 9 = {x E R" : V(x) g c}, for some c > 0, is a compact set and that S is in the interior of 9. Let A : I) x {W(77) 3 p62}. Due to Assumptions 3.1—3.3 we have, for all x 6 Q, all d E D, and all r) 6 RT, llfT(Xa d1 D(€)7l)[l “900 d, D(€)n)ll S k2 (3-29) |/\ k1 (3.28) where k1 and k2 are positive constants independent of 6. Moreover, for any 0 < E < 1, there is L1, independent of e, such that, for all (x,17) E A, all (I 6 D, and every 0 0 small enough and a time T (6) such that W(n(T (6), 6)) 3 pos2 for every 0 < 6 S 62. Taking 6’] = min (6,61,62) guarantees that, for every 0 < 6 S 61‘, the trajectory (X(t,€),7](t,€)) enters A during the interval [0,T(6)] and remains there for all t _>_ T (6) Thus, the trajectory is bounded for all t Z T (6) On the other hand, for t E [0,T(6)], the trajectory is bounded by virtue of inequality (3.33) and an inequality similar to (2.32).<1 3.4.2 Ultimate Boundedness Hereafter, we show that trajectories of the system (3.18)—(3.21), starting in S x Q, come arbitrarily close to the set A x {n = 0} as time progresses. This is summarized in the following theorem: Theorem 3.5 Under the conditions of Theorem 3.4, given any 5 > 0, there exist 6* = 6*(45 > 0 and T = T (6 such that, for every 0 < 6 S 6*, we have 2 2 1 1 2 |X(t, 6)IA + |ln(t,€)ll S E, Vt 2 T1 (3-34) for all d E MD. Proof. From the proof of Theorem 3.4 we know that, for every 0 < 6 3 61‘, the trajectory of the closed—loop system, starting from (x(O),:ic(0)) E S x Q, is inside the set A for all t 2 T(6), where A is 0(6) in the direction of the variable 17. Thus, 72 we can find 63 2 63(5) 3 61‘ such that for every 0 < 6 g 63 we have Hna e)” g 6/2, w 2 my.) ‘1? Ta) (335) for all d E MD: In what follows we continue working with the Lyapunov function defined in the the proof of Theorem 3.4. It can be shown that, for all (x, n) E A, we have V g —a3(lxl_A) + k36, Vd E D Without loss of generality a3 can be chosen to be class ICOO. Thus, we conclude that - 1 — d f v s — ,aguxa). 
forlxlA 2 33131.36) 2 Me), w e 72 Choose 64 : 64(5) 3 6’] such that u(64) < a§1(c) and ai—1(a2(u(64))) < 5/2. Then, by a proof similar to that of [28, Theorem 5.1, Corollary 5.1], we conclude that there exists a finite time T = T(5) such that, for every 0 < 6 g 64 Ix(t. «0L4 s é/2, Vt 2 T (3.36) for all d E MD' Take 65 2 65(5) = min (63, 64) and T1 = T1 (5) = max (T, T), then (3.34) follows from (3.35) and (3.36).<1 3.4.3 Trajectory Convergence Let x7(t) be the solution of (3.23) starting from x(0). The following theorem shows that x(t, 6) converges to x7»(t) as 6 -+ 0, uniformly in t, for all 73 Theorem 3.6 Under the conditions of Theorem 3.4, given any 5 > 0, there exists 63' > 0 such that, for every 0 < 6 S 6?; we have ”x(te) - Xr(t)|| S 6, Vt 2 0 (3-37) for all d E MD‘ Proof. We divide the interval [0,00) into three intervals [0,T(6)], [T(6),T2], and [T2,oo), where both T(6) and T 2 are to be determined later, and show (3.37) for each interval. 0 From Theorem 3.5 and the uniform asymptotic stability with respect to A of the reduced system we conclude that there exists a finite time T2 2 T(6), independent of 6, such that, for every 0 < 6 3 6’5, we have le(t)|A 5 6/2, Vt 2 T2 (3.33) |X(t, 6)|A S 5/2, W 2 T2 (3.39) for all d E MD' Then, we can write ”x(t, 6) - Xr(t)l| S ”x(t, 6) - dill + ”20““) - xll, Vt 2 T2 (340) for all x E A and all d E MD- By taking the infimum of (3.40) over A and using (3.38)-(3.39), we have llX(ta€)—Xr(t)ll S lX(ta€)lA+er(t)lA g 6, Vt 2 T2, Vd 6 MD (3.41) 74 0 As in Section 2.5.2, we can show that llx(t,6) - Xr(t)|| S 2l€1T(6), W 6 [01(6)] Since T(6) ——> 0 as 6 ——> 0, there exists 0 < 65 3 65 such that, for every 0 < 6 S 65, we have ”x(té) - Xr(t)ll S 6, W E [0,T(6)l (3-42) for all (l E MD' 0 Over the interval [T (6), T2], the trajectory x(t, 6) satisfies, for all 61 E MD, >2 = fr(x, d(t), D(€)n(t, 6)), with ||X(T(€), 6) - Xr(T(6))l| S 5(6) where D(6)n is 0(6) and 6(6) —> 0 as 6 —> 0+. Thus, as in Section 2.5.2, we conclude that there exists 0 < 66 3 6’5 such that, for every 0 < 6 g 66, we have ||X(t,6) - 2000” S 5, W E [T(6),T2] (3-43) for all (1 E MD: Take 65 = min(65,66), then, using (3.41), (3.42), and (3.43) we conclude (3.37). 0 such that, for every 0 < 6 g CZ, the system (3.18)—( 3.21 ) is uniformly asymptotically stable with respect to the compact positively invariant set A x {77 = 0}. Proof. We know from Theorem 3.2 that there exists a Cl Lyapunov function V and a class (Coo function 623, both defined on a ball B(A, r1) g Q for some r1 > 0, such that for all X E B(A, r1) (9V Choose 5 < r1; then given Assumptions 3.1—3.3 we can show that, for all (x,n) E B(A, 5) x {”71” g 5} = A1 and all d, we have ||9(X,d,D(€)77)ll S L4||77||, d E D (3.45) Consider the composite function V(x, n) = V()()-l-(W(17))1/2 and choose 0 < 62’ 3 6’5 such that 1/(4EZ‘HIPOII) — L2L3 — llPOllL4/1(’\min(P0) > 0, where L2 is an upper bound for % over A1 and L3 is a Lipschitz constant of f7-(., ., .) in 17 over the same set. Then, we conclude that, for every 0 < 6 g 6: and for all (x, n) 6 A1, we have 4 1 V S —ag(|x|A) - ————-—||n||, d E D (346) 46 ”Poll Thus, according to Theorem 3.1, the closed-loop system (3.18)—(3.21) is uniformly asymptotically stable with respect to A x {77 = 0}. <1 76 Case 2: We deal with the case where the system (3.23) is uniformly exponentially stable, whether or not we know a. 
The result is summarized in the following theorem: Theorem 3.8 Let Assumptions 3.1—~33 hold and assume that the closed-loop system (3.23) is uniformly exponentially stable with respect to the set A. Then, there exists 6:; > 0 such that, for every 0 < 6 _<_ (’5, the system (3.18)—(3.21) is uniformly exponentially stable with respect to the set A x {n = 0}. Proof. Let X2(t) be the solution of (3.23) that starts from X at time t = 0. In this case, according to Theorem 3.3, there exists a Lyapunov function V2(t, x) defined over B(A,r2) g Q, for some r2 > 0, and three positive constants a1, a2, and a3 such that, for all x E B (A, r2) we have IXIA S V2(t, X) S allXIA (3-47) hm V(t + h, X2(t + h)) - V(t, X) h —> 0 h ||V2(t,X1) - V2(t,X2)|| S asllxi - X2l|.VX1, X2 6 B(Afl‘z) (3-49) S -a2V(t, x), Vd E MD (3.48) Let us consider V(t, X, 77) = V2(t, x) + fi‘/W(n), where S > 0 is to be determined, as a Lyapunov function candidate for the system (3.18)-(3.21). Choose 5 < r2; then we have Claim: Given Assumptions 3.1—3.3, we have, for all (x,r)) E B(A, 5) x {”17” S 6} = A2, ”900 d, D(€)fl)|l S L5|XIA + L6l|77|| (3-50) 77 for all d 6 MD' Proof. It suffices to show that ll¢(X, d) - ¢0(X,d)ll S LlelA (3-51) for all d E MD. Since both ()5 and (to are zero in A, we can write ll¢(X, 01) - 450(X, d)|| S ||¢(X, d) - ¢(Xa, d)|| + ll¢0(X, d) - 450(Xa, d)ll (3-52) for all Xa E A and all d E MD: Using the local Lipschitz property in (3.52) yields ||¢(X, d) - (2003 d)“ S L5||X — Xall (3-53) for some L5 > 0. Taking the infimum over A yields (3.51).<1 Let x3(t) be the solution of (3.22) that starts from x at time t = 0. Then, using (3.48), (3.49), and (3.50), it can be shown that, for all (x, n) 6 A2, we have V(t + h, X3(t + h), n(t + h, 6)) — V(t, X, 7?) lim < —a V t, +a L h—+0 h _ 2 2( X) 3 7||n|| B B - —-——||n|| + ||P0||(L5|XL4 + Lellnll) (3.54) 26 ”Poll \Amin( 0) where L7 is a Lipschitz constant of fr(.,.,.) in 7) over A2. Then, there ex- ist positive constants 5 such that —(a2/2) + fiIIPOIIL5/‘//\m,-n(PO)) < 0, and 0 < 63‘ 3 6’5 such that, for every 0 < 6 3 fig, we have —,B/(4€‘/][PO”) + a3L7 + fillPOHLG/ilhmiflpo» < 0. This implies that, for every 0 < 6 3 6’5", we have 1m h ——> 0 h 78 |/\ 1 5 --a2V2(t, X) - ___—Hull 2 46 “Poll -—}V (t X n) (3.55) |/\ — min{—2-a2,——— 1 46 ”PO ll Thus, there exist positive constants b2 and b3 such that “X“? din“? 6))IA X {0} S b2e—b3tl(X(0)in(0))lA x {0}, Vd E MD (356) Thus, according to the definition of UES in Definition 3.1, the closed-loop system (3.18)—(3.21) is uniformly exponentially stable with respect to A x {n = 0}. <1 During the proof of case 3, we will use Young’s Inequality given in Section 2.5.4. Case 3: We deal with the case where the system (3.23) is uniformly asymptoti- cally stable with respect to the compact, positively invariant set A. A condition on the modeling error has to be imposed and is stated as Assumption 3.4 There exists a 01 function V3(t, x) defined on [0, 00) x U, where U = {x : M A 3 r3, r3 > 0} is a neighborhood of A in Q, three functions ibl and i/Jg, and $3 defined and continuous on U which are positive definite with respect to A (i.e., positive everywhere and zero only in A), such that, for all t Z 0, we have w1(X) S V30, X) S $200 (357) 5%? + afit—(x, d. 0) s —w3(x) (3.58) ||¢( :6 z dew 1‘ C, d)) - ¢0($,C,d,v(19,w,é,d))llS 6010300 (3-59) II%( (t x) 3 C1 «(2300 (3-60) 79 for all x E U and all (1 E D, for some positive constants co _>_ 0, Cl > 0 and a, b < 1, such that a + b =1. 
The existence of a Lyapunov function satisfying (3.57)-—(3.58) is guaranteed by The— orem 3.2, but what we need here is for (3.59) and (3.60) to be satisfied as well. Remark 3.3 Assumption 3.4 is similar to Assumption 2.4 of Chapter 2 in the sense that it relates the modeling error magnitude and the rate of convergence of trajectories near the attractor (which is a set in the case at hand). It can also be viewed as an extension of Assumption 2.4 to the case of asymptotic stability with respect to a set. However, in Assumption 2.4 we used the center manifold decomposition to reduce the size of the set over which the assumption is satisfied and to reduce the structural complexity of the modeling error by projecting it on the center manifold. From different examples, presented later on, it seems reasonable to allow the Lya- punov function candidate V3 to depend on time. The recovery of asymptotic stability can now be stated as follows: Theorem 3.9 Let Assumptions 3.1-3.4 hold. Then, there exists 6’5 > 0 such that, for all 0 < 6 3 6’6, the system (3.18)—(3.21) is uniformly asymptotically stable with respect to the compact positively invariant set A x {17 = 0}. Proof. Consider the Lyapunov function candidate V(t, x,n) = V3(t, x) + (W(77))0 with a = 1/(2a) >1/2. Let 5 < r3 and let 6 3 6’5, then according to Theorem 3.5 there exists a finite time T3 > 0 after which we have |x(t, 6)]A + ||17(t, 6)“ S 5 for all t 2 T3. Using Assumptions 3.1—3.4, we can show that, for all (x, 17) 6 A3 = B(A, 5) x {“77” S E}, we have llfr(X.d,D(€)77)-fr(X,d,0)|| S lelnll (3-61) ||9(X,d,D(€)77)ll S 00¢3(X)+L9||77|| (3-62) 80 for all (1 E D. and . p ‘) __ v s —./.’3(x>+p1nnnwg(x> — 311nm?“ + pawn?” + pwgumnn-U 1 (3.63) where P1 = OILS: 92=071 p3 = 2Uvzlll’ollLe, p4 = 2Unlll’ollco are positive constants and z ,\ . P 0‘1 = P 0"] ’71 (min( 0)) “-021 71 H 0” ifl/ZSU<1 '72 = ”POHO —1 ’72 = (AminUDOllo _1 Next, we separate the two cross-product terms in (3.63) by repeatedly using Young’s Inequality (see Section 2.5.4) with p = 20 and 60 = q1, Q2, respectively. Conse- quently, given that o = 1/ (2a) and a + b = 1, we have all the terms in “77“ with the power 20 and all the terms in w3(x) with the power 1. Now, choose ql, Q2, such that —1/2+p1(q1)p1+p4/q2 < 0, where p1 = 1/(20—1). Then, it can be shown that there exists 0 < 66 3 6’5 such that, for every 0 < 6 S 63 we have —p2 / (26) + p3 + p4(q2)p1 < 0 which implies that, for every 0 < 6 3 6’5 and for all (x,n) 6 A3, we have ' 1 [’2 20 < __ __ 3. 4 for all (1 6 D. Thus, according to Theorem 3.1, the closed-loop system (3.18)-—(3.21) is uniformly asymptotically stable with respect to A x {0}. <1 81 Remark 3.4 Theorems 3. 7, 3.8 and 3.9, along with Theorems 3.4 and 3.5, show semiglobal stabilization. 3.5 Regional Separation results In many cases, asymptotic stability with respect to a set, achieved under the state feedback controller, is not global and the region of attraction is a subset of the state space. This subset may be finite (bounded) or infinite. Examples of such cases can be found in adaptive control [2, 27], and robust control [50]. In order to extend the previous separation results, we need, as we did in the previous chapter, a converse Lyapunov theorem that yields a Lyapunov function which goes to infinity at the boundary of an estimate of the region of attraction. This can be done by extending the converse Lyapunov results of [34] to an estimate of the region of attraction. 
In order to perform this task, we need to restrict the set of time-varying parameters to the set, denoted by M’ , of all continuously differentiable functions from R to D where the derivative (1’ (t) of d(t) belongs to a compact set D1 C Rd. First, we give a definition of the region of asymptotic stability of a closed set. Definition 3.3 The region of uniform asymptotic stability of the system (3.1) with respect to the closed positively invariant set A is the set of all points 2:0 E R" such that |:c(t,x0,d)|A —+ 0 as t —> +00 uniformly in d E MD‘ This region is not empty because it contains A. 82 Second, we give the needed converse Lyapunov theorem. Theorem 3.10 Let d E MD' Let A C R" be a compact, positively invariant set for the system (3.23). Assume that the system (3. 23) is UAS with respect to A. Let R be an open and connected subset of the region of attraction that contains A. Then, there exists a smooth Lyapunov function V in R and three positive definite, with respect to A, functions U1, U2, and U3, all defined on R, such that V(x) = 0<=>xeA (3.65) 11100 S V(x) S 11200 (3-66) xgmaRUflX) = 00 (3.67) ggffixfifi) g —U3(x),Vd€D (3.68) Proof. see the proofs of Theorem 6.1 and Corollary 6.1.<1 Remark 3.5 The set R can be the region of attraction when the system is au- tonomous. In this case we can recover the same set of performance as in the previous section. First, let us replace Assumption 3.1 with the following assumption: Assumption 3.5 The time-varying parameter belongs to the set M5D. Items 1 and 2 of Assumption 3.1 hold. The closed-loop system under the state feedback controller is uniformly asymptotically stable. The separation result is summarized in the following theorem: Theorem 3.11 Let Assumptions 3.2—3.4, and 3.5 hold. Let R be an open, connected subset of the region of attraction that contains A and S be any compact subset of R which contains A. Then, the conclusions of Theorems 3.4, 3. 5, 3. 6, 3. 7, 3. 8, and 3.9 hold. 83 Proof. Using Theorem 3.10, the proof of boundedness is similar to that of Theorem 3.4. The proof of ultimate boundedness is similar to that of Theorem 2.2. We show that - 1 d f V S - 511300, forx 6? {X : U3(X) S 21636 3 #(6)} (3-59) We define c0(6) = maxU3(X) S ”(€){V(x)}. Then, we have {X : U3(x) g u(6)} C {X = V(x) S 60(6)}- Finally, we choose 64 = 64(5) 3 6’f small enough such that, for all 6 g 64, the set {X : U3(x) S ,u(6)} is compact, the set {X : V(x) g c0(6)} is in the interior of Q, and {x = V(x) S 60(6)} C {X = IXIA S 6/2} (370) Then, we conclude that the set {X : V(x) S C0(€)} x {n : W(17) 3 p62} is positively invariant and every trajectory in (2 x {r} : W(17) 3 p62} reaches {X 3 V(x) S 60(6)} X {77 : W(n) S p62} in finite time. The proofs of trajectory convergence and local uniform exponential stability are similar to those of Theorems 3.6 and 3.8, respectively. The proofs of local uniform asymptotic stability (with or without modeling errors) are similar to those of Theorems 3.7 and 3.9. In this case we locally replace, using Lemma A.3, the functions U4, i = 1,2,3, with class [C functions 62,-, i = 1, 2, 3, such that alum) s U1(X),02(|xl,4) 2 U200, and a3(lxl,4) s U300 on a certain ball around A. = {(xT.e.T>T = vs s c? W(x) s m} (4.9) for some c3 which can be made arbitrarily small and some c1 > c3 which depends OI] Cs. 
The derivative of W along the trajectories of the closed-loop system is 8W 8W W = 337M313 — Bf(x) + E6(x, —f(x), t)] + EBlfflfcsflf‘) - f1(5ts,:tf)] + %V?VE[6(x, —f1(x3,xf),t) - (5(33, —f1(Is,$f)tt)l (4'10) Using Assumptions 1 and 2 in (4.10) yields W s —fl4||x||2 + (6433 + flz)llrll + 5cm 6W 6W _ p _ _ “ , + [II 6,, B” k3” 1n + H a. E” tltgnrlul ”at. as.” (4 11> Using the bounds on 0W/6x and V3 into (4.11) yields _1_ . 2 _ Vs 2 W S - 54W?“ ’ (5cfl3 + A2) + 2717}; 7 “513““ 5c51 (4-12) where 271 = k2(HBH + IIEIIk1)k3 and ’7}, =HI’1”. Let —3 1 1 V 5 Cr (\fi/g) dgf 5E;- [(5c33 + 32) + 2717h (.773) ] NIH 2 + 4546081 1. 2 as 1 V + 5B; [(5cfi3 + 32) + 271?}; (i) 92 Notice that C17(0) : (3*. It is straightforward to see that 01’ W < 0 when “x” > cg; (fig) (4.13) W < 0 when W'(x) > 77ng (VI/3) (4.14) Using (4.14) and (4.8), it can be seen that the closed-loop system is globally uniformly ultimately bounded with respect to the compact, positively invariant set defined in (4.9) where cl > 0%(c3). The separation results apply to the system ita it if yf ys Aaaxa + Aafyf + Aasys + Ball. Abbxb + Abfyf + Bbu Afxf + Mfyf + Bfu+ EleaiEa + Dbxb + Dfxf + 6(x,u,t)] Ckxf 0313b This system fits into the model (3.11)—(3.14) with the vector of bounded disturbances being 6(0,0, t) (bounded by Assumption 1), the x state component being x f’ the 2 state component being (xa, xb), and the C output being ys. The state feedback controller considered is (L'a = Aaaia + Aaf’yf + Aasys + Ball :13b 2 Abbib + Abfyf + Lb(3/S “ 0353b) + Bbu 93 u : —f1(jaaib7$f) Global boundedness is achieved by saturation outside a region of interest. We have shown that the closed-loop system under this controller is globally asymptotically stable with respect to the compact positively invariant set (I). To implement the controller we use the high-gain observer if = Afif + Mfyf +Bfu+Ef[Dai‘a + Dbib+Dfifl + Ld(yf - Cfi‘f) The structure of the gain Ld is given in [10, Appendix B]. This structure is exactly the one suggested in Section 3.3 with the difference that in [10] all channels of equal relative degree are grouped together. According to Theorems 3.4 and 3.5, trajectories of the closed-loop system under output feedback come arbitrarily close to the set (I) x {e f = 0} in finite time, where e f is the estimation error. Moreover, Theorem 3.6 ensures convergence of trajectories under the output feedback controller to those under the state feedback controller as the observer gain approaches infinity. 4.3 Robust Control 11 - Finite-time Convergence to a Set [50, Section 3] Consider the control system 2 = A(z, u,d(t)) (4.15) y = C(z, d(t)) (4.16) 94 where the state z E R" and the input u E R. The functions A(.) and C(.) are smooth in their arguments, and d(t) is a time-varying smooth disturbance signal contained in a compact set D C Rd. For simplicity we denote d(t) and its time derivatives d(t), d(t), - .. by the same symbol d, i.e., d = (d, d, d, - - ~). We start with the following two definitions: Definition 1: [50, Definition 2] (Uniform Complete Observability). Consider the dynamical system C=AKwLy=CK) The above dynamical system is UCO if there exist two integers ny and nu and a C1 function ‘11 such that, for each solution of C:A(C1u0)1u0:u1imaunu =‘U we have, for all t where the solution makes sense, at) = we), - - -,y("y),uo(t), - - -,un..(t)). Remark 4.4 In [50] the authors used the more general notion of UCO of a function u(() with respect to a dynamical system. 
In such a definition we have mm» = we), - --.y("y)(t),uo(t),---,unu(z) (4.18) 96 where @(z) is continuous on Q and positive definite on {z : 193 g V(z) 3 cf} for some positive real number 195 < W. In order to use the state feedback {1(2) we need to know the output and ny of its derivatives in addition to the input and nu of its derivatives. Thereafter, we need to prepare to estimate the output and some of its derivatives, and design an input whose derivatives (a number of them) are known. It can be shown that there exist 77.3, + 1 smooth functions Ci and an integer mu 3 ny such that, for each solution of z' = A(z,u0,d(t)) it0:111 we have, for all t where the solution makes sense, Let yo = y and y,- : yi, i : 1, - - -,ny. Then, if we write the system as 90 = 311 311 = 92 we use a high-gain observer with the nonlinearity Cny + 1(0, uO, . . - , umu, 0) to 97 estimate the output and its derivatives that are needed in 11(2). Let [u = max{nu, mu} + 1. Thus, by adding in integrators to the system (4.15), we can have u and its required derivatives (nu for a and mu for the Cny + 1) as measured states of the system z' = A(z,u0,d(t)) it0:111 iteu—l = ’0 (4.22) For the moment, we are left with the task of designing a state feedback controller for the system (4.22). Let {1 = uO — 11(2) and C,- = %-_TII for i = 1, - - -,£’u, with K being a positive real number to be specified later on. Thus, the system (4.22) can be written as def 2 = A(z,t+e,d(t)) = heads» «51 = K52-Z—:(z)A R) 0 such that the set {2 : V(z) g c+ 1} is a compact subset of (21, and we have %h) s —11(z) is continuous on $21 and positive definite on the set {z : 19 < V(z) S c+1}. Claim: There exists a C1 function V1(z) that satisfies Assumption 2. Proof. In order for the system (4.23) to satisfy Assumption 2 we need to adjust the function V(z) and the various coefficients given is Assumption 1 as follows: Pick 191 as an arbitrary number in (O, 1 / 8). Let K. be a C1 class [Coo function such that 16(293) = 191, k(i9g) _>_ 8191, k(c5) Z 1, k(Cg) > 1 + [C(Cs) (4.24) This function exists since Assumption 1 states that 0 < 193 < 19g 3 63 < cg. Now, let V1(z) = k(V(z)) and c1 = k(c3) 2 1. Thus, we have {z:Vl g 8191} C {z:V(z) 36g} C ICzs lng C {2 : V(z) S c3} C {2 : V1(z) 3 Cl} (4.25) The set {2 : V1 3 Cl +1} is a compact set and is contained in the set {2 : V(z) g 99 Cg}. Finally, we have an, 82 (zt0td(t)) S -‘1’1(Z) where (1)1(z) = fi—fi(Vz)(z) is continuous on Q and positive definite on the set {z : 191$ V1(z) 3 C1 +1} C {293 S V(z) S Cg}. Hence, from Assumption 1 and the above adjustments, we conclude that the system (4.23) satisfies Assumption 2 with the C1 function V1(z) defined on the open set {21: Q and with '19 2291 <1 and c: c1214 Hereafter, we apply Lemma 2.3 of [50] to the system (4.23). Let the polynomial p(s) =3€u+1+a€usgu +~~+a1 be Hurwitz and let Ac be the companion form matrix corresponding to p(s). Let PC be the positive definite solution of Ach + PcAc = —I. Let ngg be an arbitrary compact set where we choose to initialize C. Lemma 2.3 of [50] suggests the following controller ’U = —K€u(a1€1+'”+agu§gu) = —K€u(a1[u0 — 22(2)] + - . . + aguggu) (4.26) and provides the Lyapunov function tat/1(2) + M1€TPc€ 01+1-V1 u1+1-€TPc€ W1(21€) : 100 where T #1 maX{ ,5 21315656 c8} 2 1 Furthermore, we have [Cze X ICEg C {($45) : W1(z,§) S C? + It?) 
0 and, by letting p = 31, there exists K* 2 1 such that, for all K 2 K*, the derivative of W1 along trajectories of the closed-loop system (4.23), under the controller (4.26), satisfies W1 3 —2(2.€) (4.27) where (1)2(z,C) is a positive definite function on {(z,C) : 191 + p _<_ W1(z,C) S c? + a? + 1}. Thus, we have proved the following: For any pair of compact sets (lCszng), neighborhoods of 0 with lCzs C ICZg, we can find compact sets ICCs and Ing, gains az- ’s, a bound K*, integers Eu and ny, and functions a and \I' such that for each K 2 K71; the dynamic state feedback it = K62 52 = K63 Stu = [fl—6"” v = —K€“(a1[u0—u(z)]+~~~+aguCgu) (4.28) in closed-loop with the system (4.15)—(4.16} makes all solutions starting in lng x [ng enter the set K23 >< K53 in finite time. 101 The theory developed in the previous chapter applies to the system 90 321 T $12 (4.29) : 0713/ + 1(2,’U.,K€2,' ' °,Kmu€mu +11d(t)) : K€2 = K 53 (4.30) = Kl—guv (4.31) where 2 can be expressed as function of the state variables given that the original system is UCO. This system fits into the model (3.11)—(3.14) with (4.29) being the x-dynamics and (4.30) being the z-dynamics. Furthermore, the vector of bounded disturbance consists of d(t) and some of its derivatives, all denoted here by d(t). The additional output C is represented here by the vector (u,C1, - . . ,Cgu). Since we are dealing with a regional result we need to restrict the time-varying parameter d to belong to M’D. We consider the state feedback controller. where = —K€u(alfl + ° ° ° + agutfgu) = —K€u(a1[u0 — x(z)] + - ~ - + agu€eu) Global boundedness is achieved by saturating the 11(2) outside the set {2 : V(z) g Cg}. We have proved that trajectories of the closed-loop system under the state feed- back controller, starting in the compact set IC 2 g x ngg (arbitrarily chosen), enter the compact positively invariant set {(2, C) : W1 (2, C) s 191 + p} in finite time (positive invariance is clear from (4.27)). An estimate of the region of attraction is the set no = {(2,5) : W1(2,C) g c? + 11% + 1}. To implement the controller we use a nonlinear high-gain observer with the non- linearity Cny + 1(0, uO, - - - , umu, 0) as a nominal model for the nonlinearity Cny + 1(2, uO, - - - , Umu, d(t)) (this is the same observer used in [50]). Theorem 3.11 guarantees that the trajectories of the closed-loop system under output feedback controller starting in S x Q, where S is a compact subset of (20 and Q is a compact set in Rny, are bounded and will come arbitrarily close to the set {(2,6) = W1(z.§) S 191+p} >< {6 = 0} where e is the estimation error. Moreover, Theorem 3.11 shows that the trajectories under output feedback control converge to those under state feedback control as the observer gain approaches infinity. 4.4 Tracking [28] We show that a tracking problem can be viewed as asymptotic stability with respect to a positively invariant and compact set. Then we give an example where the separation results of the previous chapter apply. 103 Consider the globally defined single-input single-output system Tl : f0(7li 113) x = Acx + Bab—(£53m — a(x)] y = x1 (4.32) with n E R" _ T, b(x) 7E 0 for all x 6 RT, and (Ac, BC) defines a chain of integrators. Remark 4.5 The above system is not the most general input-output linearizable system because the nonlinearities a(.) and b(.) depend only on a part of the state, namely x, and does not depend on 1}. 
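To keep a concrete instance of this class in mind, consider the following scalar internal dynamics; this is a hypothetical illustration of ours, not an example taken from [28]. Let η̇ = f0(η, x) = −η + x1 and take V1(η) = (1/2)η². By Young's inequality, ηx1 ≤ (1/2)η² + (1/2)x1², so

∂V1/∂η f0(η, x) = −η² + ηx1 ≤ −(1/2)η² + (1/2)x1² ≤ −(1/2)η² + (1/2)||x||²,

and the input-to-state stability assumption stated next is satisfied with α1(r) = α2(r) = α3(r) = α4(r) = (1/2)r², all of class K∞.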
Let (4.32) satisfy the following input-to—state stability assumption: Assumption 1: There exists a C1 function V1(n) such that 01(Ilnll) S V107) S a2(ll77ll) (4.33) 8V 371400747) 5 —a3(nnn) + aunt“) (4.34) for all 17 E R” — T and all x 6 RT, where 612-, i = 1, - . - ,4 are class Koo functions. We need the output y to asymptotically track the reference signal y R' We assume (1) (r - 1) (7‘) that y R1 y R , - - -, y R are bounded and continuous and y R is bounded and piecewise continuous. Set YR = (yR, yg),-~, yg _ 1)) and e = x — YR. The system (4.32) can be written in the error coordinates as 7') = mm(3 + YR) 104 -——)[u — a(x)] — ygl} d§f g(e + YR, u, t) 61 = y—yR (4.35) The objective is to make e approach zero asymptotically. Consider the state feedback u* = a(e + YR) + yg)b(e + YR) + Ke where K is such that Ac + BCK is Hurwitz. Then, using Theorem 5.2 of [28], we conclude that ”60)” s 71e‘72tlle(0)ll (4.36) for all t _>_ 0, for some positive constants 71 and '72. Let r0 = supt Z 0 ||YR(t)|], p0(.) = a3_1(20z4(.)), and c = a2(p0(2r0)). Notice that p0 thus defined is a class lCoo function. Since V1(n) is continuous and radially unbounded, the set 4={n=V1(n)Sc} is a compact subset of R" - T. Thus, we conclude that the set A = {e = 0} x A is a compact subset of R". We recall from Exercise 3.33 of [28] that a class [C function a, defined on [a, 0), satisfies a(sl + 32) g 0(281) + a(232). Given this result and (4.34), we can write 6V 1 firemen/R) s jagtnnm + 44(21th for mm c (4.37) 105 for all t 2 0. Inequality (4.37) can be written as 8V . 1 ' - time... + YR) _<_ 743017;“) + attend», v 77 ¢ A (4-38> Now, define the function [ln(1+ s -— c)]2 for s 2 c 0 for s E [0,c] The function A(.) is a C1 function on [0,oo). It is also nonnegative and strictly increasing for s 2 c. Furthermore, using L’Hopital’s rule of differentiation (to show X(s) -—> 0 as s —> 00), we have /\'(s) g k, k > 0 (4.40) for all s _>_ 0. Consider the Lyapunov function candidate W(n) = /\(V1(n)). The following is a technical result needed in the forthcoming analysis: Claim 1: The function W(n) is C1 and radially unbounded. Furthermore, there exist three class ICoo functions 61, 62,and 63 such that W(n) and %%(Vl(n))a3(llnll) can be bounded as follows 51(I'Ill4) S W(n) S 520mg) (4-41) 5d—8(V1(n))03(|l77||) 2 5307M) (4-42) for allne Rn—r. Proof: The continuous differentiability of IV follows from that of V1 and x\. As for 106 (4.41) and (4.42), they follow from Lemma A.3.<1 Given the results of Claim 1, we can write 8W E—f0(n,e + YR) S ‘63(l7l],§) + ka4(2llell)a V 77 ¢ A (4'43) where k is defined by (4.40). Let p1(s) = 62(63_1(2ka4(2s))). Then, we can write 0W Wf0(n,e + YR) 3 4641414), v n t 4, and Mn) 2 p1(||e||) (4.44) for all t Z 0. Now, according to [48, Lemma 2.6], there exist a class ICE function 51 and a class KI function 7 such that, for every 77(0) E R” — r, we have |n(t)|4 S 51(|n(0)|4.t) + 7(0 )v = 0 (4.53) ' _ -f(01 C1 ”U” C ‘ ”42¢ + B2 l g(o. c. to») l “'54) In the forthcoming development we will use a state feedback controller of the form t = gale, c. tem-foe. c. V(t)) + t] where gO(.) and f0(.) are nominal models of f () and g(.), respectively. 
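Before stating the internal-model assumption, it may help to keep its simplest concrete case in mind; the harmonic exosystem and the steady-state expression used here are our hypothetical illustration, not data from [38]. Suppose ν = (ν1, ν2) is generated by ν̇1 = ων2, ν̇2 = −ων1 with ω ≠ 0, and suppose the steady-state control on the zero-error manifold works out to c(ν) = ν1. Writing Lsc for the derivative of c along the exosystem vector field, we get Lsc = ων2 and Ls²c = −ω²ν1 = −ω²c(ν). Hence the identity required below holds with q = 2, d1 = −ω², and d2 = 0; the polynomial s² − d2s − d1 = s² + ω² has the two distinct imaginary roots ±jω; and the companion-form construction that follows yields

S = [[0, 1], [−ω², 0]],  Γ = (1 0),

with (c(ν), Lsc(ν))ᵀ playing the role of the vector of Lie derivatives used there.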
The following assumption identifies the internal model: Assumption 2 [38, Assumption 2, Remark 3]: (1) There exists a unique mapping C = A0(V) which solves the partial difi'erential equation 8A0 ]—f(01 <1 V(t))] ——SI/=A/\ u+B 4.55 61/ 0 2 0( ) 2 9(0.C.V(t)) ( ) (2) There exist q constants d1,---,dq such that, on the zero-error manifold, the control component 17 is to») 3:3 —gO(o. A000, v)g‘1(0.40(v).v)f(0./\o(v).v) + foe, 103), t) 111 and satisfies Lgcu) = 4150/) + d2L36(z/) + - . . + 4,13 ‘ 1c(1/) (4.56) for all V 6 D1, where L3C = $5012. Moreover, the polynomial equation sq—dqsq_1—---—d2s—d1 =0 has distinct roots on the imaginary axis. A routine calculation shows that there exist a q x q matrix S, q 2 p1, and a 1 x q constant matrix F such that 6’52”) = SV(u) (4.57) C(11) = FV(V) (4.58) for all u 6 D1. In fact this happens for ( 0 1 0 0) 0 o 1 o S = 0 o 0 1 (d1 d2 d3 dq] ( C(u) ) L3c(u) 1’0!) = [ft—25(5) (Lg-15(5) ) 112 I‘=(100-~0) Assumption 2 means that S has distinct eigenvalues on the imaginary axis. Remark 4.6 If on the zero-error manifold we have C = /\0(V), then A(V) diff T1(0, A0, V) is a well defined function of V. Now, we proceed with the design of the dynamic state feedback controller. First, we need a minimum-phase assumption. Before we state such an assumption, we give the dynamics satisfied by x and 2 = 2 — /\(V) : x : Ax + B[f(x,C, V) +g(x,C,V)v] 2 = 40(3, §,V) (4.59) wheregb ()=1/2( 7 A _8A 0 . x, 2 + (V), V) 63.90% Assumption 3 [38, Assumption 4]: There exists a C1 function W defined on Rm, and four class IC functions 011,012, (131, and a, such that l/\ a1(llill) 8W - ~ ~ ~ 13—2'¢o(:t.z.v) s —¢1tVIIZIIZGUIZ“) W(E) S a2(||$|l) for all (x,z,V) 6 N1 x N2 x D1. Set Qa2 dgf {z : W(2) S a2}, a2 > 0. Next, we consider a dynamic state feedback of the form ('7 = So+Je (4.60) 113 ’U : (p(xiai Ca V) (4'61) where p(x,o,C,V) is locally Lipschitz in (x,o) uniformly in (C,V). Then, [38, As- sumptions 6,7,8, Lemma 1] ensure that the controller achieves convergence to the zero-error manifold A = {x = 0} x {a = LV(V), 2 = A(V)} for some constant matrix L. This manifold is compact, invariant, and asymptotically attractive uniformly in V 6 D1. Hereafter, for the sake of completeness, we state these assumptions and lemma. The control strategy consists of two steps. First, the controller ensures ultimate boundedness of trajectories. Then, it achieves asymptotic convergence to an equilib- rium point. a a . c To state the next assumptlon let C = and C = h1(C, C, V), 1.e., part of the dynamics of the closed-loop system. Assumption 4 [38, Assumption 6]: There exists a C1 function V : R" + q —> R4. which satisfies 510%”) S V(E) S 420%”) W —¢2(ll€ll). we) 2 4(4) 55,11 (E1 Ca V) |/\ for all C 6 X1, uniformly in (C, V) for all (C, V) 6 U2 x D1, where fi, 21, 52 and (152 are class [C functions and u > 0 is a design parameter. Define X1 = Rq x N1 and Cal 2 {C : V(C) 3 a1} where a1 > 0 is chosen such that Cal Q X1. It can be shown that the set 5201 x Qa2 is positively invariant for some a2 > 0. Assumption 4 along with Assumption 2 imply that the trajectory (C(t),z(t)), starting in Cal x Qa2 will, for all V 6 D1, eventually enter 114 the positively invariant set R7) dgf A” x Flt where A” déf {C 6 X1 : V(C) g (301)} and r“ déf {z : W(2) g 7(a)} for some class [C function 7. The set R” is a neighborhood of (C, 2) = 0 whose size can be made arbitrarily small by choosing a small enough. 
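Before stating the remaining assumptions, here is a small numerical illustration of the internal-model pair (S, Γ) defined in (4.57)-(4.58). The sketch is illustrative only: it assumes a purely sinusoidal steady-state control component with a hypothetical frequency, so that q = 2, d_1 = −ω_0², d_2 = 0, and it checks the requirement of Assumption 2 that the eigenvalues of S are distinct and lie on the imaginary axis.

```python
# A minimal numerical check of the internal-model pair (S, Gamma) of (4.57)-(4.58),
# assuming a sinusoidal steady-state control component:
#   L_s^2 c(nu) = -w0^2 c(nu), i.e. q = 2, d1 = -w0^2, d2 = 0.
# The frequency w0 is an arbitrary illustrative value.
import numpy as np

def companion_S(d):
    """Companion matrix with ones on the superdiagonal and last row (d1 ... dq)."""
    q = len(d)
    S = np.zeros((q, q))
    S[:-1, 1:] = np.eye(q - 1)
    S[-1, :] = d
    return S

w0 = 2.0                                   # hypothetical exosystem frequency
d = [-w0**2, 0.0]                          # d1, d2 from c'' = -w0^2 c
S = companion_S(d)
Gamma = np.zeros(len(d)); Gamma[0] = 1.0   # Gamma = (1 0 ... 0)

eigs = np.linalg.eigvals(S)
print("S =\n", S)
print("eigenvalues of S:", eigs)
# Assumption 2 requires distinct eigenvalues on the imaginary axis:
assert np.allclose(eigs.real, 0.0), "eigenvalues must be purely imaginary"
assert len(set(np.round(eigs.imag, 9))) == len(eigs), "eigenvalues must be distinct"
```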
Assumption 5 [38, Assumption 7]: There exists a compact, positively invariant set S” Q A” such that C (t) enters S” in finite time. Furthermore, inside S”, the control component 17 takes the form 77 = K06 + f2(C.l/) where f2 satisfies f2(/\0(V),l/) = L2V(V). Lemma 4.1 [38, Lemma 1] Suppose that Assumptions 2 and 5 hold. Then, there exists a q x q matrix L such I‘ = —(K01L + L2), LS = SL Furthermore, the set {a = LV(V), x = 0, C = /\0(V)} is an integral manifold of the closed-loop system (4.50) and (4.60)—(4.61). Now, we need to design a state feedback controller (choose K0 and f2) such that the zero-error manifold {o = LV(V), x = 0, C = A0(V)} is regionally attractive. To a study this attractiveness, we set 6 = o -— LV(V) and 17 = x . With this change of ~ 2 variables, the zero-error manifold reduces to the origin 17 =- 0 which is an equilibrium point of the closed-loop system (4.59) and (4.60)—(4.61). The next assumption is needed to show attractiveness of the origin 7'] = 0: 115 Assumption 6 [38, Assumption 8]: The origin of the closed-loop system ~ 17 = h1(77, V) is locally asymptotically stable. The theory developed in the previous chapter can be applied to the autonomous system i: : A$+B[f($,C,V)+g($,C,I/)’U] C = A2C+BQU e = Cx (4.62) This system fits the model (3.11)—(3.14) with Ci’s, being the additional measured outputs C. The smoothness of f and g inherited from f and g implies a local Lipschitz property in (x, C) uniformly in V 6 D1- We consider the state feedback controller (7 = So+Je v = cp(x,o,C,V) Global boundedness of the control law is achieved by saturation outside a region of interest. We have shown that the system with this controller is asymptotically stable with respect to the zero error manifold A dgf {o = LV(V), x = 0, C = /\0(V)} with Qal x Qag being an estimate of the region of attraction. Lemma 4.1 shows that A is a center manifold for the system (4.62) with the controller. Since V E D1, then A is a compact subset of R" + 2‘1 + m. 116 To implement the controller we use a linear high-gain observer to estimate the tracking error and its derivatives (let the estimate be it). Let x be the scaled estima- tion error. Then, the closed-loop system under the output feedback controller can be written compactly as Q: 77 = h3(77.X. V) 6X 2 A1X + €B[f1 (x, 2, V) + 91(x, 2, V)tp(o,x, 2, V)] = A1X+elt2(n,x.V) , (4.63) Theorem 3.11 guarantees that the trajectories of the closed-loop system under output feedback control (starting in the compact set Qal x Qaz x Q, where Q is a compact subset of R") are bounded and come arbitrarily close to the set A x {x—x = 0}. Moreover, Theorem 3.11 shows that the trajectories under output feedback control converge to those under state feedback control as the observer gain approaches infinity. To establish asymptotic convergence of the closed-loop system (4.63) to the equilibrium point (17, x) = (0,0) some conditions on the nonlinearities have to be imposed. These conditions are given in the following assumption: Assumption 7 [38, Assumption 9]: There exists a C1 function V(n) R" + q + m —> R4. and a continuous positive definite function (133 such that 5117~ “71207.0. mu 5 4143(4). 412 o (4.65) 317 - .. 5114mm) -h3(n.0.V)l s 4243(6)”qu (4.66) 0€1 + Ne u = M77+T€1 (4.74) Then, it is possible to prove the following property: Proposition 1 : Suppose Assumptions A and B hold. Suppose that (4.74) asymptotically stabilizes the linear approximation of (4.67) at the equilibrium point (C1, z,x) = (0,0,0), (w, u) = (0,0). 
Then, there exists a q x q matrix 11 satisfying II 2 11(1), TH = F (4.75) where <1) and F are defined as in (4.72). As a consequence, the closed-loop system 61 = C1+NHx i = Z(u)2+p0($1.w.tt) x = Fx+C(M17+TC1)+P(z,x,w,/i) d) 2 SC) (4.76) 122 has a globally defined center manifold Mo = {(61.Z.Ivtw) =61 = mam“), Z = ((0141). 3? = 7r“(66ml (4-77) at (C1,2,x,w) = (0,0,0,0). Proof. similar to that of Proposition 1 in [21], which applies to the output feedback case.<1 Now, let us design the state feedback controller that makes MC semiglobally attractive. The issue here is to choose N, T and M such that this goal is achieved. Let C1 = C1 — Hra(w, 71). Then, in the coordinates (2, 2, C1), the closed-loop system becomes 51 = 451 + NHx 3' = Z (142 +130(Hi.exp(3t)w0,u) :if = F5: + C(Mn + TC~1) + P(z, x, exp(St)w0, a) (4.78) where (.10 represents the value at time t = 0 of the state of the exosystem. System 0 are unknown. We (4.78) is an uncertain system because the actual values of u and w assume that the initial value 620 belongs to an a priori known compact set W 6 Rd. The invariant manifold reduces to the origin (C1, 2, 2) = (0,0, 0) where the regulation error e = 5:1 is zero. Thus, output regulation is achieved if the origin is attractive. In order to be able to use the separation results of Chapter 3, Assumption C of [21] has to be modified as follows: 123 Assumption C: There exists a positive definite smooth function V(2) satisfying a1llil|2 5 WE) s 02||5||2 (4.79) 6V g(zflfli+P0(Hi,eXp(St)thu)) < —C¥3]]2]]2+C]H.’L‘]2 (4.80) for all 2, :Z‘,t and all ((120, a) E W x P, where 612- ’s are positive constants and c > 0. For N choose any matrix such that the pair (Q, N) is controllable. Then, given any compact set S of initial conditions (C1(0),2(0),:Z:(0)) 6 R4 x R" T T x RT, find (via backstepping methods and high-gain feedback, for example) a pair of matrices M and T such that the origin is locally exponentially stable with a basin of attraction that includes the set S. In order to apply our separation results, we consider the system i = Z(V)Z+po($1.w.u) 7'7 = An+G(U+1fir(2.n.w.u)) (.2) = Sw e=H17 This system fits the model (3.11)—(3.14) with a being the vector of bounded distur- bances (constant in this case, thus it belongs to Mb). We consider the state feedback controller. 51 = €1 + Ne 11 = M77+TC1 124 This controller achieves semiglobal tracking uniformly in w and a. Global bounded- ness is achieved by saturation outside a region of interest. We showed, by construction, that the system with the controller is exponentially stable with respect to the compact positively invariant zero-error manifold MC with 8 being an estimate of the region of attraction. To implement the controller we use a linear high-gain observer. Boundedness, ultimate boundedness, and convergence of trajectories under the output feedback controller (starting in 8 x Q, where Q is a compact subset of R") are guaranteed by Theorem 3.11. Moreover, Theorem 3.11 guarantees exponential stability with respect to the compact positively invariant set MC x {17 — C0 = 0}, where C0 is the estimate of 17. 4.7 Adaptive Control [27 , 2, 1] We consider the system represented globally by the n-th order differential equation g(nl = f0(.) + 2?: 1f,-(.).9,- + [gO(.) + 2;”: 1gi(.)02-]u(m) (4.81) where u is the control input, y is the measured output, y(z) denotes the i-th derivative of y, and m < n. 
The functions f2- and 91 are known smooth nonlinearities which may depend on the output, the input, and their derivatives up to the (n —1)-th order and (m — 1)-th order, respectively. The constant parameters are unknown, but the vector 6 = [01, - - - , 9p]T belongs to Q, a known compact convex subset of RP. The objective is to design a state feedback controller that renders all the signals bounded and makes the output asymptotically track the bounded reference signal y7-(t). We assume that all derivatives of yr up to the n-th order are bounded and that y,(~n) (t) is piecewise continuous. 125 Letxi=y(i"1),ei=xi—y,(-z_1)fori=1,---,p;2i=u(i_1),i=1,---,m, and v = W " 1). Let yr = (4.4),...4)" ‘ ”(017" and 342 = lyr,y(n)(t)lT- To ensure that the system is input/output linearizable we make the following assumption: Assumption 1: [g0(x,2) + 0Tg(x,2)| Z k > 0, Vx E R", 2 E Rm and 0 6 521, where 91 is a compact set that contains (2 in its interior. We assume that there is a global change of variables C = T1(y,-~,y("—1),u,~-,u(m_1)) such that the system (4.81) can be writ- ten 5 = Ame + b(Ke + f0(e + yr, 2) + 6Tf(e + yr. 2) + [90(6 + yr, 2) + 0Tg(e + yr, 2)]v - y,(~n)} (4.82) where Am 2 A — bK, (A,b) is a chain of integrators, and K is such that Am is Hurwitz. Before designing the controller, we have to make a crucial minimum-phase assumption: Assumption 2: The system C = F (g,y,~,0) has a unique bounded steady-state solution C. Moreover, with C = C — C the system C : F(€+C,€+yr,0)—F(C,y,0) : F2(<~16+y’r1616) 126 has a continuously differentiable function V1(t, C), possibly dependent on 6, that sat- is fies Tulléll2 S V1085) S 772M“2 6V1 av, - 2 ~ 3,: 35F“ ”nsllCH +774||C||||€l| |/\ where 171,172, 173 > 0, and 174 2 0 are independent of 32,, C, and 6. The state feedback controller is designed to adaptively cancel the nonlinearities and stabilize the system (4.82). It consists of the dynamic controller 6 = Fp(6+6,e,z,yR) (4.84) v = 1(1(6+ 6,6, 2,)7R) (4.85) where 6 = 6 — 6 (6 is the estimate of 6) and --KC + y7(~n) — f0(e +yr,Z) — 6Tf(e+yr,z) g0(e + yr, 2) + 6Tg(e + Yr, 2) «(4.) = (4.86) The functions PP and w are locally Lipschitz in their arguments uniformly in y R and 6. The adaptive law is a projection-type law (for more details see [27]). Consider the Lyapunov function candidate V = eTPe + é-6F—16, where P is the positive definite solution of the Lyapunov equation PAm + AgP = —Q, where Q = QT > 0 and F—1 is a positive definite matrix. The derivative of V along the trajectories of the closed-loop system (4.82)—(4.86) is v = —eToe + 871—1739 — r3] (4.87) 127 where (73(e, z, 3213,61) = 2eTPb[f(e + yr, 2) + g(e + yr, 2)w(e, 2, 32R, 6)] The adaptive law is chosen to ensure that éTr”1[é— 1‘6] 3 0 (4.88) and 6(t) E (26 for all t 2 0 and all 6(0) 6 Q, where 96 is a compact set chosen such that Q C {25 C S21. Inequality (4.88) ensures that V S —eTQe. Therefore, all signals are bounded for all t Z 0. Since 32,. is bounded, we conclude that x(t) is bounded, which implies, in view of assumption 2, that z(t) is bounded. By using [28, Theorem 4.4] we can conclude that e(t) —> 0 as t —1 oo (4.89) To discuss parameter convergence let us define the regressor vector wr (t) as W(t) = f(yr, 5) + g(yrt Ell/40121371210) (490) where 2 is the steady-state solution of the zero dynamics, determined uniquely from C- : T1(y7'1 Z)‘ Assumption 3: The regressor vector is persistently exciting. 
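Persistence of excitation can be checked numerically: roughly speaking, the moving-window Gram matrix of the regressor must stay uniformly positive definite. The sketch below is an illustration only; the two-component signal used is a hypothetical example and is not the regressor ω_r of this section.

```python
# A rough numerical test of persistence of excitation for a sampled regressor:
# the windowed Gram matrix  int_t^{t+T} w_r(tau) w_r(tau)^T dtau  should have a
# smallest eigenvalue bounded away from zero for every window position.
# The two-component regressor below is a hypothetical example.
import numpy as np

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
w_r = np.vstack([np.sin(t), np.cos(2.0 * t)])   # shape (2, N); a PE example
# w_r = np.vstack([np.sin(t), 2.0 * np.sin(t)]) # not PE: components are dependent

T_win = 2.0                                     # window length in seconds
N_win = int(T_win / dt)
min_eigs = []
for k in range(0, w_r.shape[1] - N_win, N_win):
    seg = w_r[:, k:k + N_win]
    gram = seg @ seg.T * dt                     # approximate windowed integral
    min_eigs.append(np.linalg.eigvalsh(gram).min())

print("smallest windowed eigenvalue:", min(min_eigs))
# For this sampled check, PE means this value stays above some alpha > 0.
```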
Let w(t) = f(e + yr, 2) + g(e + yr, z)1,b(e, 2, 3213, 6) 128 Then, the 62- and 6-dynamics can be written as e = Ame—b6Twr+b6T(wr—w) = PM) @1- Now let us work on (tar — 6)). Claim: We can write (9T T9 wr-w=(f—f)+(§tl—gtl)+4_ . (491) 90-1-6 where f(-) = f(yr. 2). s(-) = 906. 2) f0 = foO’r. 2). tot) = 90m, 2). i4.) = 440. 2.32189) Proof. Let "(h(.) = 1,12(0,2,31R,6). Then, we have wr-w = (f+§t/3)-(f+gib) (f—f)+(4t/3—gtt)+§(i—i) The claim is proved if we show that Q'T = .. w g0+0Tg T — ~ (lb-111) We have it”) — 10 — 6T1" _ it”) - 10 - 17‘)? £70 + 0T4 so + 6T4 6535—”) — f0 - 077) + 6117(4) + 0T5) (90 + 9T§)(§0 + 6%) $1 | S1 129 Using the expression of w and car we can write 9T" ' 9T ' 91/) + f 1/3—1/3 = -_—.——- ——.—— 90+0Tg 90+0Tg By using the above claim, the 62— and 6-dynamics can be written as ' A —b T A . e = m go, 6 + 30 (4.92) 9 21:94)..pr 0 9 Ae(.) where 400., 2) + 0740., 2) 4004, 2) + 9T6», 2) 4.4.) = (>610? —f)+(443—gt)1 Ae(.) = Fp(.)—2FQwaTPe > Kg2 Since f, 901 g, and 1]) are Lipschitz functions in their arguments uniformly in 32. and 6 and since 6 is bounded, we have IIAs(-)ll S 51||6|| +52||C~|| (4-93) l|4e(-)ll S 53||€|| (4.94) for some 6,- _>_ 0, i = 1, - --,3. Inequality (4.94) becomes clearer from the explicit form of Fp(.) (see [27]). From well known results in adaptive control theory (see for example [28, Section 1342]) and the fact that car is persistently exciting and Q is bounded, it can be 130 shown that the system 6 A —5ng e L = m 7' ~ (4.95) 9 2Fgw7~bTP 0 9 is exponentially stable. Then, from the converse Lyapunov theorem, there exists a Lyapunov function V2(t, e, 6) whose derivative along (4.92) satisfies 4119461)”? 5 V2(t.e.5) 5 82119.91? (496) V2 < -64||e||2 - 6510112 + 651412 + 6714119) +58||ellll5|l +59||5||||5ll (4.97) (4.98) 6V - “(W 2 ]] — /\3ll(e,9)ll2 (4.99) a(e ,6) for some positive constants 64, 65, and A7, i = 1,2,3, and some non-negative con- stants 62', i = 7, - - -,9. The derivative of V along the trajectories of the closed-100p system (4.82)—(4.86) is V _<_ —eTQe s —It1IIeII2 (4.100) where k1 > 0. Consider the Lyapunov function candidate W(t, e, C, 6) = aV(e, 6) + 6V1 (t, C) + V2(t, e, 6). Then, using (4.92)—(4.94), (4.97), (4.100), and Assumption 2 it can be shown that the derivative of W along trajectories of the closed-loop system 131 (4.82)-- (4.86) satisfies 9 . T — - Hell Hell W s - uéu M uén <4-101> _uc‘n , _ “in, where M is given by pak1+54—56 —§2,Z _18_774_2_+_‘§8 - M = -222 55 _5 a a _j—"Bam fig [3'73 , Choose 6 large enough to make 65 329 6 -129 6773 positive definite; then choose or large enough to make M positive definite. Therefore, we conclude that (e = 0,6 = 0,6 = 0) is exponentially stable. We consider the system ('2 = Ae+b{f0(e+yr,z)+6Tf(e+yr,z) + [90(6 + yr, 2) + 9T9(6 + yr, 2)]v - 9291)} C : F2(€ay7'901€) This system fits the model (3.11)—(3.14) with the vector of time-varying bounded perturbations being d(t) = (yR(t),6,C—(t)) and the additional outputs C being the input and (m - 1) of its derivatives; i.e., 2. Since we are dealing with a regional result 132 and in order to apply the separation results of Chapter 3, we need to restrict the time-varying parameter d to belong to M5D. This can be guaranteed by assuming that y)” +1)(t) exists and is continuous and bounded for all t 2 0 (notice that ((t) E M’D by Assumption 2). We consider the state feedback controller. 
Gr || Pp(é + 63 8, Z1 3’12) ’1) : ¢(5+9,8,2,yR) This controller achieves exponential tracking of the reference signal with R" x Rm x (25 being an estimate of the region of attraction. Global boundedness is achieved by saturation of 1M.) and I‘p(.) outside a region of interest. 1 In the case at hand the compact positively invariant set A reduces to the origin (e, 6, C) = 0 which is exponentially stable. Thus, we are dealing with a stabilization problem of a time-varying system. In [2] a case where only a part of the regressor vector is persistently exciting is considered. This case does not fit into our formulation because we we do not have stability with respect to a compact, positively invariant set. Now, in order to implement the controller (4.84)—(4.85) and recover the perfor- mance achieved under it, we use a linear high-gain observer to estimate the tracking error and its derivatives. Boundedness, ultimate boundednes, convergence of trajectories under output feedback control, as well as local UES, are guaranteed by Theorem 3.11. Remark 4.8 As in Remark 4.7 we choose the nominal value of the system ’3 non- linearity 43 to be zero ( which results in a linear observer), because otherwise we have 1Actually, the saturation of l‘p(.) may not be needed since the projection-type adaptive law guarantees boundedness of 6. 133 to know the steady state solution 2 and we have to choose a nominal value which is zero at the steady state. 4.8 Conclusion In this chapter we presented separation results for design cases Where the trajec- tories converge to a compact, positively invariant set. For each of these cases, we showed how the state feedback controller performance can be cast as convergence to a compact, positively invariant set. Then, after suggesting an output feedback im- plementation of the control law, we applied the previous chapter’s results and listed the set of performance measures that can be recovered. 134 CHAPTER 5 Separation Results for the Control of Nonlinear Systems Using Different High-gain Observer Designs 5.1 Introduction In this chapter we are concerned with the separation approach to the design of stabi- lizing output feedback control using high-gain observers. In the separation approach the design is pursued in two steps. The first step focuses on the design of a stabiliz- ing state feedback controller, while the second step is concerned with the design of a high-gain observer that successfully provides state estimates such that the overall closed-loop system is asymptotically stable. High-gain observers are attractive because of their robustness, namely their ability to estimate the unmeasured states while rejecting the effect of disturbances. The available techniques for the design of high-gain observers can be classified into three groups. First, pole-placement algorithms which lead to either a two-time scale 135 structure as in [11] or a multiple time-scale structure as in [46]. Second, Riccati equation-based algorithms which lead to either an H2 algebraic Riccati equation (ARE) as in [9] and [43, Section 4.4.1] or to an H00 ARE as in [42] and [43, Section 4.4.2]. Third, Lyapunov equation-based algorithm as in [17]. For linear time-invariant systems the recovery of asymptotic stability through the use of high-gain observers is shown in [43], and references therein, in a Loop Transfer Recover (LTR) context and in [42] in an H00 context. 
As for nonlinear systems, Esfandiari and Khalil in [11] and [30] use pole placement/singular perturbation to design a one-parameter observer gain and recover the robustness properties of a controller designed to stabilize a fully linearizable system. This design results in a standard two-time-scale singularly perturbed system. In [46] Saberi and Sannuti design a multiple-parameter observer to recover the global stabilizability of an uncertain linear system. This design results in a standard multiple-time-scale singularly perturbed system. Tornambe in [52] uses a similar pole placement technique to recover local asymptotic stability for a class of input-output linearizable systems. Nicosia and Tornambe in [40] use singular perturbations to recover local asymptotic stability for the case of a robot with elastic joints. Teel and Praly [49] combine ideas from [11] and [52] to achieve semiglobal stabilization for a wide class of nonlinear systems. Isidori in [21] shows that his previously proposed solution of the general structurally stable regulation problem [22] can be coupled with ideas from [11] to solve a problem of robust semiglobal output regulation. In [7] Busawon et al. present local and global separation results for a class of nonlinear systems using a high-gain observer designed with a Lyapunov equation-based algorithm.

A characteristic of a high-gain observer is the peaking phenomenon of its transient response. This phenomenon is examined carefully in [11]. Peaking occurs in the observer variables and propagates to the state variables through the control law. This peaking can be destabilizing when the region of attraction is finite, which only allows recovery of local asymptotic stability as in [52], [40], and [7]. Despite peaking, global asymptotic stability results can be obtained, but at the expense of imposing restrictive global Lipschitz conditions, as in [46] and [7], and of tolerating a clearly unacceptable transient response. To remedy this problem, the idea of saturating the control law outside a region of interest was introduced in [11]. It is this saturation feature that leads to the semiglobal results of [11, 30, 49, 21] and the regional results of Chapters 2 and 3.

Aside from some special cases, such as relative-degree-one systems, all the different ideas for designing high-gain observers boil down to various asymptotic methods for approximating the derivatives of the outputs. In [25] Khalil illustrates this observation through an example; in this chapter we prove it rigorously. We show that separation results, similar to those of Chapters 2 and 3, can be obtained if the other available algorithms for observer gain design are applied in combination with global boundedness of the control law. Section 5.2 describes the different algorithms that can be used to design the gain of a high-gain observer and shows that in all of them the gain asymptotically matches the gain structure used in Chapters 2 and 3. Section 5.3 establishes the separation results of Chapters 2 and 3 when these alternative observer gain designs are used.

5.2 High-gain Observers - A Comparative Study

We consider the system represented by (2.1)-(2.4). The estimation error dynamics for the subsystem (2.1) can be written as

ė = (A − HC)e + Bδ(x, x̂, z, u)    (5.1)

where δ(·) = φ(x, z, u) − φ_0(x̂, ζ, u). We need to design an observer gain H that stabilizes (A − HC) while rejecting the effect of the disturbance δ(·).
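The following minimal numerical sketch is not part of the original development; it assumes a single chain of two integrators and the ε-scaled gain structure (2.10) of Chapter 2, with a_1, a_2 any coefficients making s² + a_1 s + a_2 Hurwitz. It illustrates the kind of rejection we are after: as ε is reduced, the gain of (sI − A + HC)⁻¹B from the disturbance to the estimation error shrinks at every fixed frequency.

```python
# A minimal numerical sketch (illustration only): for a single chain of two
# integrators, the epsilon-scaled gain H = [a1/eps, a2/eps^2]^T of Chapter 2
# drives the disturbance-to-error transfer function (sI - A + HC)^{-1} B toward
# zero as eps -> 0.  a1, a2 are any coefficients making s^2 + a1 s + a2 Hurwitz.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
a1, a2 = 2.0, 1.0                      # arbitrary Hurwitz choice

def error_tf_gain(eps, omega):
    H = np.array([[a1 / eps], [a2 / eps**2]])
    M = 1j * omega * np.eye(2) - (A - H @ C)
    return np.linalg.norm(np.linalg.solve(M, B))

for eps in (1.0, 0.1, 0.01):
    gains = [error_tf_gain(eps, w) for w in (0.1, 1.0, 10.0)]
    print(f"eps = {eps:5.2f}  ||(jwI - A + HC)^-1 B|| at w = 0.1, 1, 10:",
          ["%.2e" % g for g in gains])
```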
Such disturbance rejection is achieved if we can design H so that the transfer function from the disturbance input to the estimation error, (sI − A + HC)⁻¹B, is identically zero or arbitrarily close to zero. We design H as a function of a parameter ε, or of a set of parameters ε_i, i = 1, ..., p, such that this transfer function approaches zero as ε or the ε_i tend to zero. We present three different approaches to the design of H and show that in all three cases the gain H has approximately the structure (2.10).

5.2.1 Pole Placement/Time-structure Assignment

In pole placement we assign a one- or multiple-time-scale eigenstructure to the observer matrix and ultimately make the closed-loop system under output feedback transformable into a standard singularly perturbed system. This is the approach used in Chapter 2. A detailed exposé of this technique, as applied to full- or reduced-order observers, can be found in [43, Section 4.3], [11], and Chapter 2.

5.2.2 Riccati Equation-Based Algorithms

Optimization-based techniques to design the high-gain observer can be reached through two paths: Loop Transfer Recovery and disturbance attenuation. Both can be applied to general stabilizable and detectable linear time-invariant systems

ẋ = Ax + Bu    (5.2)
y = Cx    (5.3)

The LTR algorithm consists of two steps. First, design a state feedback controller u = Fx to shape the loop transfer function as desired. For example, by breaking the feedback loop at the input side, the loop transfer function is given by T(s) = F(sI − A)⁻¹B. Second, design a Luenberger observer to estimate the state x by x̂ and implement the feedback control u = Fx̂. The error E between the target loop transfer function and the realized one is given by E(s) = M(s)[I + M(s)]⁻¹(I + F(sI − A)⁻¹B), see [43, Lemma 2.2.1], where M(s) = F(sI − A + HC)⁻¹B. It is also shown that, for all 0 ≤ ω < ∞, E(jω) = 0 if and only if M(jω) = 0. The observer gain can be designed to make E(s) = 0 exactly, or to depend on a small positive parameter μ, i.e., H = H(μ), such that the loop transfer function under output feedback control approaches that under state feedback control asymptotically as μ tends to zero; i.e., to make E(s) tend to zero asymptotically.

The observer design for asymptotic LTR can be achieved by pole placement or by optimization-based methods. Pole placement is discussed in the previous section. In optimization-based algorithms the objective is to find a gain H(μ) that asymptotically minimizes either the H_2 or the H_∞ norm of M(s). In other words, let M(s, μ) denote the matrix M(s) with H replaced by the designed H(μ); then it can be shown, [43, Theorem 4.4.2], that ||M(s, μ)|| → inf ||M(s)|| as μ tends to zero. A historical survey, as well as a clear explanation of this approach, can be found in [43, Section 4.4]. Basically, the idea is to solve a standard H_2 or H_∞ control problem for the following auxiliary system

ẋ = Aᵀx + Cᵀu + Fᵀw    (5.4)
y = x    (5.5)
z = Bᵀx    (5.6)

The standard H_2 or H_∞ control problem consists of determining the control gain Hᵀ that minimizes the H_2 or H_∞ norm of the transfer function between the disturbance input w and the controlled output z, over the set of all possible gains, while rendering the matrix (A − HC) asymptotically stable. This transfer function is equal to Mᵀ(s). The infimum of its H_2 or H_∞ norm over all possible gains is equal to zero for a minimum-phase left-invertible system for any loop gain F.
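As a numerical sketch of the H_2 optimization just described (an illustration only, not part of the text: it assumes SciPy is available and takes the plant to be a single chain of two integrators), one can solve the algebraic Riccati equation given below as (5.7), form the gain H_2(μ) = P_2(μ)Cᵀ/μ², and observe that, with the identification ε = √μ made here for this second-order example, the gain entries behave like a_1/ε and a_2/ε², i.e., they asymptotically match the structure (2.10).

```python
# A numerical sketch (illustration only; assumes SciPy; plant = one chain of two
# integrators) of the H2 optimization-based observer gain.  It solves the ARE
#     A P + P A^T - (1/mu^2) P C^T C P + B B^T = 0
# (given below as (5.7)), forms H2(mu) = P C^T / mu^2, and checks that, with the
# identification eps = sqrt(mu) used here, the entries behave like a1/eps and
# a2/eps^2, i.e. the gain matches the structure (2.10).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

for mu in (1.0, 0.1, 0.01):
    # solve_continuous_are(a, b, q, r) solves a^T X + X a - X b r^{-1} b^T X + q = 0;
    # passing a = A^T and b = C^T turns it into the dual (filter-type) ARE above.
    P = solve_continuous_are(A.T, C.T, B @ B.T, mu**2 * np.eye(1))
    H = P @ C.T / mu**2                                     # H2(mu), a 2x1 column
    assert np.all(np.linalg.eigvals(A - H @ C).real < 0)    # A - H C is Hurwitz
    eps = np.sqrt(mu)
    print(f"mu = {mu:5.2f}  H = {H.ravel()}  "
          f"[H1*eps, H2*eps^2] = [{H[0, 0] * eps:.3f}, {H[1, 0] * eps**2:.3f}]")
    # the scaled pair stays near [sqrt(2), 1] as mu decreases
```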
It is worth noting that these LTR techniques work for general stabilizable and detectable systems; i.e, not necessarily minimum-phase nor left-invertible, but “M(s, a)” is only guaranteed to converge to some finite value, except for some cases where F satisfies certain conditions. It is shown in [43, Section 4.4.1] and references therein that the H2 optimization- based technique yields, for a minimum-phase left-invertible system, a standard alge- braic Riccati equation of the form 1 AP + PAT — —2PCTCP + BBT = 0 (5.7) u The observer gain is H2(u) = (1 / #2)P2 (,u)CT where P2(,u) is the unique positive definite solution of (5.7) that makes the matrix [A — H2(u)C] asymptotically stable. The H00 optimization yields an H00 algebraic Riccati equation of the form AP + PAT — E12—PCTCP + $§PFTFP + BBT + [121 = o (5.8) The observer gain is Hoo(u) = (1 / u2)Poo(u)CT where P0001) is the unique positive definite solution of (5.8) that makes the matrix (A — Hoo(,u)C') asymptotically stable. According to [56], this solution exists for an appropriately large 7 and sufficiently small a (when the system (A, B, C) is stabilizable and detectable). Let 7* denote the infinimum of ||M(s)||oo over all possible gains. Then, equation (5.8) is actually solvable for all 7 > 7* and every 0 < u < u* where p* depends on 7. More- over, it can be shown, see [56], that ”M(s, 7)||oo < 7 for 7 > 7* (in our case 7* = 0). 140 The disturbance attenuation algorithm for observer design is detailed in [42], where it is applied to a minimum-phase left-invertible linear system. It is based on a parameterized H00 algebraic Riccati equation for a dual system. In our case it can be applied to the system :i: = A1: + 860(1):, C, u) + B6(a:, is, z, u) (5.9) y = C1: (5-10) z = a: (5.11) considering 450 as the control input, 6(.) as the disturbance input, and 2 as the controlled output. This methods yields, after some scaling, an H00 Algebraic Riccati Equation similar to (5.8). Now, let us go back to our problem of designing the gain H and apply the optimization-based techniques suggested by the LTR theory. The observer design problem formulated in (5.1), can fit into a Loop Transfer Recovery scheme if we consider d>(.) to be the control input and F = I to be the controller gain. Thus, in this case we have M (s) = (31 — A + H C)_1B. The problem is solvable since (A, B) and (A, C) are controllable and observable pairs, respectively (left invertibility and minimum-phase are implied by the structure of (A, B, C )) The structure of the stabilizing solution of a general algebraic Riccati equation as well as the eigenstructure of the observer matrix have been investigated in [47]. Here, we apply these results to our case of interest; namely, when the triplet (A, B, C) rep- resents a set of chains of integrators and F = I . In Appendix B we establish similar results for the H00 algebraic Riccati equation (5.8). Basically the same structure in (B.18)—(B.19), obtained in Appendix B, applies to both types of equations. The 141 observer gain is given by H = JZPOOCT = block diag[H1, - - - , Hp]. It can be shown u that, for i = 1, - - ~ ,p, we have [ (az,+0(.,))/., . of e- 62 (2+é( 2))/ z (5.12) _ (at, + weave? , To apply the analysis of Chapter 2 we have to transform the closed-loop system into a standard singularly perturbed form similar to (2.11)—(2.14) by scaling the estimation error. 
The scaling turns out to be :i: = $-S(€)77, where S = diag[S'1,---,SK], S,- = diag[e:i _ 1,---,e,-, 1] (5.13) This scaling, when applied to the observer (2.9) with the gain H just calculated, yields S_1(A — HC)S = 5‘1r + s—1A.R(s) (5.14) where 5 = diag[6117~1, ' ' ' , érpI'rp] (5.15) 3(5) = diagloklllrlw“a0(€p)1rp) 142 _azl 1 0 -a22 0 1 0 F = block diag[l‘1,-~,I‘p], 11,: —ai~,,-—1 0 1 b —a;.z. O-TZ'XTZ- '1 0 0 1 O 0 0 A = block diag[A1,---,Ap], Ai= 1 0 0 _1 O‘Tz'XTi and I‘ is nonsingular. This result yields a multiple time-scale singularly perturbed closed loop system similar to (2.11)—(2.14) with the estimation error equation given by 57'] 2 Fr) + 88601:, 2, 6,D(8)17) + A.R(£)17 (5.16) In Section 5.4 we show how the results of Chapter 2 can be extended to the closed- loop system formed of (2.11)—(2.13) and (5.16). 143 5.2.3 Lyapunov Equation-Based Algorithm In [17] Gauthier, Hammouri, and Othman presented a class of single-input—single- output nonlinear systems transformable into the triangular form r r ' 7 $2 91(331) $3 9261,1132) j: = E + E u = F(x) + G'(:1:)u (5.17) am 9n_1($1»°“a$n—1) _ WE) . _ 972(17) . To implement an output feedback control scheme, the following observer was sug- gested :i: = 17(5) + 6(5):; + so‘olcT(y — 05:) (5.19) where the gain 500 is the solution of the following Lyapunov equation (A + -21—6I)Ts00 + SOO(A + 2161) — CTC = 0 (5.20) and A is an n x n matrix representing a chain of integrators. This solution exists for sufficiently small 6. Through a Lyapunov analysis coupled with global Lipschitz conditions, global exponential stability of the estimation error is established. These results were extended to the single-input—multi-output case in [5]. The system (2.1)-(2.4), with the z dynamics dropped, becomes a special case of (5.17)—(5.19) when 9i(-) = 0 for i = l,---,n — 1. Equation (5.20) is written for a single chain of integrators. Let us examine its solution for a multiple chain of 144 integrators. Multiply (5.20) by 80—01 from the left and from the right to obtain _ _ _ _ 1 _ A5001 + soolAT — SOOICTCSOOI + 25 1 = 0 (5.21) It can be shown that the positive definite solution of (5.21) is a block—diagonal matrix and is given by 5301(5) = block diag[SO—011(e),o--,Sg01,p(e)] sgofne) = [(Sgo{,>gj1r,xr,=[(s;0{,(1))g,g—$_—,1 (5.22) Thus, the structure of the observer gain 30—01 CT is exactly the structure (2.10). Consequently, if the scaling of Section 2.5.1 is used for the estimation error we obtain a two time-scale singularly perturbed system similar to (2.11)—(2.14). Thus, if global boundedness of the control law is implemented, we obtain the results of Chapter 2. It is noteworthy that in [17] all nonlinearities are required to be known which undermines the robustness of the observer. However in the case where g,(.) = 0 for i = 1, - - - , n — 1, the results of Chapter 2 show that imperfect knowledge of the two remaining nonlinearities gn (cc) and (p(x) can be allowed. 5.3 More Separation Results In Chapter 2 we decompose the closed-loop system into a reduced model correspond- ing to the closed-loop system under state feedback controller and a boundary-layer model corresponding to the Hurwitz matrix A0 of the error dynamics. Then, we re- duce the parameter c to make the observer fast enough such that it brings the state estimate close enough to its real value in short time and restores the stabilizing pow- ers of the feedback controller. 
This is carried out using two Lyapunov functions: one, V(x, z, ϑ), for the reduced model and another, W(η) = ηᵀP_0η, for the boundary-layer model. It establishes boundedness of trajectories by proving that trajectories enter, in a short time, the positively invariant set Λ = {V(x, z, ϑ) ≤ c} × {W(η) ≤ ρε²}, where c and ρ are positive constants. We notice that the observer gain of Chapter 2 depends only on one parameter ε in all of the p channels, which results in a closed-loop system with a two-time-scale structure. Alternatively, one may have different parameters in the different channels, as we have seen. This eventually, after scaling, results in a closed-loop system with a multiple-time-scale structure.

For the case where the observer gain is designed using one of the above-mentioned algorithms, we can apply a multiple-time-scale analysis to (5.16) and show that we have p decoupled boundary-layer models. The i-th fast subsystem is an exponentially stable linear time-invariant system with the Hurwitz matrix Γ_i. Thus we can repeat the same steps of the previous analysis with the Lyapunov functions V(x, z, ϑ), W_1(η_1), ..., W_p(η_p), where W_i(η_i) = η_iᵀP_{0i}η_i, with P_{0i} being the unique positive definite solution of the Lyapunov equation PΓ_i + Γ_iᵀP = −I. In this case the positively invariant set is Λ = {V(x, z, ϑ) ≤ c} × {W_1(η_1) ≤ ρ_1ε_1²} × ... × {W_p(η_p) ≤ ρ_pε_p²}. All of the parameters ε_i should be simultaneously reduced to make the different boundary-layer models fast enough to bring the state estimate close to its true value in a short time. Ultimate boundedness and convergence of trajectories, as well as asymptotic stability analysis, can be performed as in Chapter 2 using the above-given Lyapunov functions. The foregoing analysis and discussion are summarized in the following theorem.

Theorem 5.1 Let the observer gain be designed using one of the above-given algorithms, leading to the gain structure (5.12). Suppose that:

• The vector fields of the system (2.1)-(2.4), the dynamic controller (2.5)-(2.6), and the observer (2.9) are locally Lipschitz in their arguments over the domain of interest and vanish at the origin (x, z, ϑ) = X = 0.

• The functions Γ(ϑ, x, ζ), γ(ϑ, x, ζ), and φ_0(x, ζ, u) are globally bounded in x.

• The origin X = 0 is an exponentially stable equilibrium point of the closed-loop system under state feedback, with R as its region of attraction.

Let the initial state X(0) be in a compact subset S of R and the initial observer state x̂(0) be in a compact set Q of Rʳ. Then, for sufficiently small ε_i, i = 1, ..., p, the origin (X, x̂) = (0, 0) is an exponentially stable equilibrium point of the closed-loop system under output feedback. Moreover, for any compact subsets S ⊂ R and Q ⊂ Rʳ, the set S × Q is a subset of the region of attraction. Furthermore, as the parameters ε_i tend to zero, the trajectory X(t, ε) under output feedback approaches the trajectory X_r(t) under state feedback control, uniformly in t, for t ≥ 0.

Remark 5.1 For simplicity we stated the separation results only for the case where the origin X = 0 is exponentially stable. Similar results can be proved for the case of asymptotic stability along the lines of Chapter 2, but we have to require additional conditions on the local growth of the modeling errors.

Remark 5.2 Separation results similar to those of Theorem 5.1 and Remark 5.1 can be stated for the case of stability with respect to a compact, positively invariant set along the lines of Chapter 3.
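As a quick numerical cross-check of the claim in Section 5.2.3 that the Lyapunov equation-based gain S_∞⁻¹Cᵀ has exactly the ε-scaled structure (2.10), the following sketch (an illustration only; it assumes SciPy and a single chain of three integrators) solves (5.20) for several values of ε and prints the gain entries multiplied by εⁱ; the scaled entries remain constant as ε is reduced.

```python
# A numerical cross-check (assumes SciPy; single chain of n = 3 integrators) that
# the Lyapunov equation-based gain S_inf^{-1} C^T of Section 5.2.3 has the
# 1/eps, 1/eps^2, ..., 1/eps^n structure of (2.10): solve (5.20), i.e.
#   (A + I/(2 eps))^T S + S (A + I/(2 eps)) = C^T C,
# and print the gain entries multiplied by eps^i.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 3
A = np.diag(np.ones(n - 1), k=1)            # chain of integrators
C = np.zeros((1, n)); C[0, 0] = 1.0

for eps in (1.0, 0.1, 0.01):
    Aeps = A + np.eye(n) / (2.0 * eps)
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q;
    # passing a = Aeps^T gives Aeps^T S + S Aeps = C^T C, which is (5.20).
    S = solve_continuous_lyapunov(Aeps.T, C.T @ C)
    K = np.linalg.solve(S, C.T).ravel()      # observer gain S^{-1} C^T
    scaled = [round(K[i] * eps**(i + 1), 6) for i in range(n)]
    print(f"eps = {eps:5.2f}  K * eps^i = {scaled}")
    # the scaled entries stay constant (binomial coefficients 3, 3, 1 for n = 3)
```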
5.4 Conclusion

In this chapter we discussed the problem of output feedback control for a wide class of nonlinear systems using high-gain observers designed by different algorithms. These algorithms involve pole placement, algebraic Riccati equation-based techniques, or a Lyapunov equation-based technique. We showed that all these designs yield observer gains that asymptotically have the structure of the observer gain used in Chapters 2 and 3. Consequently, if the idea of global boundedness of the control law is implemented, the separation results of Chapters 2 and 3 apply to the cases where these alternative techniques are used to design the observer gain.

CHAPTER 6

A Smooth Converse Lyapunov Theorem for Robust Stability

6.1 Introduction

In [34] Lin, Sontag, and Wang present converse Lyapunov function theorems for stability with respect to sets. Their work allows arbitrary bounded time-varying parameters in the system description, results in smooth Lyapunov functions, applies to stability with respect to not necessarily compact sets, and deals with global asymptotic stability. In addition to the global results of [34], we need a converse Lyapunov theorem that yields a smooth function defined on a possibly finite open set and that approaches infinity at the boundary of this set. Thus our objective here is to modify the results of [34] to make them suitable for our purposes. The tools used to modify the results of [34] are inspired by Kurzweil [32]. They consist of replacing the distance (with respect to a set in this case) by a continuous positive definite function that has all the important properties of the distance (except, possibly, the triangle inequality) and that approaches infinity at the boundary of the open set of interest.

In Section 6.2 we give basic definitions and the main results. In Section 6.3 we prove the converse Lyapunov theorem for uniform asymptotic stability. The proof of the converse Lyapunov theorem closely follows that of [34] except in places where some special discussions have to be carried out. We will point out these matters whenever they arise.

6.2 Definitions and the Main Results

Consider the system (3.1) (we repeat it here for easy reference)

ẋ(t) = f(x(t), d(t))    (6.1)

Let M_D be as in Chapter 3. Let R be an open subset of Rⁿ. The system is said to be forward complete in R if T⁺_{x_0,d} = +∞ for all x_0 ∈ R and all d ∈ M_D. It is backward complete in R if T⁻_{x_0,d} = −∞ for all x_0 ∈ R and all d ∈ M_D, and it is complete in R if it is both forward and backward complete in R. In this chapter we use the notation and definitions given in Section 3.2.

6.2.1 Strong Stability

Let A be a compact subset of the open connected set R. Define the function ω_A : Rⁿ → R_{≥0} ∪ {+∞} by

ω_A(ξ) = max( |ξ|_A , 1/|ξ|_Γ − 2/|Γ|_A )  if ξ ∈ R,    ω_A(ξ) = +∞  if ξ ∉ R    (6.2)

where Γ is the complement of R in Rⁿ, and |Γ|_A = inf_{(η,μ) ∈ A×Γ} ||η − μ||.

Remark 6.1 A similar function was first defined in Kurzweil [32] for the case where A reduces to an equilibrium point.

We can make some useful observations concerning ω_A(ξ); they are summarized in the following lemma:

Lemma 6.1 Let A be a compact set contained in the open connected set R ⊆ Rⁿ. The function ω_A(ξ) restricted to the set R satisfies the following properties:

1. It is positive definite with respect to A; i.e., ω_A(ξ) > 0 for all ξ ∈ R\A and ω_A(ξ) = 0 for all ξ ∈ A. Moreover, ω_A(ξ) = |ξ|_A if R = Rⁿ;

2. It is continuous in R;

3. It approaches infinity as ξ approaches the boundary of R;
4. The set {ξ ∈ R : r_1 ≤ ω_A(ξ) ≤ r_2} for r_1, r_2 ≥ 0 is compact in R;

5. It is locally Lipschitz.

Proof. See Section 6.4.1. ◁

Remark 6.2 The function ω_A(·) shares some properties with |·|_A, namely, positive definiteness with respect to A, continuity, and the local Lipschitz property. This fact makes adapting many results of [34] to our case a straightforward process. Therefore, we omit the proofs where replacing |·|_A by ω_A(·) does not pose any challenge.

The following is a notion of stability in an open set inspired by Definition 1 of [32] and Definition 2.2 of [34]:

Definition 6.1 Let R ⊆ Rⁿ be an open connected set that contains A. We say that the system (6.1) is Strongly Stable with respect to the compact positively invariant set A if the following two properties hold:

1. Uniform Stability in R: there exists a class K_∞ function δ(·) such that for any ε ≥ 0 and every d ∈ M_D, we have

ω_A(x(t, x_0; d)) ≤ ε, whenever ω_A(x_0) ≤ δ(ε) and t ≥ 0    (6.3)

2. Uniform Attraction in R: for any r, ε > 0, there is T = T(ε, r) > 0, such that for every d ∈ M_D,

ω_A(x(t, x_0; d)) < ε whenever ω_A(x_0) < r and t ≥ T.    (6.4)

Remark 6.3 The definition of Strong Stability with respect to a set is an adaptation of the notion of global UAS [34, Definition 2.2] to a possibly finite open set R (ω_A(·) = |·|_A if R = Rⁿ). This notion is a generalization to a compact set of the definition of strong stability given in [32].

The following definition generalizes [34, Definition 2.6] to the case of strong stability:

Definition 6.2 A Lyapunov function for the system (6.1) in the open set R with respect to a compact, positively invariant set A ⊂ R is a function V : R → R_{≥0} such that V is smooth on R\A and satisfies the following properties:

1. There exist two class K_∞ functions α_1 and α_2 such that for any ξ ∈ R,

α_1(ω_A(ξ)) ≤ V(ξ) ≤ α_2(ω_A(ξ))    (6.5)

2. There exists a continuous, positive definite function α_3 such that for any ξ ∈ R\A, and any d ∈ M_D,

L_{f_d}V(ξ) ≤ −α_3(ω_A(ξ))    (6.6)

A smooth Lyapunov function is one which is smooth on all of R. It follows from the previous definition that V is continuous in all of R, zero if and only if ξ ∈ A, and onto.

The first main result of this chapter is:

Theorem 6.1 Let A ⊂ Rⁿ be a compact, positively invariant set for the system (6.1). Let R be an open and connected set that contains A. Assume that the system (6.1) is strongly stable in R with respect to A. Then, there exists a smooth Lyapunov function V in R with respect to A.

Remark 6.4 It is possible to state Theorem 6.1 as a necessary and sufficient condition. Moreover, a result similar to Proposition 2.5 of [34] for the case of strong stability in an estimate of the region of attraction is also correct. The proofs of such results follow from the corresponding ones in [34] by replacing |·|_A with ω_A(·).

It is useful to restate the uniform attraction property as in the following lemma.

Lemma 6.2 The uniform attraction property defined in Definition 6.1 is equivalent to the following: There exists a family of mappings {T_r}_{r>0} such that

• for each fixed r > 0, T_r : R_{>0} → R_{>0} is onto, continuous, and strictly decreasing;

• for each fixed ε > 0, T_r(ε) is strictly increasing in r and lim_{r→∞} T_r(ε) = ∞;

and, for each d ∈ M_D, we have

ω_A(x(t, x_0; d)) < ε whenever ω_A(x_0) < r and t ≥ T_r(ε)

Proof. Straightforward from Lemma 3.1 in [34]. ◁

6.2.2 Uniform Asymptotic Stability

Let M_D^C be the set of all continuously differentiable functions from R to D whose derivative d'(t) belongs to a compact set D_1.
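Before relating strong stability to uniform asymptotic stability, a small numerical illustration of ω_A may help fix ideas. The sketch below is illustrative only: it takes A = {0} and R an open ball of radius r_0 (so that |ξ|_Γ = r_0 − ||ξ|| and |Γ|_A = r_0), with the constant 2/|Γ|_A read from (6.2). It exhibits the behavior asserted in Lemma 6.1: ω_A vanishes exactly on A and grows without bound as ξ approaches the boundary of R.

```python
# An illustrative evaluation of the function omega_A of (6.2), assuming
# A = {0} in R^n and R = {x : ||x|| < r0}.  For this choice
#   |xi|_A = ||xi||,  |xi|_Gamma = r0 - ||xi||  (Gamma = complement of R),
#   |Gamma|_A = r0,
# so omega_A(xi) = max(||xi||, 1/(r0 - ||xi||) - 2/r0) inside R, +inf outside.
import numpy as np

r0 = 1.0

def omega_A(xi):
    nx = np.linalg.norm(xi)
    if nx >= r0:                       # xi outside (or on the boundary of) R
        return np.inf
    return max(nx, 1.0 / (r0 - nx) - 2.0 / r0)

for rho in (0.0, 0.3, 0.6, 0.9, 0.99, 0.999):
    xi = np.array([rho, 0.0])
    print(f"||xi|| = {rho:6.3f}   omega_A(xi) = {omega_A(xi):10.4f}")
# omega_A is 0 only at xi = 0 (property 1) and grows without bound as xi
# approaches the boundary of R (property 3).
```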
153 The following lemma relates the notion of strong stability to that of uniform asymptotic stability given in Chapter 3: Theorem 6.2 Let d E M’D. Let the system (6.1) be Uniformly Asymptotically Stable (UAS) with respect to the compact positively invariant set A. Let R be an open, connected subset of the region of attraction that contains A. Then, the system {6.1) is Strongly Stable in R with respect to A. Proof. see Section 6.4.2.<1 The second main result of this chapter is: Corollary 6.1 Let d E M'D. Let A C R" be a compact, positively invariant set for the system {6.1). Assume that the system (6.1) is UAS with respect to A. Let R be an open, connected subset of the region of attraction that contains A. Then, there exists a smooth Lyapunov function V in R with respect to A. Proof. if follows directly from Theorems 6.1 and 6.2.4 Remark 6.5 The open set R can be any time-independent estimate of the region of attraction of (6.1) with respect to A. This region of attraction is not necessarily time- independent nor positively invariant unless the system (6.1) is autonomous (see [4, Chapter V,Proposition 415]). Thus, we can not take R to be the region of attraction as in [32, Theorem 12] unless the system (6.1) is autonomous. 6.3 Proof of Theorem 6.1 The proof is divided into three main parts. In the first part we assume that the system (6.1) is complete and we construct a Lyapunov function which is smooth in R/ A. In the second part we drOp the completeness assumption and we construct an 154 auxiliary complete system for which a Lyapunov function is available from the first part. Then, we prove that this function is also a Lyapunov function for the system (6.1). Finally, we smooth the resulting Lyapunov function in R. 6.3.1 Case of a Complete System Theorem 6.3 Let the assumptions of Theorem 6.1 hold. Moreover, assume that the system (6.1) is complete in R. Then, there exists a Lyapunov function V in R with respect to A which is smooth in R/ A. Proof. First, we construct a useful locally Lipschitz function g({) and use it to con- struct a not necessarily smooth Lyapunov function U (5) Then, we use a smoothing result given in [34, Theorem B.1] to show the existence of a smooth Lyapunov func- tion V({). The system (6.1) is strongly stable in R with respect to A. Let 6 and Tr be as in Definition 6.1 and Lemma 6.2. Lemma 6.3 The function g : R ——> R, defined by 9(5) inf {wA($(t. 6; d))} (6-7) = t S 0, d E MD is well defined, continuous everywhere in R, locally Lipschitz on R/A, and satisfies 9($(t.€;d)) S 9(6). W > 0. W E Mp (6-8) 5654(6)) S 9(6) S wA(€) (6-9) for allE E R. Proof. Some additional steps should be added to the proof of [34]. One is the proof that 155 g(.) is well defined on R, and the other is the proof of the local Lipschitz property of g(..) The function g(C) is well defined because the trajectory x(t,€;d), t S 0 starting from C E R exists for any d 6 MD. Moreover, it stays in R for at least a finite interval [T8, 0] in the negative time. Since the function w A is equal infinity on the boundary and outside of R, then the infimum function always has a finite value for every C in R. The proof of (6.8) and (6.9) is a straightforward extension of the corresponding proof in [34]. Let us characterize g(.) in the set Kg, r dgf {C E R : e _<_ “54(5) < r} defined for any 0 < e < r (KEJ is not necessarily compact). 
Note that, for all e and r with 0 < e < r, there exists qg, r g 0 such that: CE K5,)”, d E MD, and t < (15,1‘ => wA(x(t,C;d)) 2 r Therefore, for any C E K 5, r, we have 9(5) = inf{w,4(rr(t,€;d)) = t 6 [$5,130], d E MD, wA($(t,€;d)) S 7‘} (6-10) Let us prove that g(.) is locally Lipschitz on R/A. Fix any CO 6 R/A, and let _ . IéoLA |€0lF s—m1n{ 2 , 2 Let B (C0, 3) denote the closed ball of radius 3 centered at {0. Then B (CO, 3) g K031", for some 0 < o < r. Pick a constant C as in Proposition 5.5 of [34] with respect to the closed ball B(CO, s) and T = ng', r]- Pick any C, 77 E B (CO, 3). Then, from (6.10) (a property of the inf function), for any 156 6 > 0 there exist some d7), 6 E MD and some t1}, 6 E [gag r, 0] such that 9(7)) 2 wA(I(tn,e, 77; 0117, 6)) - 6 (5-11) Moreover, from (6.10), we have g(C) S WA($(tn,c,C;dn,e)) (6-12) From (6.11) and (6.12), we conclude 9(C) — 9(7)) S wA($(tn, e, C; d7), 6)) - wA(-T(tn, e, v; dn, e)) + f (6-13) Since B(CO, s) is compact, then, by Proposition 5.1 of [34] the points x(t”, 5, C; d7), 6) and x(tn,€,n; (177,5) belong to a compact set E in R"; moreover, by (6.10), they belong to {C : wA(C) S r}. Consider the set E = Efl{wA(§) g r}. We know from Lemma 6.1 that {wA(C) g r} is compact, then the set E is compact and is contained in R. Using the Lipschitz property of w A in E , proved in Lemma 6.1, yields 9(() — 907) S Lll$(tn, e, C; do, 6) — x(t", 6,77; 6177,15)“ + ‘5 (614) where L is a Lipschitz constant of w A in E. By Proposition 5.5 in [34], applied in B (C0, 3), we have 9(() - 9(77) S CLIIC - 71H + 6 (6-15) Note that (6.15) holds for all e > 0, then it follows that 9(C) - 9(9) S CLIIC — 77” 157 By symmetry we have g(n) — g(C) g CL||C — nll which proves that g(.) is locally Lipschitz in R/A. The Lipschitz constant CL depends only on B (CO, 3) (although L depends on E, E depends only on B(CO, 3) if we fix the time T). Let us show that g(.) is continuous everywhere in R. From the Lipschitz property we conclude that g(.) is continuous in R/A. Since, for C E A, wA(C) = 9(5) = 0, then, using (6.9), we have I907) - 9(€)| S [454(9) - wA(€)| for C E A and 77 E R. Since ”A“ is continuous in R, the above inequality shows that g(.) is continuous in R. 0 —> R> 0 is any strictly increasing, smooth function that satisfies: 0 there are two constants 0 < c1 < c2 < 00 such that k(t) 6 [c1, c2] for all t Z 0; 0 there is a bounded, positive, decreasing, and continuous function r(.) such that k’(t) 2 r(t) for all t 2 0 (For instance, c :ct t is one example of k(t)). 158 Moreover the function U is continuous everywhere on R, locally Lipschitz on R/A, and satisfies 015(wA(€)) SU(€) SCQwA(€) (6-17) for all C E R. Finally, the function U(C) decreases along trajectories of the system (6.1); i.e., L de (6) S -(3z(wA(C)), Vd e MD (6.18) for all C E R/A, where 51 is a continuous positive definite function. Proof Some additional steps should be added to the proof of [34]. One is the proof that U (.) is well defined on R, and the other is the proof of the local Lipschitz property of U () The function U (C) is well defined for C E R because the value g(x(t,C; d)) exists for all t Z 0 and all d E MD given that 0 S 9(1‘(t.€; d)) S 9(6). V6 6 R, W > 0 and g(C) exists for C E R. The proof of (6.17) is a straightforward extension of the corresponding proof in [34]. Let us characterize U(.) in any set of the form {C E R : 0 < wA(C) < r,r > 0}. 
Note that, for any 0 < wA(C) < r, U(E) = SUP {9($(t,€;d))k(t)} 0 S t S t€;d E MD 159 where t5 = Tr (%5(WA(§)))- Let us prove the local Lipschitz property of U () on R/A. For any compact set K g R/A, let def t : maxt <00 K CEKé Finiteness follows from the above expression of U (C), since K Q {C E R : 0 < wA(C) < r} for some r > 0, and from continuity of T7-(.),6(.), and wA(.). For C0 9! A, pick a compact neighborhood K0 of C0 in R such that KOOA = (0. By (6.17) we have (the continuous function wA() reaches its minimum on the compact set K0) U(5) Z 7‘0. V5 6 K0 (6.19) for some constant r0 > 0. Let r1 = 2%; and let K1= Kofl{v = “9-60” < L] “ 2CL where C is a constant such that (see Proposition 5.5 of [34]) llx(t.€; d) — x(t. n;d)ll _<_ CHE — nll. V5.17 6 K0. 0 .<_. t _<_ tKO. d 6 M1) (620) and L is to be determined later on. The set K1 is a compact neighborhood of C0 in ‘R/ A. In the following we will show that there exists some constant L such that for any C,17 6 K1, we have |U(€) - U(77)| S l3H6 - 77“ (6-21) 160 We have to find a compact set in R/ A on which we can apply the Lipschitz prop- erty of g(.) (this set will be K3). We know that (it is a property of the supremum), for any C 6 K1 and any 6 E (0,r0/2), there exist tC,e E [0,tK0] and dC,e E MD, such that (1(5) 3 “1505,95; 45,999., a + . Furthermore, using (6.9) yields mo 3 5.4.4995, ..4; (15,.» + . Thus, using (6.19), we obtain wA(x(t€,€,C;d€,€)) _>_ Eg- —é Since 6 E (0, r0/2) WA($(t€,€,€;d£,€)) 2 T1 (6'22) From Proposition 5.5 in [34] we know that there exists a compact set K 2 such that x(t,C;d) 6 K2, VC 6 K1, Vt E [0,tK1], and Vd E MD The set K 2 is contained in R, given the definition of strong stability (especially the definition of stability in R). Let L be the Lipschitz constant of wA(.) in K2. Then, for all 77 6 K1, we have (we apply the Lipschitz property of ”A” in K 2) WA(IE(L€,€, Tl; (16’6” Z WA($(t€,€,§;d§-,€)) — Ll|$(t€,€,€; (15,5) _ $(t§,€.77;d§,g)ll 161 Therefore, using (6.20) and (6.22), for all 77 6 K1, we have (we apply the Lipschitz property of x(t,C; d) in 1(1)) WA($(t§,€.TI; (15,.» 2 r1 — LCllé - nll Thus, from the expression of K1, we have wA(x(t€,€,n;d£,€)) 2 ’31 (6.23) The compact set K3 dgf K2 fl{C E R : wA(C) 2 g} is a subset of R/A. The inequalities (6.22) and (6.23) show that x(t€,£,C;d€,€) and x(t€,€,n; (15.6) belong to K3. Then, we can apply the Lipschitz property of g(.) on the compact K3 and obtain lg($(t£,5:€i dé, 5)) _ g($(t€,€ani d€,g))l S C1|l$(tg,€.€;dg,e) - $(t§,€.fl;d§,e)ll (624) for some C1 > 0. Therefore, using the bounds on k(t) and (6.24), we have I/\ U(E) — U(77) 9($(tg, 5. 6; Gig, 5))k(tg, 5) + 6 — 9617055077; (15, €))k(t€, 5) S k(t€,€)[g($(t€’€,€;616,6» _ g($(t€,ei 7]; 616,5)” + 6 |/\ 02|g($(t€,6,€;d€,6)) _ 9($(t€,5.773d§,5))l + 6 /\ _ 016114-66, ..6; 46,.) — x(tg, ..n; d5, .)1 + . (6.25) Using Proposition 5.5 of [34], inequality (6.25) yields U(E) - U(n) S I3|I€ - all + 6 (6-26) 162 for some constant L = C1c2 > 0 that only depends on the compact set K1. Since (6.26) holds for any 6 E (0, 70/2), ~ U(E) - 11(7)) S Lllé — rill. V6.77 E K1 Thus, by symmetry, we can prove (6.21). Let us prove continuity of U () everywhere in R. From the Lipschitz property of U () we conclude that U () is continuous in R/A. For all C E A we have U (C) : 0 and wA(C) 2: 0. Then, for all 77 6 R, we have |U(€) - U(77)| = U(77) S C2lwA(€) - w,4(77)| Since wA(.) is continuous in R, so is U(.). The proof of (6.18) is a straightforward extension of the corresponding proof in [34]. 
Finally, let us smooth the function U (C) in R/A. By [34, Theorem B.1], there exists a C00 function V : R/A ——> R2 0 such that for all C E R/A, we have {_1_(52 “no — U(on < 2 and [\Dt—J L mm s —-c‘r(w,4(€)). Vd 6 Mo Extend V to R by letting V = 0 on A and denote the extension by V. Note that V is continuous on R and smooth on R/ A. Thus, V is the desired Lyapunov function with 01(3) 2 3375(3), 02(8) = 2323, and a3 = %6. <1 163 6.3.2 Case of a Forward Complete System Hereafter, we establish a Converse Lyapunov result without completeness of the system. In order to use the previous result, we modify the vector field of (6.1) to make it complete. Then we show that, if the system (6.1) is strongly stable, then the new system with the modified vector field is also strongly stable. Finally, by applying Theorem 6.3 we show the existence of a Lyapunov function for the new system. The same function will be the desired Lyapunov function. Let us define a new but complete vector field. Lemma 6.5 Let f : R" x D —+ R” be continuous, where D is a compact subset of Rm. Then there exists a smooth function a f : R" —+ R, with a f(x) 2 1 everywhere, such that ||f(x,d)|| S af(x) for all x E R” and all (1 E D. Proof. Let a(x) = maxd E D I] f (x, d)“. This function is continuous because of the continuity of f (.,d). Choose any smooth function a f such that a f(x) 2 1 + a(x) for all C E R". The function a f is the desired one.<1 For any given system 23 : x = f(x,d) (6.27) not necessarily complete, the system ‘5 3? 5‘: II S... 2b I it = (:2), d) (6.28) is complete since W s 1 for all x E R" and all d e D (see [16, Theorem 2.1, page 17]). 164 Assume that f is continuous and locally Lipschitz in x uniformly in (1, then f has these same two properties. Lemma 6.6 Assume that A is a compact subset of R”. Let R be an open, connected set that contains A. Suppose that the system 2 is strongly stable in R with respect to A. Then, system 2b is strongly stable in R with respect to A as well. Proof. straightforward from Lemma 7.2 of [34].<1 Now consider the system (6.1) and let 2b be the corresponding complete system. By Lemma 6.6, we know that the system 2b is strongly stable in R with respect to the set A. By applying Theorem 6.3 to the complete system 2b in R, there exists a Lyapunov function V for 2b such that |/\ V(é) S 92(wA(€)). V5 6 R Lde(§) < —-a3(wA(C)), V5 6 R/A, and Vd e D 0195405)) for some class lCoo functions a1, 02 and some continuous positive definite function 013. Since a f (C) 2 1 in R, it follows that Lde(€) S -a3(wA(C)), w: e n/A, and Vd e D Thus, we conclude that V is also a Lyapunov function for 2. <1 6.3.3 Smoothing of Lyapunov functions The Lyapunov function found previously is only smooth on R/A. Hereafter we smooth this function in the domain of interest R. To do this, we first construct an appropriate smooth function 6 then we prove that the function W = 60V is the 165 desired smooth Lyapunov function. This procedure is summarized in the following lemma and proposition: Lemma 6.7 Assume that V : R ——> R2 0 is C0, the restriction VlR/A is COO, VI A = 0, and VIR / A > 0. Then, there exists a class lCoo functions ,6 which is smooth on (0,00) such that 13(2)“) ——> 0 as t -—> 0+ for each i = 0,1,---; 6’ > 0, Vt > 0, and W = ,6 0 V is a C00 function over R. Proof. see [34, Lemma 4.3] we just need to restrict the domain of interest to R instead of Rn._ ICIA 2 0. Now, if wA(C) = 0 then |C|A = 0. Thus, C belongs to the closure of A, then, since A is closed, we have C E A. 
Next, if $\xi \in \mathcal{A}$, then $|\xi|_A = 0$ and $|\xi|_F \ge |F|_A$; i.e., $\frac{1}{|\xi|_F} - \frac{1}{|F|_A} \le 0$. Thus, $\omega_A(\xi) = |\xi|_A = 0$, and we conclude positive definiteness of $\omega_A(\cdot)$ with respect to $\mathcal{A}$. Finally, if $\mathcal{R} = \mathbb{R}^n$, then the term $\frac{1}{|\xi|_F} - \frac{1}{|F|_A}$ is zero and, since $|\xi|_A \ge 0$, we have $\omega_A(\xi) = |\xi|_A$.

2) Continuity in $\mathcal{R}$: It follows from the fact that the maximum of two continuous functions is continuous; see [6, Exercise 4, page 136].

3) Properness in $\mathcal{R}$: As $\xi$ approaches the boundary of $\mathcal{R}$ we have $|\xi|_F \to 0$. From the definition of $\omega_A(\cdot)$ we always have
$$\omega_A(\xi) \ge \frac{1}{|\xi|_F} - \frac{1}{|F|_A}$$
Therefore, since $\frac{1}{|F|_A}$ is finite ($\mathcal{A}$ is compact), $\omega_A(\xi) \to \infty$ as $\xi$ approaches the boundary of $\mathcal{R}$.

4) Compact level sets: Since $\omega_A(\cdot)$ is continuous in $\mathcal{R}$, the set $\{\xi \in \mathcal{R} : r_1 \le \omega_A(\xi) \le r_2\}$ is closed. Now we only need to show that this set is bounded. Note that
$$\{\xi \in \mathcal{R} : r_1 \le \omega_A(\xi) \le r_2\} = \{\xi \in \mathcal{R} : r_1 \le \omega_A(\xi)\} \cap \{\xi \in \mathcal{R} : \omega_A(\xi) \le r_2\}$$
Thus, it suffices to prove that the set $\Omega \stackrel{\mathrm{def}}{=} \{\xi \in \mathcal{R} : \omega_A(\xi) \le r_2\}$ is bounded. In case $\mathcal{R}$ is bounded, the set $\Omega$ is necessarily bounded. Now, consider the case where $\mathcal{R}$ is not bounded, and suppose that $\Omega$ is not bounded. Then, taking $\|\xi\| \to \infty$ with $\xi \in \Omega$, we have $|\xi|_A \to \infty$ (because $\mathcal{A}$ is bounded) and hence $\omega_A(\xi) \to \infty$, which is a contradiction because, in $\Omega$, we have $\omega_A(\xi) \le r_2$. The assumption that $\mathcal{A}$ is bounded is needed here; otherwise the set $\Omega$ would necessarily be unbounded.

5) Local Lipschitz property: Let $K$ be a compact subset of $\mathcal{R}\setminus\mathcal{A}$. Let us show that there exists a positive constant $k$ such that, for all $\xi_1, \xi_2 \in K$, we have
$$|\omega_A(\xi_1) - \omega_A(\xi_2)| \le k\,\|\xi_1 - \xi_2\| \qquad (6.29)$$
First, consider the case where $\omega_A(\xi_1) = |\xi_1|_A$ and $\omega_A(\xi_2) = |\xi_2|_A$. In this case we have
$$|\omega_A(\xi_1) - \omega_A(\xi_2)| = \big|\,|\xi_1|_A - |\xi_2|_A\,\big| \qquad (6.30)$$
Since $|\cdot|_A$ is locally Lipschitz, we can write
$$\big|\,|\xi_1|_A - |\xi_2|_A\,\big| \le \|\xi_1 - \xi_2\| \qquad (6.31)$$
Then, using (6.31), the inequality (6.29) is true. Second, consider without loss of generality that
$$\omega_A(\xi_1) = |\xi_1|_A \qquad (6.32)$$
$$\omega_A(\xi_2) = \frac{1}{|\xi_2|_F} - \frac{1}{|F|_A} \qquad (6.33)$$
Then, from the definition of $\omega_A(\cdot)$, we have
$$\omega_A(\xi_1) \ge \frac{1}{|\xi_1|_F} - \frac{1}{|F|_A} \qquad (6.34)$$
$$\omega_A(\xi_2) \ge |\xi_2|_A \qquad (6.35)$$
Using (6.32) and (6.35) yields
$$\omega_A(\xi_1) - \omega_A(\xi_2) \le |\xi_1|_A - |\xi_2|_A \qquad (6.36)$$
Then, using (6.31) in (6.36), we have
$$\omega_A(\xi_1) - \omega_A(\xi_2) \le \|\xi_1 - \xi_2\| \qquad (6.37)$$
Using (6.33) and (6.34) yields
$$\omega_A(\xi_2) - \omega_A(\xi_1) \le \frac{1}{|\xi_2|_F} - \frac{1}{|F|_A} - \frac{1}{|\xi_1|_F} + \frac{1}{|F|_A} \le \Big|\,\frac{1}{|\xi_2|_F} - \frac{1}{|\xi_1|_F}\,\Big| \qquad (6.38)$$
Now, if we can prove that, in $K$, there exists a positive constant $k_2$ such that
$$\Big|\,\frac{1}{|\xi_2|_F} - \frac{1}{|\xi_1|_F}\,\Big| \le k_2\,\|\xi_1 - \xi_2\| \qquad (6.39)$$
then, using (6.37), (6.38), and (6.39) in addition to (6.31), the local Lipschitz property of $\omega_A(\cdot)$ follows with $k = \max(1, k_2)$. Hence, it suffices to prove (6.39). Since $\mathcal{R}$ is an open set, $F$ is a closed set. Thus, by a reasoning similar to the one that led to (6.31), we have
$$\big|\,|\xi_1|_F - |\xi_2|_F\,\big| \le \|\xi_1 - \xi_2\| \qquad (6.40)$$
Since $K$ is compact, there exists a positive constant $k_2$ such that
$$\frac{1}{|\xi_1|_F\,|\xi_2|_F} \le k_2 \qquad (6.41)$$
Using (6.40) and (6.41) we can write
$$\Big|\,\frac{1}{|\xi_1|_F} - \frac{1}{|\xi_2|_F}\,\Big| \le \frac{\big|\,|\xi_2|_F - |\xi_1|_F\,\big|}{|\xi_1|_F\,|\xi_2|_F} \le k_2\,\|\xi_1 - \xi_2\| \qquad (6.42)$$
This concludes the proof of Lemma 6.1. ◁

6.4.2 Proof of Theorem 6.2

Assume that the system (6.1) is UAS with respect to $\mathcal{A}$ with $\mathcal{R}$ as in the theorem. Then, the system is forward complete in $\mathcal{R}$ and Definition 3.1 applies. Let us show strong stability of (6.1) with respect to $\mathcal{A}$. We divide the proof into two parts. First, we show uniform stability in $\mathcal{R}$. Then, we show uniform attraction in $\mathcal{R}$. The forthcoming proof is inspired by that of Theorem 12 of [32].
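Before following the two-part argument below, it may help to have the indicator function assembled in one place. The display below is a recap added for readability, not a new definition; it is the form implied by the inequalities used in parts 1)-5) above, with $F$ the closed set (the complement of $\mathcal{R}$) whose distance function $|\cdot|_F$ appears there:
$$\omega_A(\xi) = \max\Big\{\, |\xi|_A,\ \frac{1}{|\xi|_F} - \frac{1}{|F|_A} \,\Big\}, \qquad \xi \in \mathcal{R}.$$
In particular $\omega_A(\xi) \ge |\xi|_A$, $\omega_A$ vanishes exactly on $\mathcal{A}$, and $\omega_A(\xi)$ grows without bound both as $\xi$ approaches the boundary of $\mathcal{R}$ and, when $\mathcal{R}$ is unbounded, as $\|\xi\| \to \infty$ (since $\mathcal{A}$ is bounded); these are the properties verified above and used throughout the proof that follows.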
Since $\mathcal{R}$ is open, there exists an open ball $B_0 = \{\xi : |\xi|_A < r_0\}$, with $r_0 > 0$, around $\mathcal{A}$ such that $B_0$ is the largest such ball contained in $\mathcal{R}$ (i.e., $B_0$ is the first ball whose boundary intersects the boundary of $\mathcal{R}$). It is important to notice that $r_0$ may be finite. Let us define the function
$$\phi(s) = \sup_{|\xi|_A \le s} \omega_A(\xi), \quad \text{for } s \in [0, r_0)$$
The function $\phi(s)$ is continuous, positive definite, increasing (not necessarily strictly), and goes to infinity as $s \to r_0$. Moreover, we have $\omega_A(\xi) \le \phi(|\xi|_A)$ for all $\xi \in B_0$. Let $\gamma(s)$ be a class $\mathcal{K}$ function such that $\gamma(s) \ge k\phi(s)$ with $k > 1$. Then, using the above inequality and the definition of $\omega_A(\cdot)$, we have
$$|\xi|_A \le \omega_A(\xi) \le \gamma(|\xi|_A) \qquad (6.43)$$
$$\lim_{s \to r_0} \gamma(s) = \infty \qquad (6.44)$$

Part I - Uniform Stability in $\mathcal{R}$:

Fix $\epsilon > 0$. Consider the positive constant $\eta = \gamma^{-1}(\epsilon)$; $\gamma^{-1}$ exists since $\gamma : [0, r_0) \to [0,\infty)$ is strictly increasing and onto. From Definition 3.1 we know that there exists a positive constant $\delta_1(\eta)$ such that, for all $d \in \mathcal{M}'_D$ and all $t \ge 0$, we have
$$|x(t, x_0; d)|_A \le \eta \qquad (6.45)$$
whenever $|x_0|_A \le \delta_1(\eta)$. Now, let $\omega_A(x_0) \le \delta_1(\eta)$, which implies, given the definition of $\omega_A$, that $|x_0|_A \le \delta_1(\eta)$. This implies (6.45). Then, using (6.43), we have
$$\omega_A(x(t,x_0;d)) \le \gamma(\eta) = \epsilon$$
for all $d \in \mathcal{M}'_D$ and all $t \ge 0$, whenever $\omega_A(x_0) \le \delta(\epsilon) = \delta_1(\gamma^{-1}(\epsilon))$.

Now, we have to find a class $\mathcal{K}_\infty$ function to replace $\delta$ in the above statement. For a fixed $\epsilon$, let $\delta(\epsilon)$ be the supremum of such $\delta$. Then,
$$\omega_A(x_0) < \delta(\epsilon) \implies \omega_A(x(t,x_0;d)) \le \epsilon \qquad (6.46)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge 0$, and if $\delta_2 > \delta$, then there exist at least one initial state $\bar x_0$ and one function $\bar d \in \mathcal{M}'_D$ such that
$$\omega_A(\bar x_0) \le \delta_2 \quad \text{and} \quad \sup_{t \ge 0} \omega_A(x(t, \bar x_0; \bar d)) > \epsilon \qquad (6.47)$$
Let $\tilde\delta(\epsilon) = \tfrac{1}{2}\delta(\epsilon)$. Then, (6.46) can be written as
$$\omega_A(x_0) \le \tilde\delta(\epsilon) \implies \omega_A(x(t,x_0;d)) \le \epsilon \qquad (6.48)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge 0$. The function $\tilde\delta(\epsilon)$ is positive and non-decreasing, but not necessarily continuous. Furthermore, we have $\lim_{\epsilon \to 0} \tilde\delta(\epsilon) = 0$ because we can see from (6.48) that $\tilde\delta(\epsilon) \le \epsilon$; otherwise we would get a contradiction at $t = 0$.

Claim 1:
$$\lim_{\epsilon \to \infty} \tilde\delta(\epsilon) = \infty \qquad (6.49)$$
Proof. Assume that $\lim_{\epsilon \to \infty} \tilde\delta(\epsilon) = \delta_\infty < \infty$. According to (6.47), for every $i \ge 1$ and for $\delta_2(i) = 2\tilde\delta(i) + 1 = \delta(i) + 1 > \delta(i)$, there exist $x_{0i} \in \mathcal{R}$ and $d_i \in \mathcal{M}'_D$ such that
$$\omega_A(x_{0i}) \le \delta_2(i) \quad \text{and} \quad \sup_{t \ge 0}\big(\omega_A(x(t, x_{0i}; d_i))\big) > i \qquad (6.50)$$
This means that
$$\limsup_{i \to \infty}\Big\{\sup_{t \ge 0}\big(\omega_A(x(t, x_{0i}; d_i))\big)\Big\} = \infty \qquad (6.51)$$
Let us find $0 < T^* \le \infty$ and a solution $x(t, \bar x_0; \bar d)$, where $\bar x_0 \in \mathcal{R}$ and $\omega_A(\bar x_0) \le \delta_\infty + 1$, such that $x(t,\bar x_0;\bar d)$ escapes to infinity as $t \to T^*$, and let us show that this constitutes a contradiction to the fact that $\bar x_0$ belongs to $\mathcal{R}$, a subset of the region of attraction.

Let $x_i(t) \stackrel{\mathrm{def}}{=} x(t, x_{0i}; d_i)$; then $\{x_i(t)\}$ is the sequence of solutions defined by (6.50). Define
$$T^* \stackrel{\mathrm{def}}{=} \sup\Big\{T \ge 0 : \limsup_{i\to\infty}\Big(\sup_{0 \le t \le T} \omega_A(x_i(t))\Big) < \infty\Big\} \qquad (6.52)$$
Note that $0 < T^* \le \infty$. Given (6.51), the quantity $T^*$ is the supremum of all $T \ge 0$ such that all of the elements of the sequence are finite for all $t \in [0,T]$.

Case 1: First, suppose that $T^* < \infty$. Consider a sequence $\{\tau_i\}$ of positive constants such that, for every $i \ge 1$, we have
$$\text{if } \omega_A(x_0) \le i, \text{ then } \omega_A(x(t, x_0; d)) \le i+1, \quad 0 \le t \le 2\tau_i \qquad (6.53)$$
for all $d \in \mathcal{M}'_D$. This sequence is not empty because of the continuity of $x(t,x_0;d)$. Without loss of generality we can take $T^* > \tau_1 > \tau_2 > \cdots$ and $\lim_{i\to\infty}\tau_i = 0$. Let $\{x_{i_1}(t)\}$ be the subsequence of $\{x_i(t)\}$ such that
$$\omega_A(x_{i_1}(T^* - \tau_1)) > 1, \quad \forall i \ge 1 \qquad (6.54)$$
This subsequence is infinite. To show that, assume that it is finite and let $I_1$ be the set of indices such that $x_i(t)$, $i \in I_1$, does not belong to $\{x_{i_1}(t)\}$ (so that $I_1$ is infinite).
Take $i \in I_1$; then $\omega_A(x_i(T^* - \tau_1)) \le 1$. Then, from the definition of $\tau_1$, (6.53), and uniqueness of solutions, we have
$$\omega_A(x_i(t)) \le 2, \quad \text{for } T^* - \tau_1 \le t \le T^* + \tau_1 \qquad (6.55)$$
Then, from the definition (6.52) of $T^*$, we have
$$\limsup_{i \in I_1}\Big\{\max_{0 \le t \le T^* - \tau_1}\big(\omega_A(x_i(t))\big)\Big\} < \infty \qquad (6.56)$$
Then, using (6.55) and (6.56), we can write
$$\limsup_{i \in I_1}\Big\{\max_{0 \le t \le T^* + \tau_1}\big(\omega_A(x_i(t))\big)\Big\} < \infty \qquad (6.57)$$
Since $I_1$ is infinite, then
$$\limsup_{i \to \infty}\Big\{\max_{0 \le t \le T^* + \tau_1}\big(\omega_A(x_i(t))\big)\Big\} < \infty \qquad (6.58)$$
which is a contradiction with (6.52). Thus, $I_1$ is finite and the subsequence $\{x_{i_1}(t)\}$ is infinite. Moreover, let $\{x_{i_2}(t)\}$ be a subsequence of $\{x_{i_1}(t)\}$ such that
$$\omega_A(x_{i_2}(T^* - \tau_2)) > 2, \quad \forall i \ge 1 \qquad (6.59)$$
As previously, we can show that the subsequence $\{x_{i_2}(t)\}$ is infinite. Similarly, we construct a family of subsequences. Continuing in this manner, we end up with a sequence $\{\bar x_j(t)\}$ such that
$$\omega_A(\bar x_j(T^* - \tau_j)) > j, \quad \forall j \ge 1 \qquad (6.60)$$
Since $\omega_A(\bar x_j(0)) \le \delta_\infty + 1 < \infty$, then, using Proposition 5.1 of [34], the set of solutions starting from $\bar x_j(0)$ is compact on $[0,T]$ for any $0 < T < T^*$. Thus, the sequence of solutions $\{\bar x_j\}$ is uniformly bounded (uniformly in $j$) on compact intervals of time $[0,T]$, $0 < T < T^*$. Let $\{\bar d_j\}$ be the sequence of time-varying parameters corresponding to the sequence of solutions $\{\bar x_j\}$. It can be shown that
$$\|\bar x_j(t_1) - \bar x_j(t_2)\| \le \Big|\int_{t_1}^{t_2} \|f(\bar x_j(s), \bar d_j(s))\|\, ds\Big| \qquad (6.61)$$
Since the sequences $\{\bar x_j\}$ and $\{\bar d_j\}$ are uniformly bounded and $f(\cdot,\cdot)$ is continuous, there exists a constant $c_x$, independent of $j$, such that
$$\|\bar x_j(t_1) - \bar x_j(t_2)\| \le c_x |t_1 - t_2| \qquad (6.62)$$
for all $j \ge 1$. Then, using (6.62), for every $\epsilon > 0$, there exists a positive constant $c_t \le \epsilon/c_x$ such that
$$\|\bar x_j(t_1) - \bar x_j(t_2)\| < \epsilon \qquad (6.63)$$
for all $t_1, t_2$ such that $|t_1 - t_2| < c_t$ and all $j \ge 1$. Then (see [6, Section 4.5, page 208]) the sequence $\{\bar x_j\}$ is equicontinuous. Thus, by the Ascoli-Arzela Theorem [6, Theorem 4.5.8; Exercise 4.5.9, Problem 1], this sequence has a subsequence (indexed by $i_j$) that converges (as $i_j \to \infty$) for every $t \in [0,T]$, $0 < T < T^*$. Given the compactness of $[0,T]$, the previous subsequence converges uniformly in $t$ (see [6, Section 4.5, page 206] for the definition of uniform convergence). By [12, Theorem 2.11], the limit is a continuous function of $t$ on $[0,T]$, $0 < T < T^*$. Let $\{\bar d_{i_j}\}$ be the subsequence corresponding to the subsequence $\{\bar x_{i_j}\}$. By the Mean Value Theorem we can show that the sequence $\{\bar d_{i_j}\}$ is equicontinuous and that it has a subsequence (indexed by $i_{j_k}$) that converges (as $i_{j_k} \to \infty$) uniformly in $t$ on every interval $[0,T]$, $0 < T < T^*$. It can be shown that the set of functions of $\mathcal{M}'_D$ defined on $[0,T]$ is a closed subset of the set of continuous functions defined on $[0,T]$ with values in $\mathbb{R}^d$. Then, the limit of the convergent subsequence belongs to $\mathcal{M}'_D$ for all $0 < T < T^*$.

For all finite $T > 0$,
$$\limsup_{i \to \infty}\Big\{\sup_{0 \le t \le T}\big(\omega_A(x_i(t))\big)\Big\} < \infty \qquad (6.67)$$
Since all $x_{0i}$ belong to the region of attraction, then, by Definition 3.3, for $\mu > 0$ there exists a finite $T_i(\mu) > 0$ such that
$$|x_i(t)|_A < \mu, \quad t \ge T_i \qquad (6.68)$$
for all $i \ge 1$. Choose $\mu$ such that $\mu < r_0$. Then, using (6.43), we have
$$\omega_A(x_i(t)) \le \gamma(\mu), \quad \text{for } t \ge T_i \qquad (6.69)$$
Thus, using (6.67) and (6.69), we have
$$\limsup_{i \to \infty}\Big\{\sup_{t \ge 0}\big(\omega_A(x_i(t))\big)\Big\} < \infty \qquad (6.70)$$
which contradicts (6.51). Thus, we have shown that if $\delta_\infty$ is finite we reach a contradiction. Therefore, $\delta_\infty = \infty$. ◁

Since, by Claim 1, $\lim_{\epsilon\to\infty}\tilde\delta(\epsilon) = \infty$, we can choose a class $\mathcal{K}_\infty$ function $\bar\delta(\cdot)$ such that $\bar\delta(r) \le \tilde\delta(r)$. Hence, given $\epsilon > 0$, we have
$$\omega_A(x(t, x_0; d)) \le \epsilon$$
for all $d \in \mathcal{M}'_D$ and all $t \ge 0$, whenever $\omega_A(x_0) \le \bar\delta(\epsilon) \le \tilde\delta(\epsilon)$. Thus, we have proved property 1.
Part II - Uniform Attraction in $\mathcal{R}$:

Fix $\epsilon$. Consider the positive constant $\eta = \gamma^{-1}(\epsilon)$. From Definition 3.1 we know that there exist $T = T(\eta) > 0$ and $\sigma > 0$ such that
$$|x(t, x_0; d)|_A < \eta \qquad (6.71)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge T$, whenever $|x_0|_A < \sigma$. Since $\mathcal{R}$ is a time-independent subset of the region of attraction, without loss of generality we can choose $\sigma$ such that $\{\xi : |\xi|_A < \sigma\} \subset \mathcal{R}$. Let $\omega_A(x_0) < \sigma$, which implies that $|x_0|_A < \sigma$. Let $T(\epsilon) = T(\gamma^{-1}(\epsilon))$. Then, using (6.71), we have
$$\omega_A(x(t,x_0;d)) \le \gamma(\eta) = \epsilon \qquad (6.72)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge T$, whenever $\omega_A(x_0) < \sigma$.

Fix $\epsilon, r > 0$ and let us prove that there exists $T = T(\epsilon, r) > 0$ such that
$$\omega_A(x(t, x_0; d)) < \epsilon \qquad (6.73)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge T$, whenever $\omega_A(x_0) < r$. Assume that the above is not true. Then, for every $T > 0$ there exists an initial state $x_0 = x_0(T)$ with $\omega_A(x_0) < r$ such that $\omega_A(x(t,x_0;d)) \ge \epsilon$ for some $t \ge T$ and some $d \in \mathcal{M}'_D$. In other words, if we take a sequence $\{T_i\}$ of positive numbers such that $\lim_{i\to\infty} T_i = \infty$, then there exist sequences $\{t_i\} \subset \mathbb{R}_{\ge 0}$, $\{x_{0i}\} \subset \mathcal{R}$, and $\{d_i\} \subset \mathcal{M}'_D$ such that
$$\lim_{i\to\infty} t_i = \infty, \quad \omega_A(x_{0i}) < r, \quad \omega_A(x(t_i, x_{0i}; d_i)) \ge \epsilon \qquad (6.74)$$
Using the uniform stability property (6.48) and uniqueness of solutions, we can write, for any $\tau \ge 0$,
$$\omega_A(x(t, x_0; d)) < \epsilon \qquad (6.75)$$
for all $d \in \mathcal{M}'_D$ and all $t \ge \tau$, whenever $\omega_A(x(\tau, x_0; d)) < \tilde\delta(\epsilon)$ (note that $\epsilon$ is fixed). Hence,
$$\omega_A(x(t, x_{0i}; d_i)) \ge \tilde\delta(\epsilon), \quad \forall t \in [0, t_i] \qquad (6.76)$$
Since $\omega_A(x_{0i}) < r$, using the uniform stability property (6.48), we have
$$\omega_A(x(t, x_{0i}; d_i)) < \tilde\delta^{-1}(r), \quad \forall t \ge 0 \qquad (6.77)$$
Thus the sequence of solutions $\{x(t, x_{0i}; d_i)\}$ is uniformly bounded (uniformly in $i$). As in Part I, we can find subsequences (indexed by $i$) $\{x_i\}$, $\{x_{0i}\}$, and $\{d_i(t)\}$ which converge on any interval $[0,T]$, $T > 0$, to $\bar x(t)$, $\bar x_0$, and $\bar d(t)$, respectively. Furthermore, $\bar x_0 \in \{\xi \in \mathcal{R} : \omega_A(\xi) \le r\}$ and $\bar d \in \mathcal{M}'_D$ on $[0,T]$, for any $T > 0$. Moreover, $\bar x(t)$ is the solution of $\dot x = f(x, \bar d)$, $x(0) = \bar x_0$, defined for $0 \le t < \infty$. From (6.76) and the fact that $\lim_{i\to\infty} t_i = \infty$ we conclude that
$$\omega_A(\bar x(t)) \ge \tilde\delta(\epsilon), \quad t \in [0, \infty) \qquad (6.78)$$
This is impossible since $\bar x_0 \in \mathcal{R}$; that is, the solution $\bar x(t)$ must converge to $\mathcal{A}$, i.e., $\omega_A(\bar x(t)) \to 0$ as $t \to \infty$. Thus $T(\epsilon, r)$ exists such that (6.73) is satisfied. ◁

CHAPTER 7

Conclusions

In order to implement a controller using output feedback, we can rely on a separation principle which divides the design process into two steps. The first step consists of designing a state feedback controller to achieve the desired performance. The second step consists of estimating the state using an observer and then using these estimates in the control law. This implementation should recover the performance achieved under the state feedback control, at least asymptotically. This is the task we set out to achieve in this work for a certain class of nonlinear systems. First, we gave a comprehensive formulation of the state feedback controller performance which describes the behavior of the entire state. Then, we recovered this behavior using an observer, thus proving the possibility of an output feedback implementation of the control law.

State feedback controllers achieve a wide range of performance objectives. Sometimes we need to steer the state to a point and have it stay there thereafter. Other times we need a part of the state to track a constant or a time-varying reference. We may also require the input-output response of the system to satisfy certain criteria and limitations, or we may require the controller to minimize some performance index appropriate to the task at hand. We opted for a stabilization-type description of performance.
We found that this formulation covers a wide range of design objectives: numerous design objectives can be formulated as the stabilization of an equilibrium point or of a compact, positively invariant set. In both cases we considered a system that contains one or several chains of integrators. In the case of an equilibrium point we considered time-invariant systems, whereas in the case of an invariant set we allowed a time-varying bounded parameter in the system's nonlinearities. Moreover, we used a high-gain observer to implement the control law and we required this law to be globally bounded, which can be achieved by saturation outside a region of interest. It is noteworthy that this separation principle does not require any particular state feedback structure as long as the control law is globally bounded. The high-gain observer has an adjustable gain, which allowed us to estimate the state quickly enough to recover the full stabilizing power of the control law. In addition to recovering local stability, we recovered arbitrary compact subsets of the region of attraction (or sometimes an estimate of it) achieved under the state feedback controller. Moreover, we showed that trajectories under the output feedback controller approach those achieved under the state feedback controller as the observer gain approaches infinity. Finally, for each stabilization case we presented several examples to show some sources of the class of systems considered and to illustrate how the separation results previously proved can be applied.

To prove the recovery of the aforementioned set of performance measures we divided the task into several steps. First, we showed that trajectories starting in an a priori given compact subset of the region of attraction are bounded. For this we needed a Lyapunov function that approaches infinity at the boundary of this region. This Lyapunov function was supplied by results due to Kurzweil [32] in the case of an equilibrium point and by results due to Sontag et al. [34] in the case of an invariant set. Second, we showed that these trajectories come arbitrarily close to the attractor (a point or a set). Before concluding asymptotic stability, we showed convergence of these trajectories to the ones achieved under the state feedback controller.

As for the local property of asymptotic stability, we discussed three cases. First, we discussed the case where the modeling error is zero; i.e., we know the nonlinearities at the end of the chains of integrators perfectly and we use them in the design of the observer. Second, we discussed the case where the convergence to the attractor under the state feedback controller is of an exponential type. In this case we used a local Lyapunov function supplied by classic Lyapunov theory for the case of an equilibrium point and by an adaptation of some results of Yoshizawa [55] to our purposes for the case of a set. Third, we discussed the case where we have asymptotic stability under the state feedback controller but nonzero modeling error; i.e., we used imperfect knowledge of the system's nonlinearities to build the observer. In this last case, some conditions on the size of the modeling error had to be imposed in order to recover local asymptotic stability. These conditions state that the modeling error should be proportional to the rate of convergence of trajectories to the attractor (raised to a power less than one) under the state feedback controller.
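As a purely illustrative sketch of the implementation pattern summarized above (saturate the state feedback law outside the region of interest and drive it with a fast high-gain observer), the following Python fragment simulates a hypothetical second-order example. The plant nonlinearity phi, the feedback law psi, the saturation level M, the observer gains a1 and a2, and the values of the observer parameter eps are all assumptions made for the illustration and are not taken from the dissertation; the script only exhibits the structure of the scheme and the qualitative trend predicted by the separation results, namely that the trajectory under output feedback approaches the trajectory under state feedback as eps is decreased.

import numpy as np

# Hypothetical plant:  x1' = x2,  x2' = phi(x) + u,  measured output y = x1.
def phi(x):
    return -np.sin(x[0]) + 0.1 * x[1] ** 2      # assumed nonlinearity (illustration only)

def sat(u, M=5.0):
    return float(np.clip(u, -M, M))             # globally bounded control via saturation

def psi(x):
    return -3.0 * x[0] - 2.0 * x[1]             # any stabilizing state feedback (here linear)

def simulate(eps=None, T=10.0, dt=1e-4, x0=(1.0, 0.5)):
    """eps=None: state feedback; eps>0: output feedback through a high-gain observer."""
    a1, a2 = 2.0, 1.0                           # observer gains; s^2 + a1*s + a2 is Hurwitz
    x = np.array(x0, dtype=float)               # plant state
    xh = np.zeros(2)                            # observer state (deliberately poor initialization)
    traj = []
    for _ in range(int(T / dt)):
        y = x[0]                                # measurement
        u = sat(psi(x if eps is None else xh))  # under output feedback, only the estimate is used
        x = x + dt * np.array([x[1], phi(x) + u])
        if eps is not None:                     # high-gain observer with zero nominal model
            e1 = y - xh[0]
            xh = xh + dt * np.array([xh[1] + (a1 / eps) * e1,
                                     u + (a2 / eps ** 2) * e1])
        traj.append(x.copy())
    return np.array(traj)

ref = simulate()                                # trajectory under (saturated) state feedback
for eps in (0.1, 0.01, 0.001):
    out = simulate(eps=eps)
    dev = np.max(np.linalg.norm(out - ref, axis=1))
    print(f"eps={eps:5.3f}   max deviation from state feedback trajectory = {dev:.4f}")

The saturation is what keeps the brief peaking of the observer estimates from being transmitted to the plant during the fast transient; the printed deviations themselves have no significance beyond their trend as eps is reduced, which mirrors the trajectory convergence result recalled above.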
Moreover, we note that our results can only show semiglobal stabilization under output feedback, even when the state feedback control achieves global stabilization. It is noteworthy that, for the case where the attractor is a set, we used some results provided in [34] when the region of attraction is the whole state space. However, when this region is a subset of the state space we had to adapt the results of [34] to our purposes, which required that we restrict the class of the time-varying parameters, and we only recovered an open estimate of the region of attraction.

Finally, we reviewed several techniques for designing high-gain observers: pole-placement algorithms, Riccati equation-based algorithms, and Lyapunov equation-based algorithms. In this work we showed that the observer gain structures provided by these different techniques are asymptotically similar to the structure used to show the separation results. Therefore, we concluded that the abovementioned separation results hold for all these observer design techniques.

The idea of this work (providing a formulation of controller performance and establishing separation results based on this formulation) is still a fertile ground for future research. Indeed, we noticed that the controller performance of some design examples can be described as achieving partial stability (stability of part of the state) of the system, as in [2], or as input-to-state stability (the magnitude of the trajectory is a function of the magnitude of the input), as in [29]. Moreover, an optimization-type of performance can be investigated (minimization of a performance index, for example), as in [1].

APPENDICES

APPENDIX A

Technical Lemmas

Lemma A.1 Let $\mathcal{A} = A \times \{e = 0\}$, where $A$ is a compact subset of $\mathbb{R}^n$. Then, the following holds:
$$|(e,\eta)|_{\mathcal{A}} = \big[\|e\|^2 + |\eta|_A^2\big]^{1/2} \qquad (A.1)$$
$$\|e\|^2 \le |(e,\eta)|_{\mathcal{A}}^2 \qquad (A.2)$$
$$|\eta|_A^2 \le |(e,\eta)|_{\mathcal{A}}^2 \qquad (A.3)$$

Proof: The definition of the distance with respect to a set implies
$$|(e,\eta)|_{\mathcal{A}} = \inf_{b \in A}\|(e,\eta) - (0,b)\| = \Big[\|e\|^2 + \inf_{b \in A}\|\eta - b\|^2\Big]^{1/2}$$
To prove (A.1), it suffices to show that
$$|\eta|_A^2 = \inf_{b \in A}\|\eta - b\|^2 \qquad (A.4)$$
First, since $A$ is compact, there exists $b_0 \in A$ such that
$$|\eta|_A = \inf_{b \in A}\|\eta - b\| = \|\eta - b_0\|$$
Thus, from the definition of the infimum, we have
$$|\eta|_A^2 = \|\eta - b_0\|^2 \ge \inf_{b \in A}\|\eta - b\|^2 \qquad (A.5)$$
Next, the definition of the infimum implies that, for any $\epsilon > 0$, there exists $b_1 \in A$ such that
$$\|\eta - b_1\|^2 - \epsilon \le \inf_{b \in A}\|\eta - b\|^2$$
Thus, since $|\eta|_A^2 \le \|\eta - b_1\|^2$, we have
$$|\eta|_A^2 - \epsilon \le \inf_{b \in A}\|\eta - b\|^2$$
This inequality is true for an arbitrary $\epsilon > 0$; then,
$$|\eta|_A^2 \le \inf_{b \in A}\|\eta - b\|^2 \qquad (A.6)$$
From (A.5) and (A.6) we conclude (A.4). Finally, using (A.1), we have
$$\|e\|^2 \le \|e\|^2 + |\eta|_A^2 = |(e,\eta)|_{\mathcal{A}}^2, \qquad |\eta|_A^2 \le \|e\|^2 + |\eta|_A^2 = |(e,\eta)|_{\mathcal{A}}^2$$
Thus, we conclude (A.2) and (A.3). ◁

Lemma A.2 Let $\mathcal{A} = A \times \{e = 0\}$, where $A$ is a compact subset of $\mathbb{R}^n$. Let $(\eta(t), e(t))$ be the solution of the system (4.35) under the state feedback controller $u^*$ and assume that
$$|\eta(t)|_A \le \beta_1(|\eta(0)|_A,\, t) + \gamma\Big(\sup_{0 \le \tau \le t}\|e(\tau)\|\Big), \quad \forall t \ge 0 \qquad (A.7)$$

Let $V : [0,\infty) \times D \to \mathbb{R}$ be a continuous function such that there exist two continuous, positive definite with respect to $A$, functions $V_1(x)$ and $V_2(x)$, defined on $D$, such that $V_1(x) \le V(t,x) \le V_2(x)$ for all $t \ge 0$. Let $B_r = \{x : |x|_A \le r\} \subset D$, for some $0 < r < r_0$. Then, there exist class $\mathcal{K}$ functions $\alpha_1$ and $\alpha_2$, both defined on $[0,r]$, such that $\alpha_1(|x|_A) \le V(t,x) \le \alpha_2(|x|_A)$ for all $x \in B_r$ and all $t \ge 0$. Moreover, if $D = \mathbb{R}^n$ and $V_1(x)$ is radially unbounded, then $\alpha_1$ and $\alpha_2$ can be chosen to be class $\mathcal{K}_\infty$.

Proof: similar to that of [28, Lemma 3.5].
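A small worked instance of the last lemma above (the numbers are hypothetical and only illustrate the statement; they do not come from the text): take $A = \{0\} \subset \mathbb{R}$, $D = \mathbb{R}$, so that $|x|_A = |x|$, and consider $V(t,x) = \big(1 + \tfrac12\sin t\big)x^2$. Then
$$V_1(x) = \tfrac12 x^2 \;\le\; V(t,x) \;\le\; \tfrac32 x^2 = V_2(x), \qquad \alpha_1(s) = \tfrac12 s^2, \quad \alpha_2(s) = \tfrac32 s^2,$$
and since $V_1$ is radially unbounded and $D = \mathbb{R}$, the functions $\alpha_1, \alpha_2$ are class $\mathcal{K}_\infty$ and the bounds hold globally, as allowed by the last sentence of the lemma.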
C.1 Proof of Theorem 3.1

Choose $r > 0$ and $\rho > 0$ such that $B_r = \{x \in \mathbb{R}^n : |x|_A \le r\} \subset U$ and $\rho < \min_{|x|_A = r} \alpha_1(|x|_A)$ (the minimum exists because $A$ is compact, so $\{x : |x|_A = r\}$ is also compact). Then the set $\{x \in B_r : \alpha_1(|x|_A) \le \rho\}$ is in the interior of $B_r$. Now, define the time-dependent set $\Omega_{t,\rho}$ by
$$\Omega_{t,\rho} = \{x \in B_r : V(t,x) \le \rho\}$$
Since
$$\alpha_2(|x|_A) \le \rho \implies V(t,x) \le \rho$$
the set $\Omega_{t,\rho}$ contains $\{x \in B_r : \alpha_2(|x|_A) \le \rho\}$. On the other hand, since
$$V(t,x) \le \rho \implies \alpha_1(|x|_A) \le \rho$$
$\Omega_{t,\rho}$ is a subset of $\{x \in B_r : \alpha_1(|x|_A) \le \rho\}$. Thus,
$$\{x \in B_r : \alpha_2(|x|_A) \le \rho\} \subset \Omega_{t,\rho} \subset \{x \in B_r : \alpha_1(|x|_A) \le \rho\} \subset B_r$$
for all $t \ge 0$. Since $\dot V(t,x)$ is negative on $U \setminus A$, $V(t,x)$ is decreasing along solutions. Therefore, for any $t_0 \ge 0$ and any $x_0 \in \{x \in B_r : \alpha_2(|x|_A) \le \rho\} \subset \Omega_{t_0,\rho}$, the solution starting at $(t_0,x_0)$ stays in $\Omega_{t,\rho}$ for all $t \ge t_0$. Since $\Omega_{t,\rho}$ is bounded, the solution starting at $(t_0,x_0)$ is defined for all $t \ge t_0$ and $x(t) \in B_r$. Now, for any $x_0 \in \{x \in B_r : \alpha_2(|x|_A) \le \rho\}$, we can write
$$\dot V(t,x) = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\, f(t,x) \le -\alpha_3(|x|_A) \le -\alpha(V(t,x))$$

For any $x_0 \in \mathbb{R}^n$, we can choose $\rho$ large enough so that $x_0 \in \{x \in B_r : \alpha_2(|x|_A) \le \rho\}$. Thus, using an argument similar to the previous one, we can conclude uniform global asymptotic stability of the system (3.1). ◁

C.2 Proof of Theorem 3.3

This proof is an extension of the proof of Theorems 19.1 and 19.2 of [55] to the case of exponential stability with respect to sets. Define the function
$$V(t,\xi) = \sup_{\tau \ge 0,\; d \in \mathcal{M}_D}\big\{\,|x(t+\tau,\xi,t;d)|_A\, e^{q\gamma\tau}\big\} \qquad (C.1)$$
for all $(t,\xi) \in \mathbb{R}_{\ge 0} \times \Omega_0$, where $0 < q < 1$. Clearly, taking $\tau = 0$ gives
$$|\xi|_A \le V(t,\xi) \qquad (C.2)$$
Furthermore, using the definition of UES in Definition 3.1, we have
$$V(t,\xi) \le \sup_{\tau \ge 0}\big\{ k e^{-\gamma\tau}|\xi|_A\, e^{q\gamma\tau}\big\} \qquad (C.3)$$
$$\le k|\xi|_A \qquad (C.4)$$
Inequalities (C.2) and (C.4) prove (3.8). Using the local Lipschitz property of $|\cdot|_A$ and Proposition 5.5 of [34], we have (notice that, since $A$ is compact, the set $\Omega_0$ is compact)
$$\big|\,|x(t,\xi,t_0;d)|_A - |x(t,\eta,t_0;d)|_A\,\big| \le C\|\xi - \eta\| \qquad (C.5)$$
for all $\xi,\eta \in \Omega_0$, all $d \in \mathcal{M}_D$, and all $t \in [0,T]$, for any fixed $T > 0$. There is no loss of generality in choosing $k > 1$. Hence, we can choose $T > 0$ such that $k = e^{(1-q)\gamma T}$. Then, for $\tau \ge T$, we have
$$|x(t+\tau,\xi,t;d)|_A\, e^{q\gamma\tau} \le k e^{-(1-q)\gamma\tau}|\xi|_A \le |\xi|_A \qquad (C.6)$$
which implies that
$$V(t,\xi) = \sup_{0 \le \tau \le T,\; d \in \mathcal{M}_D}\big\{\,|x(t+\tau,\xi,t;d)|_A\, e^{q\gamma\tau}\big\} \qquad (C.7)$$
Now, using (C.5) and (C.7), we have
$$|V(t,\xi) - V(t,\eta)| \le \sup_{0\le\tau\le T,\; d\in\mathcal{M}_D}\Big\{\big|\,|x(t+\tau,\xi,t;d)|_A - |x(t+\tau,\eta,t;d)|_A\,\big|\, e^{q\gamma\tau}\Big\} \le C\|\xi-\eta\|\, e^{q\gamma T} \le L\|\xi-\eta\| \qquad (C.8)$$
for all $\xi,\eta \in \Omega_0$, where $L = Ce^{q\gamma T}$. Thus, we have proved (3.9). This local Lipschitz property in $x$ implies continuity of $V(t,x)$ in $x$.

Let us prove the continuity of $V(t,x)$ in $t$. Take $\delta \ge 0$. Then, for $\xi \in \Omega_0$, we can write
$$|V(t+\delta,\xi) - V(t,\xi)| \le |V(t+\delta,\xi) - V(t+\delta,x(t+\delta,\xi,t;d))| + |V(t+\delta,x(t+\delta,\xi,t;d)) - V(t,\xi)| \qquad (C.9)$$
for $d \in \mathcal{M}_D$. For $\delta$ small enough, even if the trajectory $x(t+\delta,\xi,t;d)$ may exit from $\Omega_0$, the function is still defined and locally Lipschitz with a constant that we again denote by $L$. This implies
$$|V(t+\delta,\xi) - V(t,\xi)| \le L\|\xi - x(t+\delta,\xi,t;d)\| + |V(t+\delta,x(t+\delta,\xi,t;d)) - V(t,\xi)| \qquad (C.10)$$
Let us discuss the second term on the right-hand side of (C.10). Let $\zeta = x(t+\delta,\xi,t;d)$. By the uniqueness of solutions and the change of variable $\tau' = \tau + \delta$, we can write
$$V(t+\delta,x(t+\delta,\xi,t;d)) = \sup_{\tau\ge 0,\; d\in\mathcal{M}_D}\big\{|x(t+\delta+\tau,\zeta,t+\delta;d)|_A\, e^{q\gamma\tau}\big\}$$
$$= \sup_{\tau\ge 0,\; d\in\mathcal{M}_D}\big\{|x(t+\delta+\tau,\xi,t;d)|_A\, e^{q\gamma\tau}\big\}$$
$$= \sup_{\tau'\ge\delta,\; d\in\mathcal{M}_D}\big\{|x(t+\tau',\xi,t;d)|_A\, e^{q\gamma\tau'}\big\}\, e^{-q\gamma\delta} = a(\delta)\, e^{-q\gamma\delta} \qquad (C.11)$$
where $a(\delta) = \sup_{\tau\ge\delta,\; d\in\mathcal{M}_D}\{|x(t+\tau,\xi,t;d)|_A\, e^{q\gamma\tau}\}$. Given (C.11), (C.10) can be written as
$$|V(t+\delta,\xi) - V(t,\xi)| \le L\|\xi - x(t+\delta,\xi,t;d)\| + |a(\delta)e^{-q\gamma\delta} - a(0)| \qquad (C.12)$$
Given the fact that $a(\delta)$ goes to $a(0)$ as $\delta$ goes to zero and the continuity of the solutions of (3.1), we conclude from (C.12) that $V(t,\xi)$ is continuous in $t$.

Finally, let us prove (3.10).
Let $\zeta = x(t+h,\xi,t;d)$, $h > 0$. Again, by uniqueness of solutions and a change of variable, we have
$$V(t+h,\zeta) = \sup_{\tau\ge h,\; d\in\mathcal{M}_D}\big\{|x(t+\tau,\xi,t;d)|_A\, e^{q\gamma\tau}\big\}\, e^{-q\gamma h} \le V(t,\xi)\, e^{-q\gamma h} \qquad (C.13)$$
Using (C.13), we have
$$\lim_{h\downarrow 0}\frac{V(t+h,x(t+h,\xi,t;d)) - V(t,\xi)}{h} \le V(t,\xi)\,\lim_{h\downarrow 0}\frac{e^{-q\gamma h}-1}{h} \le -q\gamma\, V(t,\xi) \qquad (C.14)$$
Take $\lambda = q\gamma$. ◁

BIBLIOGRAPHY

[1] B. Aloliwi and H.K. Khalil. Adaptive output feedback regulation of a class of nonlinear systems: convergence and robustness. IEEE Trans. Automat. Contr., 42(12):1714-1716, December 1997.

[2] B. Aloliwi and H.K. Khalil. Robust adaptive output feedback control of nonlinear systems without persistence of excitation. Automatica, 32(11):2025-2032, December 1997.

[3] T. Basar and P. Bernhard. H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhauser, second edition, 1995.

[4] N.P. Bhatia and G.P. Szego. Stability Theory of Dynamical Systems. Springer-Verlag, New York, 1970.

[5] G. Bernard and H. Hammouri. A high-gain observer for a class of uniformly observable systems. Proc. Conf. Decision and Contr., Brighton, England, pages 1494-1496, December 1991.

[6] D.S. Bridges. Foundations of Real and Abstract Analysis. Springer-Verlag, New York, 1998.

[7] K. Busawon, A. El Assoudi, and H. Hammouri. Dynamic output feedback stabilization of a class of nonlinear systems. Proc. Conf. Decision and Contr., San Antonio, TX, December 1993.

[8] J.H. Chow and P. Kokotovic. A decomposition of near-optimum regulators for systems with slow and fast modes. IEEE Trans. Automat. Contr., 21:701-705, 1976.

[9] J.C. Doyle and G. Stein. Robustness with observers. IEEE Trans. Automat. Contr., 24(4):607-611, 1979.

[10] F. Esfandiari and H.K. Khalil. Observer-based design of uncertain systems: recovering state feedback robustness under matching conditions. Proc. Allerton Conf., Monticello, IL, pages 97-106, September 1992.

[11] F. Esfandiari and H.K. Khalil. Output feedback stabilization of fully linearizable systems. Int. J. Contr., 56:1007-1037, 1992.

[12] W. Fleming. Functions of Several Variables. Springer-Verlag, New York, second edition, 1977.

[13] R. Freeman. Global internal stability does not imply global external stabilization for small sensor disturbances. IEEE Trans. Automat. Contr., 40(12):2119-2122, 1995.

[14] J.P. Gauthier and I. Kupka. A separation principle for bilinear systems with dissipative drift. IEEE Trans. Automat. Contr., 37(12):1970-1974, 1992.

[15] R. Gurumoorthy and S.R. Sanders. Controlling non-minimum phase nonlinear systems - the inverted pendulum on a cart example. Proc. of the Amer. Control Conf., San Francisco, California, pages 680-685, June 1993.

[16] J.K. Hale. Ordinary Differential Equations. Krieger Publishing Company, Malabar, Florida, second edition, 1980.

[17] J.P. Gauthier, H. Hammouri, and S. Othman. A simple observer for nonlinear systems: application to bioreactors. IEEE Trans. Automat. Contr., 37(6):875-880, 1992.

[18] G. Hardy, J.E. Littlewood, and G. Polya. Inequalities. Cambridge University Press, 1952.

[19] J. Hauser, S. Sastry, and P. Kokotovic. Nonlinear control via approximate input-output linearization: the ball and beam example. IEEE Trans. Automat. Contr., 37(3):392-398, 1992.

[20] A. Isidori. Nonlinear Control Systems. Springer-Verlag, New York, third edition, 1995.

[21] A. Isidori. A remark on the problem of semiglobal nonlinear output regulation. IEEE Trans. Automat.
Contr., 42(12):1734-1738, 1997.

[22] A. Isidori and C.I. Byrnes. Output regulation of nonlinear systems. IEEE Trans. Automat. Contr., 35:131-140, 1990.

[23] M. Jankovic. Adaptive output feedback control of nonlinear feedback linearizable systems. Int. J. Adaptive Control and Signal Processing, 10:1-18, 1996.

[24] Z.P. Jiang, D.J. Hill, and Y. Guo. Semiglobal output feedback stabilization for the nonlinear benchmark example. Proc. of the Europ. Contr. Conf., Brussels, Belgium, July 1997.

[25] H.K. Khalil. Robustness issues in output feedback control of feedback linearizable systems. Proc. of the Europ. Contr. Conf., pages 58-62, 1993.

[26] H.K. Khalil. Robust servomechanism output feedback controllers for a class of feedback linearizable systems. Automatica, 30(10):1587-1599, 1994.

[27] H.K. Khalil. Adaptive output feedback control of nonlinear systems represented by input-output models. IEEE Trans. Automat. Contr., 41:177-188, February 1996.

[28] H.K. Khalil. Nonlinear Systems. Prentice Hall, second edition, 1996.

[29] H.K. Khalil. Universal integral controllers for minimum phase nonlinear systems. Amer. Contr. Conf., pages 756-760, 1998.

[30] H.K. Khalil and F. Esfandiari. Semiglobal stabilization of a class of nonlinear systems using output feedback. IEEE Trans. Automat. Contr., 38(9):1412-1415, 1993.

[31] H.K. Khalil and E.G. Strangas. Robust speed control of induction motors using position and current measurement. IEEE Trans. Automat. Contr., 41(8):1216-1220, 1996.

[32] J. Kurzweil. On the inversion of Lyapunov's second theorem on stability of motion. Amer. Math. Soc. Transl., 24(Ser. 2):19-77, 1956.

[33] W. Lin. Input saturation and global stabilization by output feedback for affine systems. Proc. of the 33rd CDC, Lake Buena Vista, FL, pages 1323-1328, December 1994.

[34] Y. Lin, E. Sontag, and Y. Wang. A smooth converse Lyapunov theorem for robust stability. SIAM J. Control & Optimization, 34:124-160, 1996.

[35] Z. Lin and A. Saberi. Robust semi-global stabilization of minimum-phase input-output linearizable systems via partial state and output feedback. IEEE Trans. Automat. Contr., 40(6):1029-1041, 1995.

[36] Z. Lin, A. Saberi, M. Gutmann, and Y.A. Shamash. Linear controller for an inverted pendulum having restricted travel: a high-and-low gain approach. Automatica, 32(6):933-937, 1996.

[37] N.A. Mahmoud and H.K. Khalil. Asymptotic regulation of minimum phase nonlinear systems using output feedback. IEEE Trans. Automat. Contr., 41(10):1402-1412, 1997.

[38] N.A. Mahmoud and H.K. Khalil. Robust control for a nonlinear servomechanism problem. Int. J. Control, 66(6):779-802, 1997.

[39] P. Martin, S. Devasia, and B. Paden. A different look at output tracking: control of a VTOL aircraft. Automatica, 32(1):101-107, 1996.

[40] S. Nicosia and A. Tornambe. High-gain observers in the state and parameter estimation of robots having elastic joints. Systems Contr. Lett., 13:331-337, 1993.

[41] S. Oh and H.K. Khalil. Output feedback stabilization using variable structure control. Int. J. Contr., 62:831-848, 1995.

[42] I.R. Petersen and C.V. Hollot. High-gain observers applied to problems in disturbance attenuation, H∞ optimization and the stabilization of uncertain linear systems. Proc. American Control Conf., Boston, MA, pages 1824-1827, June 1991.

[43] A. Saberi, B.M. Chen, and P. Sannuti. Loop Transfer Recovery: Analysis and Design. Springer-Verlag, 1993.

[44] A. Saberi and P. Sannuti.
Cheap and singular controls for linear quadratic regulators. IEEE Trans. Automat. Contr., 32(3):208-219, 1987.

[45] A. Saberi and P. Sannuti. Observer design for loop transfer recovery and for uncertain dynamical systems. IEEE Trans. Automat. Contr., 35(8):878-897, 1990.

[46] A. Saberi and P. Sannuti. Observer design for loop transfer recovery and for uncertain dynamical systems. IEEE Trans. Automat. Contr., 35(8):878-897, 1990.

[47] P. Sannuti and H.S. Wason. Multiple time-scale decomposition in cheap control problems: singular control. IEEE Trans. Automat. Contr., 30(7):633-644, 1985.

[48] E. Sontag and Y. Wang. On characterization of input-to-state stability with respect to compact sets. Proc. IFAC Nonlinear Control Systems Design Symp., Tahoe City, CA, pages 226-231, June 1998.

[49] A. Teel and L. Praly. Global stabilizability and observability imply semi-global stabilizability by output feedback. Systems Contr. Lett., 22:313-325, 1994.

[50] A. Teel and L. Praly. Tools for semiglobal stabilization by partial state and output feedback. SIAM J. Control & Optimization, 33(5):1443-1488, 1995.

[51] A.R. Teel. Semi-global stabilization of the ball and beam using output feedback. Proc. of the Amer. Control Conf., San Francisco, California, pages 2577-2581, June 1993.

[52] A. Tornambe. Output feedback stabilization of a class of non-minimum phase nonlinear systems. Systems Contr. Lett., 19:193-204, 1992.

[53] M. Vidyasagar. On the stabilization of nonlinear systems using state detection. IEEE Trans. Automat. Contr., 25(3):504-509, 1980.

[54] F. Viel, E. Busvelle, and J.P. Gauthier. Stability of polymerization reactors using I/O linearization and a high-gain observer. Automatica, 31(7):971-984, 1995.

[55] T. Yoshizawa. Stability Theory by Liapunov's Second Method. Mathematical Society of Japan, 1966.

[56] K. Zhou and P. Khargonekar. An algebraic Riccati equation approach to H∞ optimization. Systems Contr. Lett., 11:85-91, 1988.

[57] V.I. Zubov. Methods of A. M. Lyapunov and Their Applications. P. Noordhoff Ltd., Groningen, The Netherlands, English edition, 1964.