LOGIC-BASED SWITCHING CONTROL OF NONLINEAR SYSTEMS USING HIGH-GAIN OBSERVERS

By

Leonid B. Freidovich

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mathematics

2005

ABSTRACT

LOGIC-BASED SWITCHING CONTROL OF NONLINEAR SYSTEMS USING HIGH-GAIN OBSERVERS

By Leonid B. Freidovich

The design of robust output and state feedback controllers for practical tracking and stabilization of nonlinear systems with large-scale parametric uncertainty is considered. We consider a class of single-input single-output systems that can be transformed into a special form, where the unavailable states (if any) are derivatives of the measured outputs. We consider a large class of nonlinear systems that could be stabilized via a Lyapunov-based technique if the level of parametric uncertainty were much lower and if the unmeasured states were available for feedback. We propose and investigate a new approach based on subdividing the set of parameters into smaller subsets, designing candidate controllers for each subset, and implementing a logic-based switching between them. We use the output of an extended-order high-gain observer to substitute for the unavailable states in the candidate controllers, to check the inequality for the derivative of the Lyapunov function and switch if it is not satisfied, and to approximately identify the parameters. We discuss the issues of digital implementation and measurement noise and illustrate our design procedure on several examples.

To the memory of my mother, Elena Moiseevna Al'tman, and to the memory of my first advisor, Anatolii Arkad'evich Pervozvanskii

ACKNOWLEDGMENTS

I would like to express my deepest and sincerest gratitude to my advisor, Professor Hassan K. Khalil, for his guidance, constant encouragement, and support, without which this thesis would not have been possible. It is hard to overrate his help, insight into Control Theory, or boundless patience. He has taught me not only to learn and obtain results but also to interpret, test, and present them.

I would like to thank the members of my Ph.D. committee, Professor Sheldon E. Newhouse, Professor Vera M. Zeidan, Professor Charles R. MacCluer, and Professor Ranjan Mukherjee, for their willingness to serve on the committee as well as for their assistance, encouragement, and valuable input. I am especially thankful to my co-advisor, Dr. Newhouse, for his insight into the theory and methods of Dynamical Systems and Differential Equations, and to Dr. Zeidan for a comprehensive introduction to Optimal Control Theory.

My former advisors, co-authors, and teachers from St.
Petersburg State Technical University, especially Professor Anatoly A. Pervozvanski, Professor Ilya V. Burkov, and Professor Sergej F. Burdakov, gave me a foundation as a researcher, encouraged my ambition, and directed my, often times misplaced, energt To them, as well as to my instructors at Michigan State University, go my most heartfelt thanks. It is also pleasure to acknowledge various helpful discussions on switching control with Dr. Daniel Liberzon and on different control problems with Dr. Sridhar Sesha- giri, Professor Cevat Gokcek, Dr. Konstantin E. Avrachenkov, Professor Alexander L. F radkov, Professor Anton Shiriaev, Dr. Ahmed Dabroom, and Dr. Ilya Kolmanovski. To my family, relatives, and long-time friends, especially to Leonid Slavin, Zeynep Altinsel, Boris Yakovlev, Alexander Levin, and Denis Moskvin, I owe the maximum gratitude, for years of their unflinching support, patience, understanding, and not giving up on me at a rough time. Contents List of Figures ............................................. viii 1 Introduction ............................................ 1 1.1 Feedback Control Design ......................... 1 1.2 Lyapunov Functions ........................... 7 1.3 Uncertainty and Robustness ....................... 9 1.4 High-Gain Observer-Based Design .................... 14 1.5 Literature Review for Switching Control ................ 19 1.6 Overview of the Thesis .......................... 21 2 Sliding-Mode Control-Based Tracking of Minimum-Phase Non- linear Systems .......................................... 24 2.1 Description of the Idea .......................... 24 2.2 Motivating Example ........................... 26 2.2.1 Robust control [22] ........................ 26 2.2.2 Adaptive control [21] ....................... 29 2.2.3 Logic-based switching ...................... 30 2.2.4 Simulation results ......................... 33 2.3 Problem Formulation ........................... 38 2.3.1 Model and Main Assumptions .................. 38 2.3.2 State feedback control for p E ’P“) ............... 40 2.3.3 High-gain observer for p E 73“) ................. 43 2.3.4 Switching logic .......................... 45 2.4 Main Result ................................ 45 2.5 Remarks .................................. 52 Lyapunov-Based State and Output Feedback Stabilization for a Class of Nonlinear Systems ................................ 54 3.1 Introduction ................................ 54 3.2 Class of Systems ............................. 57 3.3 Switching Logic .............................. 60 3.4 Main Result ................................ 65 3.5 Examples ................................. 71 3.5.1 Linear non-minimum phase system ............... 72 3.5.2 Nonlinear system and adaptive control ............. 77 3.6 Remarks .................................. 80 Nonlinear Example: Case Study ........................... 83 4.1 Model ................................... 83 4.2 Continuous Controller Design ...................... 84 4.2.1 Lyapunov-based switching output feedback ........... 84 4.2.2 Lyapunov-based switching state feedback ............ 86 4.2.3 Scale-independent hysteresis-based switching logic ....... 87 4.3 Practical Implementation ......................... 89 4.3.1 Digital implementation ...................... 89 4.3.2 Suppressing measurement noise ................. 91 4.4 Simulation Results ..... A ....................... 92 4.4.1 Noise-free case ........................... 93 4.4.2 Acceptable level of noise ..................... 
96 vi 4.5 Remarks .................................. 99 5 Conclusions ............................................. 101 A Listings of the programs .................................. 104 Bibliography .............................................. 121 vii List of Figures 1.1 2.1 2.2 2.3 2.4 3.1 3.2 3.3 4.1 4.2 4.3 4.4 4.5 Pendulum ................................. Simulation results for the case of p = 5 ................ Simulation results for the case of p = —10 ............... Performance under unmodeled dynamics ................ Performance under unmodeled dynamics and measurement noise . . . Simulation results for y, = 0.5 : the dashed line is for adaptive control and the solid line is for switching control ................. Simulation results for y, = 0.1 sin(t) : the dashed line is for adaptive control and the solid line is for switching control. ........... Simulation results for y, = 0.5 sin(t) : the dashed line is for adaptive control and the solid line is for switching control. ........... Improving pre—routed search: the case of z" = 10 ........... Improving pre-routed search: the case of z" = 40 ........... Noise-free performance: the case of z" = 30 .............. Performance of the state feedback controllers without and with filters: the case of z" = 20 and on = 10‘4 ................... Performance of the state feedback controller without filters in the pres- ence of noise of different levels: the case of z" = 40 .......... viii 35 36 37 79 97 4.6 Performance of the controller with hysteresis-based scale-independent logic without filters and with filters, having II" = 0.01 and no 2 4 : the case of 2" = 40 and a0 = 1 ..................... 99 A.1 state-d.mdl, page 1 ........................... 105 A2 state_d.mdl, page 2 ........................... 106 A3 state_d.mdl, page 3 ........................... 107 A.4 state_d.mdl, page 4 ........................... 108 A5 state_d.mdl, page 5 ........................... 109 A6 state-d.md1, page 6 ........................... 110 A.7 state_d.md1, page 7 ........................... 111 A8 state-d.mdl, page 8 ........................... 112 A9 hgo1d.m (High-gain observer), page 1 .................. 113 A.10 hgold.m (High-gain observer), page 2 .................. 114 A.11 1bsl.m (Lyapunov-based switching logic), page 1 ........... 115 A.12 lbsl .m (Lyapunov-based switching logic), page 2 ........... 116 A.13 lbsl .m (Lyapunov-based switching logic), page 3 ........... 117 A.14 main- sf.m, page 1 ............................ 118 A.15 main- sf .m, page 2 ............................ 119 A.16 main- sf .m, page 3 ............................ 120 ix Chapter 1 Introduction In this thesis we consider some results in the field of mathematical theory of nonlinear feedback control design. This chapter is included in order to make the presentation self-contained and easy to read for a non-specialist in control. We start with a brief overview of some classical problem formulations, traditional and modern techniques. We define terminology and provide some references for the results to be used latter in the main part of the thesis. Finally, in the last part of this chapter we give a brief preview of our results. In addition, we identify the contribution and show the location of our results in the broader picture. 1.1 Feedback Control Design Let us, following [43, Ch. 1], start with a simple physical motivating example. Consider the pendulum of Fig. 1.1, to which one can apply a torque as an external force. 
Assuming the rod is rigid and has zero mass, the system can be described by the second-order nonlinear differential equation

$m l \ddot\theta(t) + m g \sin(\theta(t)) = u$, (1.1)

Figure 1.1: Pendulum

where $\theta(t)$ is the value of the counterclockwise angle with respect to the vertical at time $t$, $\dot\theta(t) = \frac{d}{dt}\theta(t)$, $m$ is the mass at the tip, $l$ is the length of the rod, $g$ is the acceleration due to gravity, and $u$ is the value of the external torque (counterclockwise being positive) at time $t$, which we can assign arbitrarily. We call $u$ the input or control.

Suppose our goal is to design a control law that stabilizes the pendulum at the upward vertical position ($\theta = \pi$ and $\dot\theta = 0$), i.e., to find a formula for $u$, the torque that moves the pendulum into the upward position and keeps it there. It is clear that, assuming that the pendulum is initially at rest, the goal can be achieved in many different ways by appropriately defining the control

$u = k(t)$ (1.2)

as a function of time (a solution in this form is called a feedforward control or precomputed control [43, Sec. 1.4]). Moreover, the behavior of the solution of the system (1.1), (1.2) can be optimized in order to minimize the transition time (obeying certain physical restrictions on the control magnitude) or the power consumption and to satisfy certain specifications of the provoked motion¹. However, it is clear that, since the upward equilibrium of the unforced or open-loop system (with $u \equiv 0$) is unstable (physically, and mathematically in the sense of Lyapunov), if the initial conditions for the system are slightly (actually, arbitrarily!) different from the assumed values, the feedforward control law would fail, since absolutely no function of time can change stability.

¹This kind of problem is in the field of the Theory of Optimal Control, which naturally complements the problem we are interested in in this thesis.

The stabilization problem can be solved in the more realistic case when the initial conditions are not known in advance, i.e., for $(\theta(0), \dot\theta(0)) \in \Omega$, where $\Omega \subset \mathbb{R}^2$ is a known (typically compact) set, using feedback control, provided the pendulum is equipped with a set of sensors. The sensors measure a certain set of signals in the system

$y = h(\theta(t), \dot\theta(t))$ (1.3)

on-line, called the outputs, and allow us to define $u$ in the form of either static output feedback

$u = k(t, y)$ (1.4)

or dynamic output feedback

$u = k_1(t, y, z), \quad \dot z = k_2(t, y, z)$. (1.5)

In the special case when the output $y$ is precisely the vector of state variables ($\theta$ and $\dot\theta$), (1.4) is called static state feedback while (1.5) is called dynamic state feedback.

Introducing the notation (and omitting the dependence on $t$)

$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \theta - \pi \\ \dot\theta \end{bmatrix}$ and $f(x, u) = \begin{bmatrix} x_2 \\ \left(u/m - g\sin(x_1 + \pi)\right)/l \end{bmatrix}$,

we can rewrite (1.1) and (1.3) in the general form of a vector differential equation with input and output

$\dot x = f(x, u)$ and $y = h(x)$, (1.6)

where $x \in \mathbb{R}^n$ is a vector of state variables, $\dot x = \frac{dx}{dt}$ is the vector of time derivatives of $x$, $t$ is time, $y \in \mathbb{R}^k$ is a vector of measurements (outputs), and $u \in \mathbb{R}^m$ is a vector of controls (inputs) to be determined. If $k = m = 1$, the system is called single-input single-output [SISO]; if $k > 1$ and $m > 1$, it is called multi-input multi-output [MIMO]. Typically, $f(\cdot)$ and $h(\cdot)$ are locally Lipschitz vector functions and $x(0) \in \Omega$, with $\Omega \subset \mathbb{R}^n$ being a given (often compact) set of initial conditions, containing the origin in its interior.
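To make the state-space form (1.6) concrete, the following MATLAB sketch (not part of the original text) writes the pendulum (1.1) in that form and integrates it with $u = 0$; the numerical values of $m$, $l$, and $g$ are assumed here purely for illustration.

    % Pendulum (1.1) in the state-space form (1.6): x1 = theta - pi, x2 = theta_dot.
    m = 0.1; l = 1; g = 9.81;            % illustrative parameter values (assumed)
    f = @(x,u) [x(2); (u/m - g*sin(x(1)+pi))/l];
    u = @(t,x) 0;                        % open loop: u = 0
    x0 = [0.01; 0];                      % small initial deviation from the upward position
    [t,x] = ode45(@(t,x) f(x, u(t,x)), [0 3], x0);
    plot(t, x(:,1))                      % the deviation grows: the upward equilibrium is unstable

Starting near the upward position, the simulated deviation $x_1$ grows, which is exactly the instability that the feedback designs discussed next have to overcome.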
We are ready now to define the stabilization problem: design a control law in the form of either dynamic or static feedback [23, 19, 37] that guarantees that the solutions of the closed-loop system, (1.6) and (1.5) or (1.6) and (1.4), respectively, exist and are asymptotically small in a certain sense. In particular, one might require

• either practical stabilization: all solutions are uniformly bounded, and for a given sufficiently small positive number $\varepsilon_0$ there exists $T > 0$ such that $\|x(t)\| \le \varepsilon_0$ for $t \ge T$,

• or asymptotic stabilization: $x(t) \equiv 0$ is a solution of the closed-loop system, which is stable in the sense of Lyapunov [12, 41, 23] (for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\|x(t)\| \le \varepsilon$ whenever $\|x(0)\| \le \delta$) and attractive, i.e., $\lim_{t \to \infty} \|x(t)\| = 0$ whenever $x(0) \in \Omega$.

It is easy to see that the feedback linearization-based [23, Sec. 13.3] static state feedback

$u = m l \left( \frac{g}{l}\sin(x_1 + \pi) - 2x_2 - x_1 \right)$ (1.7)

or the dynamic output feedback, with $y = x_1$,

$u = m l \left( \frac{g}{l}\sin(y + \pi) + 8z - 3y \right)$ and $\dot z = y - 3z$ (1.8)

transform the system (1.1) into a linear exponentially stable system

$\ddot x_1 + 2\dot x_1 + x_1 = 0$ or $\dddot z + 3\ddot z + 3\dot z + z = 0$,

respectively, and, therefore, solve the problem of global ($\Omega = \mathbb{R}^2$) asymptotic stabilization.

Suppose now that, instead of asymptotic stabilization of the upward position, we would like to achieve a more challenging task: we need the pendulum to follow a predefined reference signal $\theta_d(t)$ as closely as possible. Assuming that this signal is twice continuously differentiable and redefining the state variables as $x_1 = \theta - \theta_d(t)$ and $x_2 = \dot\theta - \dot\theta_d(t)$, it is not hard to see how to modify (1.7) and (1.8). We can take

$u = m l \left( \ddot\theta_d(t) + \frac{g}{l}\sin(x_1 + \theta_d(t)) - 2x_2 - x_1 \right)$

or

$u = m l \left( \ddot\theta_d(t) + \frac{g}{l}\sin(y + \theta_d(t)) + 8z - 3y \right)$ and $\dot z = y - 3z$

in order to ensure that the solutions of each closed-loop system exist and that $\lim_{t \to \infty} |\theta(t) - \theta_d(t)| = 0$.

In general, for the system (1.6), we define a controlled output $y_c = h_c(x)$, a sufficiently smooth (continuously differentiable with a certain number of uniformly bounded derivatives) reference signal $r(t)$, and a tracking error $e(t) = h_c(x(t)) - r(t)$. The goal now is to design a static or dynamic, state or output feedback control law to ensure existence of the solutions of the closed-loop system with $x(0) \in \Omega$ and

• either practical tracking: all solutions are uniformly bounded and the tracking error is uniformly ultimately bounded, i.e., for a given pre-specified (sufficiently small) $\varepsilon_0 > 0$ there exists $T > 0$ such that $\|e(t)\| \le \varepsilon_0$ for $t \ge T$,

• or asymptotic tracking: all solutions of the closed-loop system are uniformly bounded and $\lim_{t \to \infty} \|e(t)\| = 0$.

A special case of tracking with a constant reference signal is called regulation and can often be reduced to a significantly simpler problem of stabilization. We note also that in some practically important cases the measured and controlled outputs are the same.

It is clear that neither the stabilization nor the tracking problem can be solved without additional structural assumptions on the functions $f(\cdot)$ and $h(\cdot)$. Depending on the particular assumptions, different design strategies have been proposed in the literature [23, 19]. A few of them are described below, after we briefly recall one of the techniques that is usually used to prove that the goal is achieved, or even during the design stage.
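As a quick numerical check of the tracking version of (1.7), the following MATLAB sketch (not part of the original text) simulates the closed loop for the reference $\theta_d(t) = \sin t$; the plant parameters are again assumed only for illustration.

    % Tracking controller based on (1.7): x1 = theta - theta_d, x2 = theta_dot - theta_d_dot.
    m = 0.1; l = 1; g = 9.81;                               % assumed parameter values
    thd = @(t) sin(t);  thd_d = @(t) cos(t);  thd_dd = @(t) -sin(t);
    u = @(t,th,thdot) m*l*( thd_dd(t) + (g/l)*sin(th) - 2*(thdot - thd_d(t)) - (th - thd(t)) );
    rhs = @(t,q) [q(2); (u(t,q(1),q(2))/m - g*sin(q(1)))/l];   % q = [theta; theta_dot]
    [t,q] = ode45(rhs, [0 10], [0.5; 0]);
    plot(t, q(:,1) - thd(t))                                 % tracking error theta(t) - theta_d(t) -> 0

Because the control cancels the nonlinearity and the reference acceleration exactly, the tracking error obeys the linear equation $\ddot e + 2\dot e + e = 0$ and decays exponentially, in agreement with the discussion above.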
1.2 Lyapunov Functions

As soon as an appropriate control law is designed, the closed-loop system can be rewritten in the form

$\dot x = F(t, x), \quad x(0) \in \Omega$. (1.9)

If the control law is locally Lipschitz and the reference signal is 'nice'², then $F(\cdot)$ is locally Lipschitz in $x$ and piecewise continuous in $t$. It is known that, if this is the case, solutions exist locally, i.e., there exists $T \in (0, \infty]$ such that the trajectories are well-defined for $t \in [0, T)$ (see, e.g., [23, Th'm 3.1]). However, we need to show that the solutions of (1.9) exist globally (are extendable to $[0, \infty)$) and satisfy certain properties; in particular, we often need to prove stability or asymptotic stability. For nonlinear systems, the most universal and popular approach for this kind of analysis is the Lyapunov technique; see, e.g., [23, 12, 41] and especially [23, Th'ms 4.18, 8.4, 4.8 and 4.9].

²For example, $\theta_d(t)$ in the tracking problem for the pendulum example could be taken twice differentiable with a bounded piecewise continuous second derivative.

For illustration purposes we assume that $F(\cdot)$ is time-independent, which is usually the case for the problem of stabilization. The idea is to use an auxiliary scalar-valued continuously differentiable function $V(x)$, called a Lyapunov function, with the following properties:

• $V(0) = 0$ and there exists $C \in (0, \infty]$ such that every set in the nested family $\{x \in \mathbb{R}^n : V(x) \le c\}$ with $c \in [0, C]$ is compact;

• there exists $c_1 \in [0, C]$ such that $\Omega \subset \{x \in \mathbb{R}^n : V(x) \le c_1\}$ and, for practical stabilization, there exists $c_2 \in [0, C]$ such that $V(x(t)) \le c_2$ implies $\|x(t)\| \le \varepsilon_0$;

• the time derivative of $V(x(t))$ along the trajectories of the system, i.e.,

$\dot V = \dfrac{\partial V}{\partial x} F(x)$,

is strictly negative for $x$ inside the set $\{x \in \mathbb{R}^n : V(x) \le c_1\}$ but outside the set $\{x \in \mathbb{R}^n : V(x) \le c_2\}$, with $c_2 = 0$ for asymptotic stabilization.

There are several standard control design techniques for nonlinear systems [19, 23] in the form (1.6). Some of these approaches (not only the analysis) are based on constructive applications of the theory of Lyapunov functions.

As an example, let us consider the problem of asymptotic tracking of a bounded continuously differentiable signal $x_d(t)$ for the system $\dot x = f_1(x) + u$. We can start with $\tilde x = x - x_d(t)$ and take $V(\tilde x) = \tilde x^2$. Computing the derivative of $V$, we obtain

$\dot V = 2\tilde x\left(-\dot x_d(t) + f_1(x) + u\right)$.

Naturally, the choice

$u = k(t, x) \equiv \dot x_d(t) - f_1(x) - \tilde x/2$

ensures $\dot V \le -\tilde x^2$ and solves the problem. In a similar way, the stabilization and tracking control laws presented above for the pendulum example could be obtained as well (if we start with a quadratic Lyapunov function for the corresponding closed-loop systems). However, for most practically important problems the situation is complicated by the so-called presence of uncertainty.

1.3 Uncertainty and Robustness

Returning to the problem of stabilizing the upward equilibrium of the pendulum system (1.1), it is worth noting that for a 'real' physical system a control law like (1.7) is not possible to realize if the value of either the mass, $m$, or the length, $l$, is not known precisely. Moreover, if the pendulum is a model for a one-link robotic manipulator, so that $m$ mostly corresponds to the mass of the object held in the grasp, it may vary significantly. A more realistic assumption is that the system, instead of (1.6), should be described as

$\dot x = f(p, x, u), \quad y = h(x)$, (1.10)

where $p$ is a vector of unknown parameters of the model that are known to belong to a given compact set $\mathcal{P}$.
In particular, for (1.1), we can take

$p = \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} 1/(ml) \\ -g/l \end{bmatrix}$, $\quad f(p, x, u) = \begin{bmatrix} x_2 \\ p_1 u + p_2 \sin(x_1 + \pi) \end{bmatrix}$,

and, instead of (1.7), apply

$u = \left( -\hat p_2 \sin(x_1 + \pi) - 2x_2 - x_1 \right) / \hat p_1$,

where $\hat p$ is a vector of nominal values for the parameters. It is not hard to show that, if the set of parameters $\mathcal{P}$ is sufficiently small, this control law will still work; however, the corresponding modification for the tracking of an oscillating signal would lead to practical tracking (not asymptotic). In addition, if $\mathcal{P}$ is not small enough or if practical tracking is not acceptable, the design must be done differently and the whole mathematical problem should be reformulated.

In general, the function $f(\cdot)$ cannot be assumed known, and therefore the designed control law must work not for one specific model but for a certain class of models in the form of (1.6). If this is done, we say that the control law is robust with respect to the corresponding class of uncertainties. The main part of this thesis is devoted to the problem of feedback control design in the presence of parametric uncertainty: we will assume that the class of systems can be described by (1.10). There are three basic techniques for dealing with uncertainty:

• Adaptive control design, see e.g. [2, 18, 26], [42, Ch. 8], is based on the idea of parameter identification. First, we design a control law for each possible set of the parameters, e.g. in the form $u = k_1(t, p, y)$. Second, we design an adaptation or identification law, e.g. $\dot{\hat p} = k_2(t, \hat p, y)$, sometimes aiming to achieve parameter convergence, $\lim_{t \to \infty} \|\hat p(t) - p\| = 0$, and sometimes not. Finally, the control is implemented as $u = k_1(t, \hat p, y)$ and $\dot{\hat p} = k_2(t, \hat p, y)$.

• Robust control design, see e.g. [40, 47], [23, Sec. 14.1 and 14.2], [42, Ch. 7], [30] and references therein, is based on the worst-case scenario. We do not employ the control law designed for the known-parameters case. We attempt to find a control law that would work for all possible values of the parameters simultaneously, relying only on the knowledge of the whole set of parameters. In particular, sometimes, analyzing the derivative of a Lyapunov function, we can choose the control law to dominate over all the uncertain terms instead of canceling them as in the feedback linearization-based approach.

• Switching control design is similar to the adaptive design except for the choice of the parameter identification law. Instead of adjusting the estimate of the parameters continuously, according to a differential equation, we change it abruptly at certain switching times if certain conditions are satisfied. Since a contribution to this approach is the subject of this thesis, we give a more accurate description, a literature review, and references below in a separate section.

To illustrate the ideas, let us consider the problem of asymptotic stabilization for

$\dot x = p + u$. (1.11)

For the adaptive design, we start by pretending that $p$ is known. In this case, the control law $u = -p - x/2$ would solve the problem, since the closed-loop system would be $\dot x = -x/2$, and asymptotic stability can be shown with the Lyapunov function $V = x^2$ (applying, e.g., [23, Th'm 4.9]). Since $p$ is unknown, we use an estimate, $\hat p$, for the parameter:

$u = -\hat p - x/2$.

Let us define the estimation error $\tilde p = p - \hat p$. Augmenting the Lyapunov function $x^2$, used above, with a quadratic term in $\tilde p$, we obtain

$V(x, \tilde p) = x^2 + \tilde p^2$ and $\dot V = -x^2 - 2\tilde p\,(\dot{\hat p} - x)$.

It can be shown that the adaptation law $\dot{\hat p} = x$ results in $\dot V = -x^2$ and ensures asymptotic stabilization.
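A minimal MATLAB sketch of this adaptive loop for (1.11) is given below; the true value of $p$ is assumed only in order to run the simulation and is, of course, not used by the controller.

    % Adaptive stabilization of (1.11): u = -phat - x/2, adaptation law phat_dot = x.
    p = 3;                                    % true (unknown) parameter, assumed for simulation
    rhs = @(t,w) [ p + (-w(2) - w(1)/2);      % w(1) = x:     xdot = p + u
                   w(1) ];                    % w(2) = phat:  phat_dot = x
    [t,w] = ode45(rhs, [0 20], [1; 0]);       % x(0) = 1, phat(0) = 0
    plot(t, w(:,1), t, w(:,2))                % x(t) -> 0; in this example phat(t) -> p as well

Here the closed loop in the coordinates $(x, \tilde p)$ is a stable linear system, so both the state and the estimation error converge to zero for this simple example.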
For a more general nonlinear system we typically employ a more advanced tool, like the Barbashin-Krasovskii Theorem [23, Th'm 4.2], LaSalle's Theorem [23, Th'm 4.4], or a Barbalat's Lemma-based stability theorem [42, Lemma 4.3], [23, Th'm 8.4].

To regulate the system (1.11) using the robust design, we can use the same Lyapunov function once again and compute its derivative:

$V(x) = x^2$ and $\dot V = -x^2 + 2x\left(u + p + x/2\right)$.

It is easy to see that, since the set of parameters $\mathcal{P}$ is compact, we can take³

$u = -\dfrac{x}{2} - \mathrm{sign}\{x\} \sup_{p \in \mathcal{P}} |p|$

to obtain $\dot V \le -x^2$, which ensures asymptotic stability⁴.

³Here $\mathrm{sign}\{x\} = 1$ if $x \ge 0$ and $\mathrm{sign}\{x\} = -1$ otherwise.

⁴It is easy to see that the trivial solution exists when $x(0) = 0$, and the right-hand side of the closed-loop system is locally Lipschitz otherwise.

Known robust and adaptive techniques are not always applicable or practical in the case when the set of possible values of the parameters is large. We will illustrate what is meant by 'not practical' below, in the next chapter, via simulations. In order to see some of the difficulties, let us consider the system

$\dot x = p\,u$,

where $p \in \mathcal{P}$ with $\{-1, 1\} \subset \mathcal{P}$ and $0 \notin \mathcal{P}$. For the design in the case of a known parameter, we can proceed as above and obtain that

$u = -\dfrac{x}{2p}$

stabilizes the system. Let us consider now

$V = x^2 + \tilde p^2$

and attempt to derive the adaptation law for

$u = -\dfrac{x}{2\hat p}$.

We obtain

$\dot V = 2x\,p\,u + 2\tilde p\,\dot{\tilde p} = -\dfrac{\hat p + \tilde p}{\hat p}\,x^2 - 2\tilde p\,\dot{\hat p} = -x^2 - \tilde p\left(\dfrac{x^2}{\hat p} + 2\dot{\hat p}\right)$.

We notice that, although the choice

$\dot{\hat p} = -\dfrac{x^2}{2\hat p}$

would ensure $\dot V = -x^2 \le 0$, not only stability but even global existence of the solutions cannot be argued, since the right-hand side is undefined at $\hat p = 0$. Moreover, if $x(0) \ne 0$, $\hat p(0) > 0$, and $p < 0$, then $\hat p(t)$ must be monotonically decreasing as $t \to \infty$, approaching the bad point $\hat p = 0$, and, clearly, $\hat p(t) \not\to p$.

Trying to apply the robust design, we also run into obstructions. Proceeding as above, we obtain $V(x) = x^2$ and $\dot V = 2p\,x\,u$. It is clear that there is no choice of the control in the form $u = k(t, x)$ that would guarantee

$\dot V = \left(2x\,k(t, x)\right) p \le 0$

simultaneously for all $p \in \mathcal{P}$, since the sign of $p$ is unknown. From an intuitive point of view, the complexity of the adaptive and robust designs here is due to the unknown direction of control (whether a positive $u$ would result in an increase or a decrease of $x(t)$ depends on the value of the parameter); this situation is called unknown high-frequency gain.

Let us assume that $\mathcal{P} = \{-1, 1\}$. We know that $u = -x/2$ would work for $p = 1$ and $u = x/2$ would work for $p = -1$. It is also obvious that, if the wrong control law is applied, $x(t)$ monotonically approaches either $\infty$ or $-\infty$. Assuming that this 'bad behavior' can be identified (just check whether $x(t)$ is positive and increasing or negative and decreasing), it seems reasonable to suggest the following control strategy. Put one of the control laws into the loop and wait. As soon as escaping to $\pm\infty$ is identified, switch to the other control law. This control strategy is an example of switching control design. In general, the most challenging part of this approach is the design of the switching logic, i.e., identifying that the current control law does not work and choosing the one to switch to.

1.4 High-Gain Observer-Based Design

Now we are going to present another technique, which is useful for stabilization or tracking problems when some of the state variables are not available and therefore must not be used in the computation of the control signal.
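Before developing the observer-based designs, the following MATLAB sketch illustrates the naive switching strategy described at the end of Section 1.3 for $\dot x = pu$ with $\mathcal{P} = \{-1, 1\}$; the divergence test used here (the magnitude of $x$ has grown since the last switch) is just one possible way to detect the 'bad behavior', and the true $p$ is assumed only to run the simulation.

    % Naive switching for xdot = p*u with unknown sign, p in {-1, 1}.
    p = -1;                      % true parameter, assumed for simulation only
    dt = 1e-3; T = 10;           % integration step and horizon
    x = 1; khat = -0.5;          % start with u = -x/2 (the controller designed for p = +1)
    xs = x;                      % |x| at the last switching instant
    X = zeros(1, round(T/dt));
    for n = 1:numel(X)
        u = khat*x;              % current candidate control law
        x = x + dt*(p*u);        % Euler step of xdot = p*u
        if abs(x) > 2*abs(xs)    % 'bad behavior': |x| has grown since the last switch
            khat = -khat;        % switch to the other control law
            xs = x;
        end
        X(n) = x;
    end
    plot(dt*(1:numel(X)), X)     % x(t) -> 0 after at most one switch

With the wrong controller in the loop the state grows exponentially, the growth is detected, the gain is flipped, and from then on the correct controller drives $x(t)$ to zero, which is the essence of the strategy described above.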
Consider again the stabilization of the upward position for the pendulum system (1.1). We would like to derive a dynamic output feedback control using the output $y = x_1$ as an alternative to (1.8). Our new control law will be built on a known static state feedback control law⁵. Such a procedure is called observer-based design. Had both states $x_1$ and $x_2$ been available, we would have used (1.7). Since $x_2$ is not available, we may obtain its estimate, $\hat x_2$, and use the feedback control law

$u = m l \left( \frac{g}{l}\sin(y + \pi) - 2\hat x_2 - y \right)$.

⁵For simplicity of presentation we consider here only the case of known parameters.

Since

$x_2(t) \equiv \dot y(t) = \lim_{h \to 0} \dfrac{y(t) - y(t - h)}{h}$,

a possible way to obtain an estimate of $x_2$ is to use Euler's backward difference formula (assuming $y(t) \equiv 0$ for $-\varepsilon \le t < 0$)

$\hat x_2(t) = \dfrac{y(t) - y(t - \varepsilon)}{\varepsilon}$,

where $\varepsilon > 0$ is sufficiently small. It is clear that for $t \in [0, \varepsilon)$ the estimate obtained with this formula is not reliable and is huge ($\hat x_2 = y/\varepsilon$ for $t < \varepsilon$), since $\varepsilon$ is small. Such behavior, which is called the peaking phenomenon, might provoke an unacceptable transient and even a finite escape time in a more complicated system. An alternative approach is to use the high-gain observer

$\dot{\hat x}_1 = \hat x_2 + \dfrac{2}{\varepsilon}(y - \hat x_1)$ and $\dot{\hat x}_2 = \dfrac{1}{\varepsilon^2}(y - \hat x_1)$

with arbitrary initial conditions and small $\varepsilon > 0$. Defining the scaled estimation error

$\eta = \begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix} = \begin{bmatrix} (y - \hat x_1)/\varepsilon \\ x_2 - \hat x_2 \end{bmatrix}$,

we can rewrite the dynamics of the observer as

$\varepsilon \dot\eta = \begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix} \eta + \varepsilon \begin{bmatrix} 0 \\ \ddot y(t) \end{bmatrix}$ and $\hat x_2 = x_2 - \eta_2$.

Now it is reasonable to believe⁶ that, if $\ddot y(t)$ is uniformly bounded and $\varepsilon$ is sufficiently small, then $\eta_2(t)$ vanishes and $\hat x_2(t)$ approaches $x_2(t)$, possibly up to an error of the order of $O(\varepsilon)$⁷. We note that $\eta(0) = O(1/\varepsilon)$, and hence peaking may occur here as well. However, for either of these two schemes of estimating the derivative, the system can be protected by saturating the control signal outside of the region of interest, based on the given compact set of initial conditions, as suggested in [7].

⁶See [23, Sec. 14.5.1] for further motivation.

⁷We write $f(\varepsilon) = O(g(\varepsilon))$ if $\lim_{\varepsilon \to 0} f(\varepsilon)/g(\varepsilon)$ is finite.

For the special case of the output feedback control design problem in which the missing (unavailable) state variables are derivatives of the outputs, we can proceed in a way similar to the one presented above [3], [23, Sec. 14.5]. Consider the system⁸

$\dot x_1 = A x_1 + B \phi(x_1, x_2, u)$, $\quad \dot x_2 = \psi(x_1, x_2, u)$, $\quad y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} C x_1 \\ h_2(x_1, x_2) \end{bmatrix}$, (1.12)

where

$A = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix}$, $\quad B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$, $\quad C = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}$

are matrices of appropriate dimensions (a chain of integrators), and $\phi(\cdot)$, $\psi(\cdot)$, and $h_2(\cdot)$ are locally Lipschitz nonlinear functions vanishing at the origin. The control design can be done in two steps. First, we derive a locally Lipschitz dynamic partial state feedback control law, globally bounded in $x_1$ (saturated outside of the region of interest),

$u = k_1(x_1, y_2, z)$ and $\dot z = k_2(x_1, y_2, z)$. (1.13)

Then, we use the high-gain observer

$\dot{\hat x}_1 = A \hat x_1 + B \phi_0(\hat x_1, y_2, u) + H(y_1 - C \hat x_1)$, (1.14)

⁸For simplicity, we present here only a simplified version of the result from [3].

where $\phi_0(\cdot)$ is a model for $\phi(\cdot)$, globally bounded in $\hat x_1$,

$H = \begin{bmatrix} \dfrac{\alpha_1}{\varepsilon} & \dfrac{\alpha_2}{\varepsilon^2} & \cdots & \dfrac{\alpha_{n_1}}{\varepsilon^{n_1}} \end{bmatrix}^T$,

$\varepsilon > 0$ is a small parameter, and the $\alpha$'s are chosen so that the roots of

$s^{n_1} + \alpha_1 s^{n_1 - 1} + \cdots + \alpha_{n_1} = 0$

are in the open left-half plane. Finally, we use the control law

$u = k_1(\hat x_1, y_2, z)$ and $\dot z = k_2(\hat x_1, y_2, z)$. (1.15)
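As an illustration of the observers discussed in this section, the following MATLAB sketch (not from the original text) runs the second-order high-gain observer for the pendulum alongside the plant; the parameter values, the input, and the value of $\varepsilon$ are assumed purely for illustration, and the $O(1/\varepsilon)$ peak of the estimate near $t = 0$ is visible in the plot.

    % Second-order high-gain observer estimating x2 = theta_dot from y = theta.
    m = 0.1; l = 1; g = 9.81; ep = 0.01;            % assumed plant parameters and observer parameter
    u = @(t) 0.1*sin(t);                            % some bounded input, for illustration only
    plant = @(t,q) [q(2); (u(t)/m - g*sin(q(1)))/l];
    hgo = @(y,xh) [xh(2) + (2/ep)*(y - xh(1)); (1/ep^2)*(y - xh(1))];
    rhs = @(t,w) [plant(t, w(1:2)); hgo(w(1), w(3:4))];
    [t,w] = ode45(rhs, [0 2], [0.5; 0; 0; 0]);      % plant starts at rest, observer at zero
    plot(t, w(:,2), t, w(:,4))                      % true theta_dot vs its estimate xhat_2;
                                                    % note the large transient (peaking) near t = 0

Decreasing $\varepsilon$ makes the estimate converge faster and more accurately after the transient, at the price of a larger peak, which is exactly the trade-off discussed above.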
Under certain additional technical assumptions, it is shown in [3] that the dynamic output feedback control law recovers the properties of the partial state feedback control law, provided $\varepsilon$ is sufficiently small. In particular, it is shown that the trajectories of the systems with these two control laws in the loop are asymptotically close to each other.

Following [7, 3], in order to analyze a system with a high-gain observer, we make the change of variables

$\eta = [D(\varepsilon)]^{-1}(x_1 - \hat x_1)$, where $D(\varepsilon) = \mathrm{diag}\left[\varepsilon^{n_1 - 1},\ \varepsilon^{n_1 - 2},\ \ldots,\ 1\right]$,

to transform the system (1.12), (1.15), (1.14) into the form

$\dot\chi = F(\varepsilon, \chi, \eta)$ and $\varepsilon\dot\eta = A_0 \eta + \varepsilon G(\varepsilon, \chi, \eta)$, (1.16)

where $F(\cdot)$ and $G(\cdot)$ depend on $\varepsilon$ regularly and are globally bounded in $\eta$, $A_0$ is a Hurwitz matrix whose eigenvalues are exactly the roots of $s^{n_1} + \alpha_1 s^{n_1 - 1} + \cdots + \alpha_{n_1} = 0$, $\chi(0) = \xi_1(\varepsilon) = O(1)$, and $\eta(0) = \xi_2(\varepsilon) = O\left(\varepsilon^{1 - n_1}\right)$. This system is a singular perturbation of the system (1.12), (1.13), called the slow subsystem, which can be written as

$\dot\chi_c = F(0, \chi_c, 0)$ (1.17)

with $\chi_c(0) = \xi_1(0)$. It is easy to see that the new variables $\chi$ and $\eta$ change at different rates, and so the motions in the system can be divided into slow and fast. Had $\xi_2(0)$ been $O(1)$, the system (1.16) would have been in the standard singularly perturbed form [23, Ch. 11]. However, using

• the Lyapunov function $W = \eta^T P_0 \eta$ for the fast subsystem $\varepsilon\dot\eta_0 = A_0 \eta_0$, where $P_0 = P_0^T$ is a positive definite solution of the Lyapunov equation $A_0 P_0 + P_0 A_0^T = -I$,

• arguments similar to the ones used in the proof of continuous dependence of the solutions of a (regularly perturbed) system on parameters [23, Th'm 3.5], and

• boundedness of $F(\cdot)$ and $G(\cdot)$ in $\eta$,

it can be shown that there exists $T(\varepsilon) > 0$, with $\lim_{\varepsilon \to 0} T(\varepsilon) = 0$, such that $\chi(t)$ stays of the order $O(1)$ for $t \in [0, T(\varepsilon))$, while $\eta(t)$ decays to $O(\varepsilon)$ values during this period. After this transient period, which corresponds to peaking, the system is basically in the standard singularly perturbed form and can be analyzed using the composite Lyapunov function $V(\chi) + W(\eta)$, where $V(\chi_c)$ is a Lyapunov function⁹ for the slow subsystem (1.17).

⁹In [3] and [23, Ap. C.23] the existence of this function is argued using a converse Lyapunov function theorem [23, Th'm 4.17]; note, however, that it is a natural assumption if the state feedback design is done on the basis of a Lyapunov function, as above.

1.5 Literature Review for Switching Control

Very recently, switching control design has attracted the attention of many researchers in the mathematical theory of control. The pioneering work by Morse [34] introduced supervisory control design with dwell-time logic for the regulation of uncertain linear systems. We will follow the key starting point of this approach: split the set of parameters into a finite (or infinite) family of smaller compact subsets and design a family of control laws, each to work under the assumption that the parameters are in the corresponding small subset. The idea has been developed later for the control of a fairly general class of linear systems by many researchers; see, in particular, [35, 17] and, especially, [28] and the references therein. After the families have been designed, we put one of the controllers into the loop and wait to observe the behavior of the closed-loop system.
It is obvious that if switching of the control is not allowed faster than a certain fixed period, called dwell-time, then the solutions are globally defined and that if the right controller is put into the loop, then the tracking error must vanish exponentially and sufficiently fast ensuring that the tracking problem is solved. What is not trivial to notice is that if the control laws are designed appropriately then vanishing of the tracking error ensures that all the other signals are bounded. It has also been suggested in [34] to compare the output of the real system with the outputs of a family of models of the system, corresponding to each subset of parameters. The switching in [34] is based on using such output predictors or multi-estimators to compute (on-line) performance indices for all models and then choosing a controller that corresponds to the smallest index as soon as the dwell-time is over. An alternative switching logic has been recently used in [4], refining an idea from [11]. Locally robust candidate controllers are designed and arbitrarily ordered, then pre—routed switching strategy is used based on checking whether the Lyapunov func- 19 tion has decreased by a certain percentage during a small pre—fixed period of time. Extensions of the switching logic ideas to nonlinear supervisory control design is still in an early stage. To the best of our knowledge, only a few results are known for continuous-time nonlinear systems [14, 15, 16, 28, 36]. These papers, in particular, make basic assumptions, similar to the following ones from [15]: (i) the set of parameters is finite; (ii) for each choice of parameters an input-to—state10 (or integral input-to—state) stabilizing output feedback controller with a known gain function should be available; (iii) the multi-estimators are designed so that for each value of the parameters, the interconnection of the real system and each model is integral input-to state (or input-to-state) stable with respect to the corresponding estimation error. It should be noticed also that in these papers, except for [36], the authors employ the scale—independent hysteresis-based switching logic, originally introduced in [16]. In this logic the dwell-time period is not constant and is not specified in advance. Instead of assuming that it is fixed and sufficiently small (so that, in particular, no finite escape would be possible during the time when wrong controller is put into the loop) the authors of [16] have suggested to switch when one of the performance indices is smaller than all the other by a certain percentage. Several important nonlinear examples have been worked out, but characterization of a class of nonlinear systems satisfying the assumptions above is still an open problem. A different switching strategy, which is similar to the one we are going to use, has been used for the problem of state feedback regulation of an uncertain discrete-time 10The notion of input-to—state stability has been introduced by Sontag. Roughly, it means that all the states are small whenever the input signal is small (see, e.g. [23, Sec. 4.8] for a precise definition). Integral-input-to—state stability means that the states are small whenever input signal is small not point-wise but in integral sense. 20 system in a recent paper [1]. One of the main assumptions in [1] is that the Lyapunov function and its incremental difference over one period, for each candidate controller, are independent of the unknown parameters and so are available for computation. 
Moreover, the origin is an equilibrium point independent of which candidate controller is in the loop. For the problem treated in this thesis neither the Lyapunov function nor its derivative is assumed available and we use a high-gain observer to estimate several derivatives of the output in order to obtain estimates of the Lyapunov function and its derivative. When specialized to the regulation case, our problem does not require the origin to be an equilibrium point under all controllers. Also, let us remark that in our continuous-time formulation we have to deal with a possible finite escape time. Lyapunov function and stability-based switching strategies were also used in non- linear systems in a different situation [6, 27, 29, 31, 38]. There, the state space, not the set of parameters, is partitioned into smaller subsets equipped with local or regional candidate controllers and Lyapunov function candidates. We would like to mention also [25], where switching logic is combined with a design that uses a robust control Lyapunov function. 1.6 Overview of the Thesis The goal of this thesis is to develop a systematic procedure for switching control design and analysis, applicable for a large class of nonlinear uncertain systems. In Chapter 2, published in [9], we consider a single-input single-output minimum- phase nonlinear system with large parametric uncertainty. Roughly speaking, the minimum phase property means that when both input and output are small, all other signals are asymptotically small as well; see [23, p.517] for a precise definition. We assume that the system can be represented globally in the normal form; i.e., after an appropriate global change of coordinates it can be written as a chain of inte- 21 grators followed by a nonlinear affine in control equation and that defines the so—called zero dynamics; see [19, Sec. 4.1]. Our goal is to find a dynamic output feedback control law to ensure that the output (practically) asymptotically tracks a bounded smooth reference signal. Earlier work used high-gain observers with saturation to derive adaptive as well as robust control laws for this problem. The-adaptive con- trol law requires the nonlinear functions to be linearly parameterized in the unknown parameters and could have unsatisfactory transient performance for a large parame- ter set. The robust control law is based on a worst-case design and could be overly conservative. High gain feedback is needed to implement both controllers in the case when the set of parameters is large. As a result, the robust and adaptive controllers may perform poorly in the presence of unmodeled dynamics and measurement noise. In order to reduce the controller gain and improve performance we propose a new approach based on partitioning the set of uncertain parameters into smaller subsets. Robust control laws are designed for each subset and logic based switching is used to choose the appropriate control law. The switching rule uses an estimate of the derivative of a Lyapunov function, which is provided by a high-gain observer. In Chapter 3, published in [8], we generalize the technique, presented in Chap- ter 2, and apply it to a wider class of nonlinear systems and more general Lyapunov- function—based state and output feedback control designs instead of the restriction to sliding mode control and output feedback as in Chapter 2. It is worth noting, that, in particular, we require here neither the sign of the high-frequency gain to be known nor the system to be minimum-phase. 
The key idea is the same: split the set of parameters into smaller subsets, design a controller for each of them, and switch the controller if the derivative of the Lyapunov function does not satisfy a certain inequality, after a dwell-time period. However, we do not order the candidate controllers in advance, as earlier. Instead, we use estimates of the derivatives of the states, provided by an extended order high-gain observer, to calculate instantaneous 22 performance indices. When the controller is falsified, we switch to a new controller that corresponds to the smallest index among the controllers that have not been fal— sified yet. This modification is important when the number of candidate controllers is high and pre-routed search may lead to an unacceptable transient performance. Chapter 4 is devoted to study some practical issues of the developed Lyapunov- based switching control design strategy via numerical simulation. We follow the the- ory, presented in the previous chapters, to design output and state feedback controllers for a regulation problem for an example taken from [13]. The example is a second order nonlinear system in strict feedback form with an unknown high-frequency gain with finite set of uncertain parameters. Since most of the practical controllers are implemented digitally, we develop a discrete, sampled-data versions of our dynamic regulators and test the performance of the closed-loop systems in ideal (disturbance free) conditions and in presence of measurement noise. For the state feedback case, the regulation problem for this system has been solved previously using scale-independent hysteresis-based logic, proposed in [16]. We review this alternative approach, follow- ing [13, 15], and compare performance under different controllers in the loop. We show how to reduce the influence of sensor noise with the help of a low-pass Butterworth filter and analyze the acceptable level of noise. Finally, we conclude with some remarks and provide directions for future research in Chapter 5. 23 Chapter 2 Sliding-Mode Control-Based Tracking of Minimum-Phase Nonlinear Systems We consider a single-input-single-output (SISO) minimum-phase nonlinear system that can be represented globally in the normal form. In particular, we are interested in an 8180 affine in control version of the model used in [3], with additional, sufliciently smooth, bounded additive input and output disturbances. The model may depend nonlinearly on a finite number of uncertain parameters. We assume that the uncertain parameters, as well as the initial conditions, belong to known compact sets, which could be large. 2.1 Description of the Idea We are interested in the case when known methods of robust and adaptive control fail to achieve satisfactory performance because available control efforts are restricted while the compact set of the unknown parameters is very large. This set is to be partitioned into a finite number of significantly smaller subsets. For the case when 24 the parameters are assumed to belong to one of these subsets, we design robust output feedback control. Just to be concrete, the control law is based on continuous sliding-mode state feedback [23, sec.14.1], implemented using a high-gain observer [23, sec. 14.5]. A candidate controller is designed for each parameter subset such that the derivative of a Lyapunov function satisfies a certain inequality. 
The idea is to apply one of the controllers over a small pre-fixed period of time (called dwell time); then check whether the estimated derivative of the Lyapunov function, calculated using an extended order high-gain observer, satisfies the inequality. If the inequality fails, we switch to the next controller. We make the observer’s dynamics sufficiently fast so that the observer peaking occurs within the dwell time, whereas the latter is shorter than any possible finite—escape time. By the above construction, there always exists a control law that ensures that the corresponding inequality holds and thus, as soon as it is chosen, no further switching of the control would be done; hence, the switching period is finite. Due to the conservatism of the analysis, there is no guarantee that the controller, designed for the actual small parameter subset, will be used; however trajectories will still converge with the desired rate. The rest of the chapter is organized as follows. We continue with further expla- nation of the main ideas of the design for a motivating example. Then, the class of systems of interest is formally introduced and state feedback control is designed and partially analyzed. We proceed with the main result for the problem of output feedback tracking, followed by a stability proof. We conclude with some remarks and discussions. 25 2.2 Motivating Example To show the idea and associated technical difficulties, we start with a simple moti- vating example. Consider the parametrically-uncertain controlled Duffing’s equation a + as + (pi/)3 = u, (2-1) where p is the unknown parameter, y is the output, it is the control input, and our goal is to asymptotically (practically) regulate y(t) to zero. We assume that p belongs to a given compact set P = [—10,10] and the initial states (y(O),y(0)) belong to another known compact set 0 C R2. Had p been known and y been available to measure, the simple feedback-linearizing regulator u = py + (pg/)3 — ks/u, where s = 2y + y, with positive k and p chosen by pole-placement techniques, would have guaranteed that the origin of the closed-loop system is globally exponentially stable. Moreover, performance issues would not be hard to address from the perspective of linear control theory. To deal with parametric uncertainty, we use robust and adaptive control designs from [22] and [21], respectively. Then, we introduce the new logic-based switching design. 2.2.1 Robust control [22] We use the continuous sliding mode control law u = ,3, +(13y)3 — k Sat(8/rt) 26 where p is an estimate of p, Sat(-) is a smooth saturation function that is bounded, continuously differentiable with bounded first derivative, and satisfies1 .3 - Sat(s) 2 s2 if [3] S1 (2.2) s - Sat(s) Z [.9] if [9] > 1 and p, k are positive constants to be chosen to guarantee that the trajectories con- verge to a positively invariant set where |s| S u. Towards that end, consider the Lyapunov function candidate V(y,s) = 32 + 4y2. Suppose Q C {V(y,s) S 16} so that V S 16 at t = 0. To guarantee a certain exponential convergence rate, we require the inequality V + 2V S O to be satisfied along the closed-loop trajectories. It is not hard to see that V + 2V S — (216/11 - 6) 32 - 81/2 + 2|8y|(I13 — r2|+|153 — PHI/2), provided |s| S u and V + W S -|SI[2k — 618l- 2|y|(|15 - PI + I133 - P3|y2)l, provided ls] > p. In the set {V S 16} we have Is] S 4 and [y] S 2, and it is not hard to show that V + 2V S 0 if k/u 2 3 + (SUP{|15 -10I+4l153 -p3|= m3 E 7’})2/16 and k212+2sup{Ip—p|+4lp3—p3|: 1943519}. 
With the choice 13 = 0, we need k/u 2 100500925 and k 2 8032. In this design, a very high controller gain is needed to ensure convergence when the level of 1 Sat(s) = (l/tanh(1))tanh(s), Sat(s) = 2s/(1 + Isl), and Sat(s) = tan-1(2s) are examples of such function. 27 uncertainty is high. A possible choice for the parameters is k = 8500 and u = 0.008. Finally, we use the output feedback control law u = [93/ + (13y)3 — isms/i.) with s = 2y + t, (2.3) where y = 5:2 is the output of the second-order high-gain observer (HGO): E1=i2+h1(y—Cfil), $32 =h2(y—.'f31), (2.4) in which h1 = 2/5, h; = 1/52, and 5 > 0. The HGO parameter 5 is to be chosen sufficiently small. Using simulation, we choose 5 = 10*. It is worth noting that to make the performance under output feedback close to the performance under the (unimplementable) state feedback, the HGO dynamics must be made sufficiently fast relative to the dynamics under closed-loop state feedback. The latter dynamics are not slow because of the high controller. gain. Consequently, a very small value of 5 > O is required when the uncertainty level is high. It is well known that having too small values for u and 5 as well as too high values for the gain It results in poor performance in the presence of measurement noise and unmodeled high-frequency dynamics. It is important to notice that in the case when there is no uncertainty (p z p) the preceding analysis would require k/u Z 3 (compared to 106 ) and k 2 12 (compared to 104. ) Moreover, as explained above, smaller values of k/p _>_ 3 would allow us to use larger values for 5. Clearly, the huge difference is due to the high uncertainty level and the worst-case design strategy. We show next that adaptive control suffers from the same drawbacks. 28 2.2.2 Adaptive control [21] We set p1 = p and p2 2 p3, so that the dynamics are linear in the parameters, v+p1y+p2y3 =u and (phpg) E P = [-10,10] x {—103,103] C R2. Following [21] we augment the Lyapunov function as W = V+l5i/’Yl “Hg/72, where 71 and '72 are positive adaptation gains, p1 = 13, —p1, p2 = 132 —p2, and pl, 132 are estimates of the parameters. To guarantee that the inequality V + 2V S 0 is satisfied when p, E 0 and 132 E 0, we require W + W = —8y2 + 23(3s - ply — P2113 + u) + 2mm + 225252/72 s 0. Assuming first that s is available, we take u = 211(3, :9) = -2003 + 131.1! + 1523/3, then W + 2V S -8y2 — 39432 + 2131(7133/ + 1230/71 + 2132(7gsy3 + jag/72. We define the adaptive law with parameter projection: 7r¢i(')[1+(bi “El/5i if 152' > bi and ¢i(°) > 0 13.: mat-O [1 + (:3.- + bi)/5l if 23.- < -—b.- and at) < 0 (2.5) 7r¢i(') otherwise where i=1 or i=2, b1=10, b2=103, ¢l(31y) : _Syr and ¢2(S,y) : —Sy3a 29 to ensure that W + 2V S 0 and the parameters do not leave the set 735 = [‘10 —(5, 10+ 5i X {—103 _ 6’103 +6]’ where 6 is a small positive number. To use the estimate s, provided by the second— order HGO, we need to protect the system from the destabilizing effect of peaking by saturating both the control and the adaptive law outside the region of interest. Assuming that initially (at t = 0) pi = 0, p2 = 0, V S 16, and (p1,p2) E P, we obtain that W S Mg, where ME > 16+ 102/71 + 106/72. Hence, |s| S M5, lyl _<_ M3/2, and we redefine ¢1(') = *A’II Sat (éy/MI) , ¢2(') = —M2 Sat (593/942), u = ’I/}() = Mu Sat([—200.§ + 1513/ + fi2y3]/Mu), (2.6) where M1 = Mg/z, M2 = Mg/8, and M1‘ = 205Ms +125Mg. 
It is worth not- ing that the large values for the saturation levels is the result of the high level of uncertainty (see [10] for more on this issue) and is an unwanted feature. Using simulation, we choose 71 = 10, '72 = 103, and 5 = 10‘“. A too small value of 5 is needed due to the same reasons as in the case of robust control and is undesirable. We are ready to present an alternative way to deal with this situation that allows us to keep the controller and observer gains much smaller in order to achieve better performance in the presence of measurement noise and unmodeled high-frequency dynamics. 2.2.3 Logic-based switching To reduce the level of uncertainty we adjust the parameter estimates on-line using logic-based switching. Let us consider a covering of the compact set P by a finite 30 number of comparably small compact subsets {P(’f)},’: 1- We follow the analysis of our robust control to design N control laws a“) = 9% + (13“)yl3 - k.- Sat($'/u.-)a (2-7) where p“) E P“), and 16,, u,- > 0 are chosen under the assumption that p E P“). The next step is to define an algorithm to identify an appropriate regulator and a corresponding rule to switch, among these N control laws on—line. Let r > 0 be a small fixed constant. Suppose that the control law u = u“) is applied for to S t < to + 1'. For t 2 to + r we would like to check whether the inequality V + 2V S 0 holds. If not, then p ¢ P“) and we should switch to u = u‘i“). Again, we ‘dwell’ on this controller for some time (that is r) and then check if the Lyapunov function is decreasing sufficiently fast to decide whether to switch to the next candidate or not. This logic avoids problems with chattering and infinitely fast switching. By choosing 7' small enough to ensure that the solution does not leave a certain compact set, we are guaranteed that the solution is well defined for all t Z to. In addition, it is obvious that switching has to stop in a finite number of steps. However, to apply this logic, we need to estimate V and V on-line. Noting that V + 2V = 2s2 + By2 + (8y + 43);; + 233;, we estimate y and y by :32 and 53:3, respectively, which are provided by the third- order HGO: 551: 532 + knit! ‘ 131), 1£2 = 533 + h2i(y — 531), 5373 = had?! - 531), (2-8) where h1,- = 3/5,, h2, = 3/53, and h3,- = 1/5?, and 5, > O is sufficiently small. The corresponding estimate for s is given by s = i2 + 2y. This observer ensures that the 31 differences (:52 — y) and (32:3 — y) decay exponentially to 0(5,) values within a short peaking period T0 of the order of 5,- log(5,-) [3], following, possibly, each switching. During this period, the system is protected from destabilizing effect of peaking since the control law is globally bounded in :82. For switching we check the inequality V + 2x? = 2s? + 8;;2 + (8y + 49:22 + 25a, 3 a0 (2.9) where a0 > 0 is a small parameter included to deal with possible non-vanishing observation errors. Note that we have to wait for the peaking period To to use the observer outputs in this inequality. Therefore, 7‘ must be greater than To. To finish the design, we need to determine N, p“), In, fig, 5,, and r. ‘ Let us try to reduce k and increase u relative to the robust control design of Section 2.1. For k.- = 200 and u,- = 0.36, we need to split the set of parameters into several subsets {P(‘)}[’;0 such that if p, p“) 6 P“) then Ip — 13|+ 4|p3 — 133| g min{k/2 — 6, Mic/p - 3} = 94. 
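Before the specific partition is fixed below, the following MATLAB sketch shows one way the dwell-time switching of this subsection could be simulated for the Duffing example (2.1); to keep it short it uses the true derivatives of y in place of the estimates produced by the third-order observer (2.8), the partition centers and the tolerance a0 are placeholders, and only k_i = 200, mu_i = 0.36 and the saturation function from the footnote above follow the text.

    % Dwell-time switching among the candidate controllers (2.7) for the Duffing example (2.1).
    % True derivatives of y replace the observer estimates (2.8); phat, a0, tau here are placeholders.
    p = 5;  N = 45;  phat = linspace(-10, 10, N);   % true parameter (assumed) and placeholder partition
    k = 200;  mu = 0.36;  a0 = 1;  tau = 2e-4;      % controller gains from the text; a0, tau assumed
    Sat = @(s) tanh(s)/tanh(1);                     % an admissible saturation function, cf. (2.2)
    dt = 2e-5;  ts = 0:dt:1;  Y = zeros(size(ts));
    y = 1;  yd = 0;  i = 1;  tsw = 0;               % initial state, first candidate, last switch time
    for n = 1:numel(ts)
        s   = 2*y + yd;                                      % sliding variable s = 2y + ydot
        u   = phat(i)*yd + (phat(i)*y)^3 - k*Sat(s/mu);      % candidate controller (2.7)
        ydd = u - p*yd - (p*y)^3;                            % Duffing dynamics (2.1)
        lhs = 2*s^2 + 8*y^2 + (8*y + 4*s)*yd + 2*s*ydd;      % quantity tested in (2.9)
        if ts(n) - tsw >= tau && lhs > a0           % inequality violated after the dwell time: switch
            i = min(i + 1, N);  tsw = ts(n);
        end
        y = y + dt*yd;  yd = yd + dt*ydd;  Y(n) = y;         % Euler step
    end
    plot(ts, Y)                                     % regulated output y(t)

This sketch only illustrates the structure of the logic; the actual design, including the specific partition, the observer (2.8), and the chosen values of tau and epsilon_i, is given next.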
45 A possible choice is N = 45, P = U P“); i=1 p“) = \/3 222', 19“) = [f/22(i- 1),{/22(z'+1)], for i odd; 13“) = ——€/22(z' — 1), P“) = [— V" 222', -,3/22(z° — 2)], for i even. By simulation, we choose 7' = 2 x 10“4 and 5, = 10‘5. We notice that the increase in the value of 5 is not impressive. The reason is a too high number of 32 subsets, which is due to the huge reduction in the value of the controller gains k and Up. We suggest to follow this way if it is crucial to reduce the control effort and if it is expected that the influence of unmodeled high-frequency dynamics will be more significant than the influence of measurement noise. 2.2.4 Simulation results Let us first compare performance of the three designs in the ideal situation when there are neither measurement noise nor unmodeled dynamics. The simulation for p = 5 and p = —10 are given in Fig. 2.1 and Fig. 2.2, correspondingly. In each figure, the results of the adaptive controller are shown in the first row, those of the robust controller in the second row, while the third row shows the results of the switching controller. In all three cases, the first and second columns show the output y and the control u, respectively. In the case of adaptive control we show also the parameter estimates p1 and p2, in the robust control we show 3, and in the switching control we show 3 and parameter estimate p. The case when lpl is not too large is illustrated with p = 5 in Fig. 2.1. For the adaptive and robust controllers, large control efforts, 6 x 106 and 10‘, respectively, are needed during the short peaking period. For the switching controller, on the other hand, the control signal does not exceed 2 x 102 in magnitude. Notice also, that the switching logic gives an appropriate parameter estimate much faster than adaptation and there are no fast control oscillations as in the case of robust control. Remarkably, the output trajectories for all three designs are indistinguishable despite such a big difference in the control effort. The case when [pl is large is illustrated with p = ~10 in Fig. 2.2. The output and control signals under adaptive and robust control are similar to the previous case. For the switching controller, this is the worst possible case because the highest level of the ‘equivalent control’ is needed to ‘overcome’ the system’s dynamics and 33 V(t) 5 W) 1 10x 0 p, O, we need to design the controller to ensure that ly(t) — r(t)| < 50 after some finite time. We assume that (i) There exists a known positive constant 6 such that f1(-) 2 (5 \7’ 5 E R", z E R", and p E P. (ii) The reference signal and its derivatives, up to order n, are available on line3. (iii) lwu(t)l is bounded by a priori known constant and the vectors d(t) = [r(t),7"(t), ' ' ' ,7‘("’(t)lT and w(t) = [wo(t),wé(t), - ' ° ,wl"’(t)lT belong to known compact sets D C Rn“ and W C RM“. (iv) The system 2' = w(p,§, z), with 2 as the state and g as the input, is bounded- input-bounded—state (BIBS) and there exist a continuously differentiable proper function V0(t,p,z) and class [Coo functions ao(-) and 70(-) such that V0(t,p,z) Z ao(llzll) and for all p E P and E 6 IR": V0 2 6%? + %¢(p,£,z) S 0 whenever %(t,p,z) _>_ 70mg”). It is well known that if the z-subsystem, with 2 as the state and 6 as the input, is input-to—state stable (ISS) for all p E P, then assumption (iv) is satisfied. Reference [22] makes the ISS assumption and requires the zero dynamics to be locally exponentially stable when g = 0. Assumption (iv) relaxes both requirements. 
It is possible to weaken this assumption even further and, in particular, require it to hold on a certain compact set if we impose some restrictions on 95 and ’D. However, for the sake of simplicity of the presentation we do not do this. 3It is possible to modify the analysis to deal with the situation when only r(t) is available by means of treating the unavailable derivatives as additional bounded disturbances. 39 It is convenient to introduce a new state variable a: = g + w(t), so that i: = A11: + Blfo(p. a: — w(t). z) + f1(p, :1: — w(t). z){u + w(t)} + 10900)], (2.11) 2: If«’(p,$—U.’(t),2), yzajlt CZQ(p,$—IU(6),Z), where (:1:(O),z(0)) E Q, a compact subset of R"+m such that (a: — w, z) 6 S25 and w E W imply (3,25) 6 Q. We partition the set of parameters P into N smaller subsets {P(f)},’~:1 and design a regulator for each of them. 2.3.2 State feedback control for p E P“) In sliding-mode control design [47], we use the error coordinates e = [y — r(t), r2 — r'(t), . . . ,xn_1 — r("’2)(t)]T and define the sliding surface s = 0 by s = 2,, — r("‘1)(t) + Te, where T = [t1,...,t,,_1]. Then, 5 = Ale + bm(2:,, — r("‘”) = Ale + bm(s — Te), where the pair (A1,bm) represents a chain of (n — 1) integrators. Motion on the sliding surface s = 0 is governed by e = (A1 — me)e. Therefore, T is de- signed so that Am 3- A1 — me is Hurwitz. With this representation, our system can be viewed as an interconnection of three subsystems: an exponentially input- to-state stable linear e-subsystem e = Ame + bms, an 8180 scalar s-subsystem s = f2(p,e,s,z,d,w,,,w) + f3(p,e,s,z,d,w)u, where f2(-) = f0(p,:r — w(t),z) — Ma) + f1 (p, a: — w(t), z)w,,(t) + ulna) + T(Ame + bms), f3(-) = f1(p, :1: — w(t), z), 40 and a BIBS z-subsystem, satisfying assumption (iv), 23 = '1/21(t,p,e,s,z), where 2:1(t,p,e,s,z) ='1.i’(p,$ - w(t), 2)- Let 1}“) E 73“) be a nominal value of p. For each 2' we design a continuous sliding-mode state feedback controller: u = u“) 2 11.42"", 6, s, c, d) — k, Sat(s/u,), (2.12) where ueq(-) is an equivalent control component to be defined later and Sat() is a smooth saturation function that satisfies (2.2). Each of these regulators is designed to work in the case when p 6 pm as follows. We consider the Lyapunov function candidate V(e, s) = ueTPe + 32/2, where u > O and P 2 PT > 0 is the solution of the Lyapunov equation PAm + AiP = —I. For a fixed number 6,, 6 [0, 1/||P||), we compute V + 60V along the trajectories of the closed-loop system with u = u“) : V + 6,,V = -V(€T€ — (SveTPe) + sf3(v)[A,-(-) - k,- Sat(s/u,)], where A,(p,e,s, z,d,w,,, w) = [f2(-) + 6,,3/2 + 2ueTPbm]/f3(-) + ueq(-) is an uncer- tainty term. To make it smaller, we should choose ueq(-) to be the best available model for —[f2(-) + 603/2 + 2ueTPbm]/f3(-). We would like now to define a positively invariant compact set that contains the trajectories of the closed-loop system provided V _<_ 0. Towards this end, let p(-) be a strictly increasing unbounded function such that p(R) 2 sup{||:r — w“ : V(e,s) S R,d E D,w E W}, where the “sup” is finite for any finite R 6 IR due to compactness of D and W. 41 Noting that, due to assumption (iv), V S R and V0 2 70[/)(R)] imply Va 3 0, we define the set U(R) = {(e,s,2): V(818) S R, Vo(t,P»3) S ”Vol/)(Rll W E R71) E 73}, which contains (6,8,2) whenever V S R. The set U (R) is compact because [eT,s]T is linear in a: and d, and d E ’D. Moreover, if (6,8,2) 6 U(R), then ”1:“ S p(R) + sup{||w|| : w E W} and ”2” S ag‘(70[p(R)]). 
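For concreteness, one candidate controller of the form (2.12) can be coded as below. This is a schematic Python sketch under stated assumptions: the equivalent-control model u_eq is problem dependent and is supplied by the designer, and tanh is used as one possible smooth, globally bounded saturation (whether it meets every condition imposed on Sat(·) in (2.2) should be checked for the problem at hand).

import numpy as np

# Sketch: one candidate sliding-mode state feedback law of the form (2.12),
#   u^(i) = u_eq(p_hat_i, e, s, zeta, d) - k_i * Sat(s / mu_i),
# with a smooth, globally bounded saturation.

def smooth_sat(v):
    # tanh: odd, smooth, bounded by 1; one common smooth saturation choice.
    return np.tanh(v)

def candidate_control(e, s, zeta, d, p_hat_i, k_i, mu_i, u_eq):
    """Candidate state feedback for the parameter subset P^(i).

    e      : vector of tracking-error coordinates
    s      : sliding variable, s = x_n - r^(n-1) + T e
    zeta   : any additional measured signals used by u_eq
    d      : reference vector [r, r', ..., r^(n)]
    u_eq   : callable u_eq(p_hat, e, s, zeta, d), the designer's best model of
             the equivalent control (problem dependent; an assumption here)
    """
    return u_eq(p_hat_i, e, s, zeta, d) - k_i * smooth_sat(s / mu_i)

# Hypothetical usage with a placeholder equivalent-control model:
if __name__ == "__main__":
    u_eq = lambda p_hat, e, s, zeta, d: -p_hat * s        # illustration only
    print(candidate_control(e=np.zeros(2), s=0.5, zeta=None, d=np.zeros(4),
                            p_hat_i=2.0, k_i=200.0, mu_i=0.36, u_eq=u_eq))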
We define R to be a fixed positive number such that Q C U (R0) for some R0 6 (O, R). Next, we show how to choose k,- and p,- in order to guarantee that U (R) is positively invariant. Let p,(R), L1,(R), L2,(R), L3.,-(R) Z 0 be such that |A1(')l S 101(3) and |f3(')A.-(°)| S L12(R)||€l|+ L21(R)|S| + 1431(3) for all (6,3,2) 6 U(R), p e P“), d e D, and w e W. Take 17 = V(l — 6,,||P||) and k,- > p,(R); then, inside U(R) and V19 6 73“), using the inequalities above together with (2.2) we obtain V + 6.,V S -l7||6||2 - We; — ps(R)l|SI. provided Isl > 12,-, and V + 5vV S -l7||€||2 + L11(R)|8|llell - [k16/us— L21(R)]|s|2 + L31(R)|Sl, provided |s| S M. By choosing )2,- small enough, we can ensure that plkfls/fli " L21(R)l > [LAID/212. (2-13) 42 Then there exist [3, > 0 and 7, > 0 such that V + (SUV 3 — Ini11{p,'),,fi,V — uiL3,(R)}. (2.14) If u,- is small enough to ensure that #iL13i(R)/(6v + 52') S R (215) then V = R and V}; S '70[p(R)] imply V g 0, due to assumption (iv). We conclude now that if p 6 P“), then every trajectory of the closed-loop system (2.11), (2.12) initiated inside the set 9 can not leave U (R) and, moreover, it will enter the set U0 2 {(8) 8’2) I V(ev 3) S 60) ‘/0 S. ’70[p(R)]}9 where 60 Z comL3,(R)/(5v + 31) with co > 1, in finite time and stay thereafter. To ensure that ly(t) - r(t)| S 60 inside this set, we require 60 g 1163/ IIP‘IH. 2.3.3 High-gain observer for p E 79“) Because 6 and s are unavailable, instead of u = u“) we use the control u = a“) 2 21.42%, 2, ed) — k.- Sate/2.), (2.16) where 5 = 2,, — r<"-1> + Té, (2.17) é = [y -— 73562 — 1",. . . ,in_1 — r("‘2)]T are estimates for s and e, and :i: is provided by the high—gain observer (HGO) 5: =Aoi+Hi(y—il)+b0h(fi(i)véa§and)1 (2'18) 43 in which (A0, b0) represents a chain of (n + 1) integrators and h() is a model for the time derivative of [f()(p, :1: — w, 2) + f1 (p, a: — w, 2){u + 10"} + my] ; we can take - . 01 0’2 01n+1 h E 0. The observer gamls takenas H,- = [—,—2—,...,n—+, , where a1,...,oz,,+1 are chosen so that the polynomial ("+1 + 018‘ + - - - + an“ is Hurwitz and e,- > 0 is a small parameter to be specified. Note that the order of (2.18) is greater than the order of the original system because we need to estimate the n -th derivative of the output in order to estimate V. The latter is needed to implement switching, as explained in the next section. It will be clear from the stability analysis that in order to estimate an ‘extra’ derivative we need to impose the following additional assumptions (v) f0(-), f1(-), u.,(.), Sat(), wu(t), 2115,")(1), and r(")(t) are continuously differentiable; w;(t), wgn+l)(t), and r("+1)(t) are bounded, (vi) ueq(-) is globally bounded in 5 and ég,...,én-1, (vii) partial derivatives of ueq(-) with respect to (é, 5,0 are globally bounded in g and 62,. . .,€n_1, (viii) there exist c1,02,c3 > 0 such that" |lh(-)|| S C; + Cgllé” + c3|§|, 2.3.4 Switching logic We start with z' = 1 at t = T1: 0 and for each 2'6 {1,...,N} we wait with u = a“) in the feedback loop for t 6 (TILT.- + 7'), where 7' > 0 is to be specified, and then check the inequality 2 + 6J7 _<. a,- — min{,u1%, £3117 - (aha-(RH. (2.19) where V = §2/2 + uéTPé (2.20) “This assumption is always satisfied when h(-) E 0. 44 and V = —1/éTé + 2uéTPbms + s(z2,,.., — A") + T[A,,,é + bm§]) (2.21) are estimates of V and V, and a,- is a small positive constant that is included to deal with possibly non-vanishing estimation errors. If at sometime T,“ 2 T,- + T the inequality is not satisfied, we switch to the control law 21 = 12““). 
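The switching logic just described is simple enough to state as a short routine. The sketch below is schematic Python, not the dissertation's implementation: the estimates of V and its derivative are assumed to be supplied by callables built from the extended high-gain observer, as in (2.20) and (2.21), and the design data (the dwell time τ, δ_v, the thresholds a_i, and the terms entering the right-hand side of (2.19)) are collected, as assumptions, in a parameter dictionary.

# Schematic sketch of the pre-routed, dwell-time switching supervisor.
# V_est(i) and dV_est(i) are callables returning the estimates (2.20) and
# (2.21) for the controller currently in the loop; all design data live in
# the dictionary `par` and are assumptions of this sketch.

def supervisor_step(t, i, T_i, V_est, dV_est, par):
    """Return (index, switch_time) of the controller to keep in the loop.

    t    : current time
    i    : index of the candidate controller currently applied (0-based)
    T_i  : time at which controller i was switched in
    par  : dict with 'tau' (dwell time), 'delta_v', 'a' (list of a_i),
           'mu_bar_gamma' (list of mu_bar*gamma_i), 'beta' (list of beta_i),
           'muL3' (list of mu_i*L_{3i}(R)), and 'N'
    """
    if t < T_i + par['tau']:
        return i, T_i                          # still within the dwell time
    V = V_est(i)
    dV = dV_est(i)
    # Inequality (2.19); the controller is kept as long as it holds.
    rhs = par['a'][i] - min(par['mu_bar_gamma'][i],
                            par['beta'][i] * V - par['muL3'][i])
    if dV + par['delta_v'] * V <= rhs:
        return i, T_i
    # Otherwise p is apparently not in P^(i): pre-routed move to the next one.
    if i + 1 >= par['N']:
        raise RuntimeError("all candidate controllers exhausted")
    return i + 1, t

In a simulation loop this routine would be called at every integration step after the observer update, with the returned switching time used to enforce the dwell-time condition.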
The dwell-time T should be chosen sufficiently small not only to guarantee no finite escape time but also to ensure that (e(t), s(t), 2(t)) does not leave U (R) It will be justified below in the analysis that: the solutions, initiated inside U (R0), exist for all t Z 0; (e(t),s(t),2(t)) stay in U(R); there are no more than (N — 1) switching points; there exists M > 0 such that V S M. Based on these properties we define 7" = as an upper bound for 1'. T7112 2.4 Main Result Let us assume that (ix) 53(0) 6 {2, where Q C Rn“ is compact; (x) with k,- > p,(R) and p,- > 0 chosen to satisfy (2.13) and (2.15), suppose a,- > O is chosen small enough to satisfy [01' + ML3i(R)]/[6v + fir] S 50/60 and 01 < #171 for every z'E {1,...,N}, where co > 1 and 50 S min{ucg/||P‘III,R}. Let U0 = {(e,s,z,7)) : V(€,S) S 50, “Z” S Vol/KR”, WU?) S 1"OE-2}, 45 where r0 > 0, E > 0, W(77) = nTPmnn, and Pom > 0 is the solution of Lyapunov equation 243,an + Pom/lam = —I. If the trajectories are trapped inside 00 , then ly(t) — r(t)| < 60. We show next that it is the case, after a transient period, and so the tracking error is asymptotically sufficiently small. Theorem 2.1 Consider the closed-loop system (2.11),(2.16),(2.17),(2.18) with the preceding switching logic, initiated inside Q x (2. Suppose assumptions (i)-(x) are satisfied. Then, there exists To 6 (QT), such that for every 7’ E (70,?) there exists 5,- 6 (0,5) such that for e,- 6 (0,5,) every trajectory is bounded,.enters in finite time a compact set (70 , and stays thereafter. To rewrite the closed-loop system in a singularly perturbed form, we introduce the scaled variables where xn+l = f0(pa ZL‘ _ WU), Z) + fl(pa SL‘ _ IU(t), Z) [u + wU(t)] + whn)(t) is the needed extension of the state. This extension is well-defined in view of assump- tion (vi) and is continuous between the switching times, for t E [7},7}+1), as long as the solution exists. At the switching times Ti, 7) could have a finite jump, while :i' and 23,-, for j = 1,. .. ,n, are continuous. The jump in 1),,“ is clear in view of the jump in In.” that results from control switching. A jump in 17,-, for j = 1,. .. ,n will take place if 5,- 7E 51+1- 46 Following the derivations in [3], the closed-loop system (2.11),(2.16),(2.17),(2.18) can be rewritten as e = Ame + bms, 23 Z ¢’1(t1P161313), s = f2(p,e, s, 2, d, w“, w) + f3(p,e,s, z, d,w)[u<"> + @103“), 62 51977100]: 517') = AM + 6.1209.- (13mm, 8, s, z, n, d, w. wu), C = (II (tap, 6’ 312% where Aom = A0 — [011, - - - ,an+1]T[1,0, - - - ,0] is a Hurwitz matrix; 94-) = — h(-) = d—d. (M) + f1(-){fl"" + wan} + w£"’] — ho; wi() z it“) _UU) = 216003“): é) g! C» d) _ 1184(1)“), 8, S, C: d) + ki [sat('§/#i) _ sat(8/#i)]i (11(')= (100.16%); and i E {1, - - - , N} is changed according to the switching rule described above. This system is valid between the switching times. Noting that V g —6,,V(-) — DeTe + sf3(-)[A,-(-) + \II.(-) - k.- Sate/2.)] and \Il(p(‘), e, s, (,1), d) is globally bounded in 17 due to assumption (vi), we define M = sup{—6,,V(e, s) — DeTe + sf3(-)[A,-(') + \Il,() — k, Sat(s/p,)] : (e,s,2) E U(R),i E {1,...,N},p 6 ’P,d 6 ’D,w E W,n E IR"+1}, so that as long as the solution exists and (e,s, 2) E U (R) the inequality V S M will be satisfied uniformly in 17. Due to finiteness of the dwell-time, there is no possibility for chattering (infinitely fast switching). Hence, for any fixed 7' E (0, 'F) and any choice of e, > 0 the solution 47 of the hybrid closed—loop system does exist locally. 
Let us consider first the case when the right controller is chosen. We show that the solution exists globally, there will be no more switching, the trajectories con- verge to a subset of (70, and stay thereafter. We notice that there exists to such that p 6 ”PM”. Suppose the controller is switched at t = T,- to u = 2160) and (e(T,-0),s(T,-o),2(T,-O)) 6 U(Rio), where Rio 3 R0 + (N —1)Mr. Note that since (2 C U(Ro), at t: 0, (e,s,2) start in U(RO). We show that 7' is small enough to guarantee that during the dwell-time the ‘slow states’ stay inside a compact set and the ‘fast states’ decay to small values. To prove this claim we adopt the arguments presented in [3] (see also [7, Theorem 2] and [23, pp. 616-617,713-718]) being careful in dealing with the need to estimate an ‘extra’ derivative of the output. As a result of extending the system state, we face a technical difficulty that is not typical for the analysis of peaking in high-gain observers: the perturbation term on the right-hand side of the fast subsystem is not uniformly bounded in 17. However, we can show that it can be bounded by a function that grows linearly in the fast variables. To the best of our knowledge, [7] is the only paper where a similar situation has been investigated. While Theorem 1 of [7] cannot be applied directly to our current problem (even if all e.- are chosen equal to each other) since switching between several control laws results in discontinuous right-hand sides of the slow subsystem, we are going to use the technique presented in [7]. Let us consider the time-derivative of the Lyapunov function W(17) = nTPomn along the system’s solutions: W = —17Tn/ e.- + nTbog,(). We would like to show that if ”n“ is not small then W < 0. Hence, we need to argue that _ 6$n+1 axn+l 1 6311+] g1(.)_[-a—$1-—,..,—5‘x—n‘] [A$+b$n+1—1U]+ 62 ¢() 5&0). 3*(1'). +f1(') [We + 719‘s 5? + w;] + wf,"+1) — h(-) 48 satisfies a linear growth condition in r]. Imposing at this point assumptions (vi) and (vii), it is left to notice that s: s — (7),,“ — 071771) + T(e — e) and éj = 33) ‘ 1.0)“) - sll‘j(vlj+1 — 011771) for j = 1,. . .,n - 1 are affine functions of r] uniformly in 5,, and so is h() due to assumption (viii). Hence, there exist Go > 0 and 01 > 0 such that ‘ ”RIP 2 1 2 ”7)” W<———+G +0 .. 2. —G , _ ——G' , _ 6‘0 0H77” 1”,)” 2&0 0 “7]” 25m 1 “7)“ provided (e,s,2) E U (R). Assume that 5,0 > O is sufficiently small to ensure 1/(26,-0) > Go. Noting that W 2 mega implies “77“ 2 5mm, we choose r0 = (201)2||Pom||, so that “17“ _>_ 26313-0 and hence W S —6wW, where 6.0 = [1/(251'0) —Go]/||Pom|| > 0. Since V S M in U(R), uniformly in 17, and Rio < R, we have V(e(t), so) 3 veer...» s01.» + M — T...) s R... + M < R for t E [TimTio + T]. It is easy to see that n(t) is large but finite at t = Tia and decays faster than an exponential mode of the form (1 /e§;)e‘(["'l/ 5‘0)‘ uniformly in (e, s, 2) E U (R), so that for sufficiently small values of 5,0, 1)(t) enters the set {W S r052} during a period of time To < T and is trapped inside. For this controller, inequality (2.14) with i = i0 is satisfied under the state feedback u = u(‘°). Under the output feedback it = 21“”), the right hand side of the inequality is perturbed by the term sf3(-)\II,-o(-). Due to continuity of \II,0(-), the identity \Il,o(p(‘°),e, s,(,0,d) E 0, and the fact that 1) is 0(5) for t 2 Tie + To, we 49 see that there exists r.) > 0 such that ||\II,0(-)|| S 1",), for t 2 ,0 + T0, and that n), can be made arbitrarily small by choosing 5 small enough. 
Hence, V satisfies the inequality V + (SUV S — mild/110710, {310V - HioL3io(R)} + 5% with ("1,0 E (0, (1,0). Since, during the same interval, V and V, given by (2.20) and (2.21), are 0(5) close to V and V, respectively, we conclude that for sufficiently small 5, V + 60V S — Humming), [310V — MOI/310(3)} + aio for t Z Tm + T0. Hence, there is 5,0 > 0 such that if 5,0 E (0,5,0) then as soon as the right controller is chosen no more switching occurs and the trajectories converge to the corresponding residual subset of (70. If i0 = 1 we are done. Otherwise we have to consider the case of a wrong controller in the loop. Suppose i 75 i0 and the controller is switched at t = T, to u 2 it“) and (e(fl),s(fl",-),2(T,~)) E U(R,), where R.- S R0 + (i — 1)Mr. Following exactly the same arguments as for the case of the right controller, it can be shown that (e(t),s(t), 2(t)) E U(R; + Mr) for t E [T.-,Tg + 7'] and 17(t) E {W S r052} for t Z T.- + To and they stay so until the next switching time. At time t = T.- + T, we check the inequality (2.19). If it is not satisfied we take 71+] = T.- +T and switch to the next controller. Otherwise we stay with the controller a = 21“) as long as (2.19) is satisfied. Once again, since V and V are 0(5) close to V and V, respectively, we see that, for sufliciently small 5,, V + 60V S - min{un.-, fi1V - #iL3i(R)} + Eli» for some a,— E (a,, mm), which is close enough to a,- to ensure that [512' + #iL31(Rll/[5v + fit] S 50/50 50 for some 50 > 1. Then, for V 2 50, V S —c for some c > 0. This shows that V does not increase during the period [T,-, T,+1). Moreover, if the inequality (2.19) is never violated, there will be no more switching and the trajectories reach the set 00 in finite time and stay thereafter. If at some time ta 2 T,- + r the inequality is not satisfied, we set T,“ = ta and switch to the next controller. Obviously, :1:(T,-+1) and 17(7111) are finite, therefore :i:(T,-+1) is finite and so is 1701.11). Hence, if either T,“ = 00 or i + 1 = i0, the trajectories must enter a compact subset of (70 in finite time and stay thereafter. Noting that i0 S N and using induction in i = 1,... ,io we see that: (e(Tl),s(T1),2(T1)) -—- (e(O),s(0),z(0)) E Q C U(RO), (e(T2), 8(T2),Z(T2)) E U(Ro + MT), (e(Ti), 8(T1), 2(Ti)) 6 U(Ro + (i - ler), ..., (e(T,O),s(T,-o), 2(T,-o)) E U(Ro+(io-1)Mr) C U(R,0). This completes the proof. We notice also that it follows that there can be no more than (N - 1) switching points. 2.5 Remarks We have considered the design of output feedback control for a class of para- metrically uncertain nonlinear systems. The main ingredients of our approach are logic—based switching between several candidate controllers, designed for small sub- sets of the set of parameters, and dynamic output feedback, based on a high-gain observer. We have shown that the parameters of the controller can be tuned to ensure semiglobal practical stabilization. The main technical difficulty is that the existing theory of high-gain observers with saturation [23, sec. 14.5] does not cover switching among controllers. We have shown that, provided the logic is chosen so that there is no switching during the observer peaking time, the techniques of that theory are applicable to the analysis of the closed-loop system between the switching times. An additional technical difficulty is the need to estimate an extra derivative 51 of the output in order to approximate the Lyapunov function and its derivative. 
Our derivations show that this difficulty can be resolved if, in addition to the control itself, its first partial derivatives are globally bounded with respect to the estimated state. Consequently, we have had to use a smooth saturation function.

It is worth noting that even in the case of no input and output disturbances our switching controller ensures semiglobal ultimate boundedness only, whereas, under certain additional assumptions, both the adaptive and robust controllers from [21, 22] guarantee asymptotically vanishing tracking errors. However, the adaptive control design [21] is applicable to a much more restrictive class of nonlinear systems since linearity in the unknown parameters, which is not needed for our design, is essential. It is intuitively clear, and has been recently formally proved in [10], that in the case of large parametric uncertainty the high-gain observer-based adaptive feedback control technique [21] leads to very high saturation levels as well as high controller and adaptation gains. The needed large values might be unimplementable in practice and might provoke poor robustness with respect to unmodeled dynamics and high-frequency measurement noise. The robust controller of [22] is based on worst-case analysis and therefore suffers from the same problem associated with high gains when the level of uncertainty is large. In particular, when the level of uncertainty is large, three parameters of the control law, namely the saturation level (k), the linear feedback gain (k/μ), and the observer gain (1/ε), have to be high. A very small value of μ, which characterizes the thickness of the boundary layer, would result in chattering, and a very small value of ε, which defines the observer bandwidth, would result in high sensitivity to measurement noise. In our design, the set of parameters is split into smaller subsets and the candidate controllers are designed to work only on these subsets. As a result, the level of uncertainty is reduced and we can increase both μ and ε. We presented simulation results confirming that our switching design allows us to eliminate the chattering observed under robust control in the presence of a small actuator delay. We remark that this is achieved when both controllers are designed to ensure similar output performance in ideal (disturbance-free) conditions and without any restrictions on the values of the control parameters and the control efforts. For the motivating example, we observe that the needed control effort for our design is smaller than for the adaptive and robust ones. This leads us to believe that, under severe restrictions on the available control effort, the logic-based switching design would be more practical; however, we leave this issue for future investigation.

There are many possible avenues for extending our approach. An extension to the multi-input multi-output case seems to be straightforward. It can be noticed from the proof that the sliding-mode design of the candidate controllers is not essential. Lyapunov redesign [23, Sec. 14.2] or any other robust technique could have been used instead, and this is the subject of our next chapter. In addition, judging from the simulation results, the most important extension that is needed is the implementation of a more intelligent switching logic to replace the pre-routed search. This is done in the next chapter together with some further generalizations.
Chapter 3 Lyapunov-Based State and Output Feedback Stabilization for a Class of Nonlinear Systems 3.1 Introduction In the previous chapter we have presented a solution for the tracking problem for a class of minimum-phase nonlinear systems. The key idea of Chapter 2 is to check whether a certain inequality for the derivative of the Lyapunov function is satisfied. The inequality can be verified using estimates of the states and their derivatives, as soon as the peaking time of the high-gain observer is over. If it is satisfied, then the current controller ensures convergence of the trajectories to a set where the tracking error is small. If not, then switching is necessary. It can be seen from the analysis that the sliding-mode design for the candidate controllers, presented in the previous chapter, is not essential and any Lyapunov-based design could have been used instead. Moreover, it is worth noting that for state feedback control design, a similar switching logic, with a high-gain observer providing derivatives of the states, could be used in 54 order to improve robustness and transient performance of the closed-loop system. We present this generalization below with an important improvement in the switching logic. We avoid following pre-routed search, that may result in an unacceptable transient performance when the number of candidate controllers is high. We use the available on-line information to identify the set to which parameters belong and to choose a candidate controller to be put into the loop when the inequality for the derivative of the Lyapunov function fails. However, the most important extension of our previous result is the class of systems that can be handled with our approach. This class, studied below, contains not only minimum-phase nonlinear systems with a known sign of the high-frequency gain, studied in the previous chapter, but also uniformly observable nonlinear systems with possibly unstable zero-dynamics and uncertain sign of the high-frequency gain. The motivation for the class of nonlinear systems, we introduce below, is the well-known procedure of dynamic extension [46, 45] for output feedback control. The idea is to augment thesystem with additional dynamics, typically by adding a chain of integrators at the input. The original system can be parametrically uncertain, non-minimum phase, and even with unknown high-frequency gain. The dynamic ex- tension yields a system with additional outputs and no zero-dynamics. As was noticed by Tomambé [46], implementing a high-gain observer to estimate derivatives of the output, we end-up with a state feedback control problem for the extended system. However, in the nonlinear case, designing state feedback law and then implementing it employing the estimates of a high-gain observer could lead to a shrinking region of attraction and unacceptable transient behavior due to the peaking phenomenon. This difficulty can be overcome if the control law is saturated outside the region of interest, as argued by Esfandiari and Khalil [7]. These ideas, combined, in particular, with backstepping and robust high-gain feedback design, were used by Tee] and Praly [45] to develop a powerful tool for semiglobal feedback stabilization of the class of uni- 55 formly observable systems. Their results were extended by many researchers; see e. g. [3, 23]. The available techniques apply to parametrically uncertain systems. 
However, when the parameters of the model belong to a known but comparably large compact set, it is typically assumed that all the systems in the family are minimum-phase and that at least the sign of high-frequency gain is fixed and known. The stability of the zero dynamics (often in the form of input-to-state stability) is essential when high- gain feedback is used to overcome uncertainty. Knowledge of the control direction is important in the problem of smooth stabilization [44] and discontinuous switching control seems to be the most efficient way to avoid it [28]. Advantages of allowing discontinuity in control are well-known [32] and, in many cases, there is no need to impose the structural assumptions, mentioned above, provided switching is allowed. Hence, our goal in what follows is to develop a procedure of achieving stability via switching between several candidate controllers, designed to ensure satisfactory per- formance for a small subset of parameter uncertainty. The rest of chapter is organized as follows. We start with a precise description of the class of systems. Then, we discuss the design of extended high-gain observers and the switching strategy. The formulation and proof of the main result conclude the theoretical part of the chapter. The last part of the chapter presents two illus- trative examples. The first one is a linear non-minimum phase system, where we show that it is not possible to design a single stabilizing feedback controller for the whole set of unknown parameters, and discuss the applicability of our design. The second example is a nonlinear system from [20] that can be stabilized via a high- gain observer-based adaptive control design [21]. We present simulation results and compare the performance under our design with the alternative one. We note that another example is treated in the next chapter where we consider a nonlinear systems with unknown sign of the high-frequency gain, which can be stabilized using scale- independent hysteresis-based logic [13, 16], and discuss some implementation issues 56 as well. 3.2 Class of Systems We consider the regulation problem for a nonlinear system that can be represented in the form i=AI+BMMLCUL (I: ¢'(p1$1C)u)a (3'1) y=Ca where the triple (A, B, C) E RM" x Rm” x 1R1” represents a chain of n integrators, i.e. '0 1 0 0 0' "0" 0 0 1 0 0 0 A: , B: , C=[1 0 0], 0 0 1 0 0 0 0 0 1 0 _0 0 O 0‘ -1- (19(1), -) and w(p, -) are known continuously differentiable functions; C E IRS and x = [$1, - - - ,xn]T E R" are vectors of state variables; y E IR and C are measured outputs; p E 1R” is a vector of unknown parameters; and u E R is the control input. We restrict ourselves to the single-input case only for simplicity of presentation. Extension to the multi-input case is straightforward and can be done along the lines of [3]. We assume that the differential equation for the a: subsystem is dropped if n = 0 and that n 2 2 otherwise (since a: could be incorporated into C if n = 1). Similarly, there is no C subsystem if s = 0. We do not assume any special struc- ture of ¢() and therefore we consider a class of parametrically uncertain nonlinear systems that contains in particular the following important subclasses. 57 0 Feedback linearizable systems with no zero-dynamics [s = 0] : :i: = Aa: + Bo(p, SE, a) and y = C12. 0 Uniformly observable parametrically uncertain systems. 
Specifically, the model (3.1) includes the case of an uncertain system representable by an n-th order input-output model :i: 2 A11: + ng(p, 23,11) , augmented with a series of integra- tors of v, [3, 46, 21] [cp() is independent of u and w() represents a chain of s integrators of u] : i‘ = Al‘ + B¢(P,$,C), C: ASC + 3371, y=CL where v = CSC and the triple (A3, BS, 03) E RSXS x RS“ x 112‘” is similar to the triple (A, B, 0). Here the C subsystem has to be solved on-line and all its states are naturally available for feedback. 0 General nonlinear systems with all state variables available for feedback [n = 0] : C=¢®£mi In each case we assume that the vector p belongs to a known, relatively large, compact set ’P. In order to simplify the control design, this set is partitioned into smaller subsets as in the previous chapter: N p E ’P = UP“). i=1 Our main assumption is as follows. For a given compact set Q C Rn”; of possible 58 initial conditions and every teI:{1,o--,N} there exists a continuously differentiable, bounded in :L' (saturated outside the region of interest), state feedback control law with all partial derivatives bounded in :r as well, u = U") =3 9"’(1‘,C) (3-2) such that for every p E P“) all the trajectories of the closed-loop system (3.1), (3.2) initiated inside (2 are bounded and y(t) —+ 0 as t —» 00. Moreover, we assume that a corresponding family of Lyapunov functions V(‘)(:1:, C) and auxiliary IC°° functions llilll) [AID + B¢ (15) $9 Cig(i)($’ C))] a[i’(-), ag’(-), and a?’(-) are known such that ([1le all/(”(17, C) 3x ) s V“’(:c.<) s aé” ( (3.3) 6V (”(230 +____6_C_. [El ) ¢(I7a$,C,g“)(2,g)) S _agi)( v13 6 P“) and V($,C) e U(‘)(R) with U"’(R) = {(12.0 t V“) (MD 3 R}. where R > 0 is chosen so that $2 is in the interior of n U “’(R). 561 Remarks. 1. It is not hard to modify the stability analysis of the closed-loop system for the case when the static state feedback control law (3.2) is replaced by a dynamic one. The dynamic part of the controller could be incorporated into the (- 59 dynamics of the system. 2. It is possible to extend the class of systems (3.1) to include additional dynamics C the input, and all the assumptions are uniform, in certain sense. However, we :r if they are input-to—state stable (ISS) in the region of interest, with l l as shall not pursue this case in order to avoid some technicalities and refer the interested reader to Chapter 2, where it was done for the special case where there are no C-dynarnics. Our goal is to design an output feedback control law to guarantee practical regu- lation. The family of the control laws (3.2) cannot be implemented because 0 the vector :1: is not available for feedback and e the value i“ E I, for which p E 73“.), is not known. The first problem could be resolved with the help of an appropriate high-gain observer (HGO). To deal with the second one we suggest to use ‘discontinuous adaptation’ in the form of logic-based switching. 3.3 Switching Logic First of all, we need to obtain a robust estimate :3: of the vector :1: or, equivalently, of n — 1 derivatives of the main output y, in order to apply the control law u = a“) s g(’)(:f:, c), (3.4) which is close to (3.2), provided Ila: — all is small. However, it is also useful to obtain estimates of y‘”) and C as well, to be used for parameter identification. Estimating ‘extra’ derivatives is an alternative to the filter design of classical adaptive control theory [2, Sec. 2.3] aiming to obtain a quasi-static regression model suitable 60 for parameter identification. 
In what follows we also use these extra derivatives in 1:1) Differentiating the equations of the closed-loop system (3.1), (3.4) with a fixed i order to check whether (3.3), which could be rewritten as 8V“)(:I:,C) . 8V“’(r.<) ' m ____+ , — < — '. at .r + 3C C _ 0.3 is satisfied in order to decide whether or not to switch. we obtain y(n) =2i"(W1C’§")’ (3.6) c: wi"(p,x.<, 2.2), where i 8 717A“) 11' 305,,,2(i) Ai :1: C (9(b(p,:z:,(,u) 89(i)(i‘$<) 1 6.9“)(1', C) “(2') + an “:11“, 81% 1: + 6C $(P,$,C,U ) 9 i 6 ,IL', all“) Ai 6/ ’33? 1 ’11“) *i W(p,x.c,u) 69%,0; 09"“)(20 «a + a“ “:11“, 65% I + 6C w(P1$,C,u ) ' (i) ‘ It is crucial to notice that, since the functions it“) E g")(i:,(), W, and (i) ‘ . . a—gfig—C) are globally bounded in i, the functions 45‘1"” and wl')(-) satisfy a linear growth bound in it, i.e. Hat’w. cm)“ s M, + M2112“, . . . (3.7) ”We”, 92.2)“ s M, + Man“. for some nonnegative constants M1, Mg, M3,.and M4, provided (22,0 E U (’)(R). 61 We are ready now to design a high-gain observer for the extended closed-loop system (3.1), (3.4), (3.6) following the standard procedure [3]. The variables :1: and y(") are estimated by 5: and in“, provided by $5 = ASE + Bin-+1 + H(€i)(y — Ci), (3.8) I. 71' A .1 an 1 A $n+l : 50(1)(y7€7$71‘)+ ———+—-(y_ Cf), n+1 5i where aili’(y, C ,it,:i:) is a nominal model of ¢li)(p,a', C, 2,25), which can be taken as zero and is assumed to satisfy an inequality similar to (3.7), e,’52’ ’5’? 1 I y(gi):l31 % ffilT, al, - - - ,an+1 are chosen such that the polynomial 8‘“ + an? + . . . + mg + an+1 is Hurwitz and e,- > 0 is a small parameter to be specified. Similarly, C is estimated by C2, provided by the following observer (we are going to discuss later some simpler alternatives that are more suitable for special classes mentioned in the introduction): (3.9) A) C <2 : ¢lj)(y1C1jai') + 52 v where 113]” (y, C, :it, :5) is a nominal model for 1129’ (p, 11:, C, a}, :5), which can be taken as zero and is also assumed to satisfy an inequality similar to (3.7). We assume that the initial conditions for (3.8) and (3.9) belong to a given (arbi- trary) compact set S2 C RHH”. Using high-gain observers to estimate additional derivatives brings up the same 62 challenge in the analysis that we have faced in the previous chapter‘. The next issue we are going to address is how to choose the Correct candidate control law or, equivalently, the appropriate index i E I . It is intuitively clear that if i = i" (p E 79(")) and the high-gain observers above are capable of providing good estimates, then the inequality (3.3) is satisfied up to a small error, the trajectories of the closed-loop system cannot leave the compact set U (")(R), and y(t) approaches an invariant set where it is small, provided 5,. is sufficiently small. It is crucial to notice that if (3.3) is satisfied for a value of i 75 i", we can still show ultimate boundedness (practical regulation). Obviously, we cannot check the inequality (3.5) on—line because it depends on the derivatives :i: and C. However, using the estimates provided by (3.8) and (3.9), the following inequality is easy to verify: av“) . av“) . .,- i‘ —ax—(2r)2+ 0C (2,()c25ao—a§’ (lllclll), (3.10) where 2 ,. . ,2 .. T I : Ax + BIn+1=l£2,' ' ' 7$n+ll is an estimate for it and a0 > 0 is a small constant, introduced to deal with an 0(e,) estimation errors induced by the high-gain observers. After a short peaking period, we can find out whether inequality (3.10) is satisfied. 
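To make the construction concrete, the sketch below implements, in Python, one forward-Euler step of the extended observer (3.8) for the special case n = 2 with the nominal model taken as zero, together with the test (3.10) specialized to the case without ζ-dynamics. The Lyapunov gradient, the class-K bound, and the numerical gains are placeholders of our own, not values from the dissertation.

import numpy as np

# Sketch for n = 2: extended high-gain observer (3.8) with nominal model 0,
# estimating x1, x2 and the extra derivative x3 ~ y'', plus the switching
# test (3.10) specialized to the case with no zeta-subsystem.

ALPHA = (3.0, 3.0, 1.0)        # s^3 + 3 s^2 + 3 s + 1 is Hurwitz

def observer_step(x_hat, y, dt, eps):
    """One forward-Euler step of the extended observer; x_hat = (x1, x2, x3)."""
    e1 = y - x_hat[0]
    dx = np.array([x_hat[1] + (ALPHA[0] / eps) * e1,
                   x_hat[2] + (ALPHA[1] / eps**2) * e1,
                   (ALPHA[2] / eps**3) * e1])
    return x_hat + dt * dx

def keep_current_controller(x_hat, grad_V, alpha3, a0):
    """Check (3.10): dV/dx1 * x2_hat + dV/dx2 * x3_hat <= a0 - alpha3(||x_hat||).

    grad_V : callable (x1, x2) -> (dV/dx1, dV/dx2) for the current candidate
    alpha3 : callable, the class-K bound alpha_3^(i)
    (both are placeholders supplied by the designer)
    """
    g1, g2 = grad_V(x_hat[0], x_hat[1])
    lhs = g1 * x_hat[1] + g2 * x_hat[2]
    return lhs <= a0 - alpha3(float(np.hypot(x_hat[0], x_hat[1])))

When this test fails, the next candidate has to be selected; one convenient rule, anticipated below, is to pick the index minimizing an identification residual computed from the same estimates.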
As long as it is satisfied, we maintain the controller that is currently in the loop. When it is violated, we switch to another controller and exclude the previous one from the list of candidate controllers. How to choose the next controller is a crucial decision that affects the performance of the system. We can switch systematically according to a pre-sorted list, as in Chapter 2. The performance, however, might not be acceptable if we have to switch through a long sequence of controllers before we settle at one for which the inequality (3.10) is satisfied. A more intelligent way is to use on-line 1See the discussion on p. 48. 63 information to decide on the next controller. Noting that, after the peaking period, the estimates in“ and C2 satisfy the equations iii-+1 -' (V (pa j,C,g(i)(;EI, <)) Z 0(Ei)’ C2 _ w(pai7<29(l)(i1<)) : 0(Ei)’ for the true parameter p, we can use m 2 line _ a (par, g“)(;i:,C))l + lléz — w (p.2.c,g<"(2.<)) ll as a performance index to be minimized over all possible values of p. Invoking addi- tional assumptions on how the functions d)(-) and 1/J() depend on p, we may use gradient, normalized gradient, least square, or another standard estimation algorithm [2]. Naturally, the algorithm choice will impact the performance, as discussed in [32] for a similar problem. For computational simplicity, we adopt the following approach. For each set 790), we choose a nominal parameter p(j). Assuming that the sets 130') are small, it is reasonable to expect lJ (p) — J (pU))l to be small for all p E 190). Hence we use J (pm) as an index for the set “Pm. If I is the set of indices to choose from at some switching time, the next index is taken as i = arg meip {J (p(j))}. (3.11) J Finally, we pick a small positive dwell-time constant T, which is greater than peaking time, and proceed according to the following algorithm. Step 1. Define initial time, say to := 0, the set of indices I 2:: {1,2, - - .,N}, and an arbitrary initial value for i E I, say i := 1. Step 2. Put the controller u = 12"), defined by (3.4) and (3.8), into the loop for 64 t6 ltn, to + T). Step 3. For t Z to + T we continuously check the inequality (3.10) using current estimates from (3.8) and (3.9). We keep the controller a 2 it“) in the loop until the moment of time t,- Z to + T when the inequality fails. Step 4. At t = t, we redefine := I \ {i} and choose a new value for i using (3.11). Step 5. Set to :2 t, and go back to Step 2. 3.4 Main Result The following is the summary of our approach: split the large set of parameters into a finite number of smaller subsets, 0 use any Lyapunov-function-based technique to design a smooth partial state feedback candidate controller for each subset, 0 design an extended-order high-gain observer providing the estimates for the states and their derivatives, 0 obtain the performance indices for the subsets, based on the algebraic equations that must be instantaneously satisfied by the parameters, states and derivatives of the states, 0 use the estimates, provided by the observer, to check whether the inequality for the derivative of the Lyapunov function, corresponding to the current controller, is satisfied and if it fails switch to the controller that corresponds to the smallest performance index. Theorem 3.1 Consider the closed loop—system (3.1), (3.4), (3.8), {3.9) under the switching logic described above and with initial states in the compact set 9 x (2. 
There exist positive numbers Tm,“ and Tmax, with Tm,“ < Tmax, and for every T E (Tmimeax) there exists 5 E (0,1) such that if e,- E (0,5), for i = 1, - --,N, then the trajectories will be bounded. Moreover, ly(t)] will be ultimately bounded by a bound that can be made arbitrary small, provided a0 in (3.10) and e,- are chosen sufiiciently small. Proof. Assuming that, for a fixed value of i, the controller (3.4) is put into the loop, define the scaled estimation errors A x T Tl = (FIE—flan ‘) ' ° '1xn;$n1¢(pv$7Cag(i)(i1C)) _ in+1:l 1 C _6 (i) A A £1 £1: . a £2 :¢(p,$7C!g ($,C)) _C21 S: ' 5’ 52 Using (3.8) and (3.9), it can be shown that it = Ax + B [2500,96, C, g“’(1‘,C)) + A¢‘"(p,$, 917,60], C = 2 (19,33, (,9“’($,C)) + Aib“’(p,x,C,n,5.-), (3.12) 51".? : AO‘mn + eianAfiéli) (p, I, C? T], 8i), at = Air + s.BdA¢i"(p.x, c. n. 5.). 66 where Ae"’(p,l‘, 971,81) = <15 (11,136 11‘”) — (25 (10,11, CM"), Afr/)(i)(pa$7 €177,753) : ll} (p9$1<2fl(i)) — 11) (1313319150), A (i) , _ (i), ,a _ “(1') - 5 $1 (1)11374177381) _ 1(p,$,C,I,L) $1 (yaC1$)I)9 Awl‘lp. 2. c. n, a.) = wi'lp, 2, c, 2. 2) — We, c. 2 :2), 2: = A2 + 182,.+1 + H(e.-)(y — 02). in-l-l = ((90993: C) 11(0) — ”n+1: H(e,~)(y — Ci) = H(€i)(l‘1 — 531)=l011€?-1, "n aanm, -21 I 0 Aol = a Bol = 7 —I 0 I 5 —al 1 0 0 0 —022 0 1 0 0 Am, = , Bo", = , —a,, O 0 1 0 _ -—oz,,+1 0 0 ~ - 0 . _1_ and A0), Aom are Hurwitz matrices. From the foregoing expressions for :3: and :i:, it can be seen that A¢(‘)(-), Adm”, Aol’)(-), and Aibli’(-) depend continuously on e, and using (3.7) it can be shown that there exist nonnegative constants L1, L2, L3, L4, L5, L6 such 67 that llAel"(p2:v,C.n€i)ll S L1+ Lzllvll, ”At/Mm, cw” 3 L3 + Linn”, IIAe“’(p,z,C,n,€z-)ll S lelvll, llAi/J"’(p,a:,C,n,€¢-)Il S lelvll, (3.13) provided (:r,C) E U(‘)(R) and e,- E (0,1). Let Pom = P3,; be the positive definite solution of the Lyapunov equation A0,, 0 A3,, 0 + 0 A0) 0 AT ol Pom Pom=_la and define the constants av“) , _ . . 6V“) , _ - . Ki =Sup {—_—a—(Q-L'£_C_) [A$+ B¢(p,$,€,g(z)($,<))] + _._a.(Ci_Cl w(p)x7gvg(z)(x7<))}1 where supremum is taken over all (:17, C) E U (”(R), :i: E Rn“, f) E P, and the N number R0 E (0, R) is such that Q C H U (‘)(R0). Let i=1 T _ R_R0 m“_K1+K2+o--+KN and assume that T < Tm”. First of all, it is clear from the definition of K,- and the description of the switching logic, that if at t = to the controller a“) is put into the loop and (r(to),C(to)) E U(‘)(R.-) for some R,- E [R0,R — TKg), then for t E [t0,to + T) there will be no switching and (x(t),C(t)) E U(‘)(R,- + TK,) C U (’)(R). We show that during this I time the estimation errors must decay to 0(a) values. Using (3.13), it can be shown that the derivative of the Lyapunov function 77 77 5 Mai) = Pom 68 satisfies 2 +C1 2 WS—1 51 +C2 l3] _.,) [U] [U] C C for some nonnegative constants c, and c2. Hence, 1 n2 1 0 VV _<_ — (— — Cg) - — 25; 6 28,- 6 Assuming that 1 e,- <5S min{ ,—} 4C2 - c IVs-3m vwzfia 52 [Z] - it can be shown that for some positive constants c3 and c4. Therefore, as long as W(n(t),C(t)) 2 efc4, we have Wee). 4(2)) 3 wave). (050)) 2“t — tales/5" s 53%; Ht - Was/e for some c5 > 0. After a short peaking period of the order of 0(5, log(e,)), the fast variables 17 and C decay to, and stay at, 0(e,-) values. This situation may change only with another control switching that could induce another peaking period. 
Since 5log(5) —+ 0 as 5 —> 0, for an arbitrarily chosen Tm,“ E (O,Tm,,,,) there exist 5 such that for every T E (Tmin, Tm“) and every 5,- E (0, 5), the fast variables n(t) and C(t) decay to 0(e,-) values. Consequently, since lA¢(‘)(-)l S L5llnl| and llAw(i’(')ll S lelnll for t _>_ to + T, we must have (i) , (i) . O (i) (i) . 3;; (iiC) 5: + %_(ia C) (2 = %($,C) :13 + %—($,C) C + 0(5i)' It is clear now that for any fixed a0 > 0, provided 5 > 0 is chosen small enough, if i = i” then for t Z to + T, inequality (3.10) is satisfied, no more switching occurs, 69 and the trajectories must enter an invariant set of 0“.) (0(e,.)) size with 0".)(-) being a class ICOO function. Similarly, if the inequality is satisfied for t Z to + T for some i 79 i‘ ultimate boundedness follows. However, in this case the invariant set is guaranteed to be of 0“) (0(5,-) + a0) size and, correspondingly, not only 5 but also a0 must be chosen sufficiently small to ensure practical regulation (i.e. ultimate boundedness of the solutions with an arbitrary pre-specified bound). If at some time t,- 2 to + T (3.10) fails, another candidate controller would be put into the loop immediately. It is left to notice that no more then N switching times may occur and that the choice of Tm,x guarantees that independent of whether or not (3.10) is violated, (r(t), C(t)) cannot leave the set F] U(’)(R). Q.E.D. i=1 Remarks. 1. In the special case when the system is obtained via dynamic extension from an n-th order input-output model there is no need to estimate C using the high-gain observer (3.9) since 2“) in this case represents a chain of integrators driven by the input, whose state is readily available. Also, in a slightly more general case, when some of the equations in the C subsystem are independent of the uncertain parameters, the estimation of the corresponding derivatives can be done in a simpler way. Assume that for some j we have (j = ¢j($i C? It), then we can estimate the derivative C,- by 623- : ¢j(i) C) U), though the high-gain observer estimate would work as well. It is not hard to see that the forgoing analysis would still work with such minor modification because as soon as lla: — at” = 0(e,-) we have [le — C2)” = 0(2)). Clearly, in this way 70 we may achieve certain reduction in the order of the observer. However, we note that we pay a price since the contribution of the corresponding part in the expression for the performance index J (p(‘)) must disappear. 2. For the case when the state feedback control law (3.2) ensures ultimate bound- edness only and, correspondingly, there are small positive constants added to the right-hand sides of (3.3), the main result still holds true and the change in the foregoing analysis will be insignificant. 3. It is also possible to require that the control law and all its partial derivatives be globally bounded in 2:2, - - - ,1" but not in :31 since 2:1 E y is readily available for the feedback and does not need to be estimated. This observation is especially useful when for each small subset of the set of parameters it is _ possible to design an output (possibly dynamic, as noticed above) feedback control law. In this case, saturating the estimates outside of the region of interest is not necessary. 3.5 Examples The main assumption used to develop our result for a class of uniformly observ- able (with respect to control signals) uncertain systems is the existence of a family of stabilizing candidate regulators for each small subset of the space of parameters. 
We would like to show now how the whole procedure, including the design of the candidate regulators and subdividing the initial large set of parameters, works for some challenging situations. Specifically, as stated in the introduction, we are inter- ested to see what are the advantages of switching for the problem of control design of parametrically uncertain non-minimum phase systems and systems with unknown high-frequency gainsz. 2See the next chapter for an example of this kind. 71 3.5.1 Linear non-minimum phase system Our first example is a family of linear non-mininnnn phase systems. We shall not formulate the procedure for the general class of linear control systems since our main interest is not to develop a superior technique for linear system stabilization but to develop a strategy that is easily applicable for a class of nonlinear systems as well. We shall illustrate our approach on a simple one-parameter example in order to explain what can go wrong when uncertainty is significant and why switching is essential. Consider the family of one-parameter linear systems 231 = 22, 2.32 =‘U—Zl +22, y: 22 —p21, (3.14) where [21, 2le is the state vector, v and y are the control and measured output signals, and p is an unknown parameter that belongs to a known large compact set ”P C R. It is easy to see that all the systems in the family are of relative degree 1 and are non-minimum phase for p Z 0. We are going to show that when p is known the system can be stabilized via linear dynamic feedback control with an arbitrary guaranteed rate of decay of the solutions. However, a single linear control law cannot ensure even stability for the . whole family when the set 'P is sufficiently large. After that, we show how to resolve this difficulty with the help of a logic-based switching design as proposed above. Following the idea of [46] (see also [7, 45, 3], and [23, Section 145]) we will employ dynamic extension and pole-placement techniques. The system (3.14) can be represented by the second-order differential equation v-v+y=v-pv- 72 Extending the dynamics by one integrator at the input and introducing a new vector of state variables [x C lT = [2:1 1:2 C lT with 131:3!) $223]! (:7), we obtain a linear system in the form (3.1): :i: _ a: _ y _ :r l,l=.4(,.)l +3., llzcl l, (3.15) C C C C where 0 1 0 0 _ _ _ 10 0 A(p)= —l 1 -p, B: 1, Czl l. 0 0 1 0 0 0 1 It is easy to check that the pair (A(p), R) is (uniformly with respect to p) control- lable and the pair (0, A(p)) is (uniformly with respect to p) observable for every p E IR. We design first a state feedback control law for (3.15) u = K(q)[2 ClT = k1(q)21 + k2(q)22 + k3(q)<, (3.16) where q is a nominal value for the unknown parameter p, K (q) is designed so that [24(q) + BK (q) + 01] is Hurwitz, and a > 0 is the desired rate of decay of the solutions. This control law can be represented as a proper transfer function from y to v. It is equivalent to the dynamic output feedback law v = w + k2(r1)y. 21') = 121(2):) + k3(q)[w + k2(q)yl (3.17) 73 i for the original system (3.14). It is worth noting that we are able to design this filter, which allows us to avoid computation of derivatives of the output, essentially using linearity. This trick would not work for a more general nonlinear system but we would still proceed using a high-gain observer and saturating the control law outside of the region of interest. The necessary and sufficient conditions for stability of the closed-loop system characteristic polynomial Ap(3) = d8tlSI _ (AU?) 
+BK)] 2' 83+ (‘1 — k2 — k3)S2 + (1 - 191+ka +k3)s+pk1— R3, are a1=—1—k2—k3>0, a2(p)=1—k1+pk2+k3>0, a3(p) = pkl — 1:3 > 0, a1 - a2(p) — a3(p) > 0, where for convenience, we omitted the dependence of K on the fixed value q. Sup- pose {0,1} C ’P. The conditions a3(1) > 0 and a2(p) > 0 imply a3(1) + a2(p) = 1 + pkg > 0 and, correspondingly, 1 1 ——>a>-—a In M The conditions 03(p) > 0 and a2(0) > 0 imply a3(p) + a2(0) = 1 + (p — 1)k1 > 0 and, correspondingly, 1 —— > k > . lp-H 1 u—u Finally, the conditions a; > 0 and a2(0) > 0 imply —k2 > k3 + 1 > It; and, correspondingly, 1 1 —1+—a>a>-1————2 u-u M 74 Therefore, 2 (11 = —(k3+1)—k2 > ——, (12(0) = (k3+1)—k1 > -a;,(()) > —1 Ipl w—u’ w—u and al ' (12(0) — (13(0) > 0 lPl'lP—1l_1_lP_—1l> cannot be satisfied with, say, p = 3. We conclude that any fixed choice of K in (3.16) cannot guarantee stability of the closed-loop system for all possible values of p when P is sufficiently large. It is clear that had P been small we could have stabilized the system with a single feedback gain K. Therefore, it is reasonable to subdivide P into smaller subsets, design a regulator for each subset separately and use estimates of higher derivatives for fast parameter identification. In order to organize switching, we need to construct a family of Lyapunov functions satisfying a linear analog of (3.3). Toward this, let P(q) be the solution of the Lyapunov equation P(q)[4(q) + BK ((1) + 01] + [4(2) + BKM) + 01 lTP(<1) = -1- El as a Lyapunov function for the system (3.15), (3.16) and compute V: l1 3:] = -20V- [:6] (I-Gp(<1)) [2:], C C C C where Gq(p) = P(q)[/l(p) — A(q)l + [A(p) — A(q)lTP(q). It is easy to see that the We take T P(a) 1' V0510: T P(<1) matrix Gq(p) is affine in p and Gp(p) = 0. Consequently, there exists 6,,(p) > 0 75 such that q — 6,,(p) < p < q + 6,,(p) implies V S —20V. Since P is compact, there exist a number N, a finite set of points {p(ll, - - - , p‘Nl} C P and positive numbers 61, - - - ,6N such that N P C UP“), where P“) 2 (pm — 5,,p‘i) — (5,) for i = 1, - - -,N. i=1 Now, assuming that for each i we follow the foregoing procedure to design K (pm) we must have that V S —20V along the solutions of the closed-loop system (3.15), (3.16) with q = p“), provided p E P“). We use the following linear high-gain observer to estimate 2: and C : 1 . 3 . ~. . 3 - , $1=$2+E(y—xl), $2=m3+-E—2(y—a:1), $3 = —(y—:cl) (3.18) where e > 0 is sufficiently small so that . i2 . . 53 = . = 33 + 0(5), C2 = k1(q)y + ($200532 + k3(P)C = C + 0(5) 1133 as soon as a short period of peaking is over. Note that, since (3.16) is implemented dynamically in the form of (3.17), the signal u is not available and therefore we cannot take C2 = u. Finally, after the dwell-time period we need to check the inequality T (I: Sao- .J P(q) I A T .’L‘ P(q) I C +0 A 2 C with q = p“), where i is the index of the controller in the loop and a performance 76 index for (3.11) can be taken as J(p(j)) : 153 + 151— 272 + p(j)C — C2 . 3.5.2 Nonlinear system and adaptive control The purpose of the next example is to compare our logic-based design with a more classical adaptive design. We borrow the example from [20, 21] and consider the system: 0 2 . e 212224-621, 222u+z31 23=—23+21, 31:31, where 21,22,23 are the states, 0 is an unknown parameter that belongs to the interval [0, 2], u is the control input, y is the output, and el = 21 — y. is the output error with y, being a reference signal. We use an adaptive state feedback controller design from [21]: a = v = Sat ($75) (—Ké - u — 3) +1}- 26(3)?) 
+ £12 + 3233') + 3153)), (3.19) 3‘ . .. .. 1 22 A r. 0 = 7 Pro) (Sat (.25) (4(eTPb)(yy + y + yy))) , (3.20) él 3? - yr 91 2 where e = 52 = y- y. = Q2/E , K = 4 , P is the solution of the é3 if — yr Q3/52 3 Lyapunov equation for the closed-loop system with d = 0, i.e., 0 1 0 0 0 —2 —1 0 0 0 0 1 P + P 1 0 —4 = 0 —1 0 , —2 —4 —3 0 1 —3 0 0 —1 77 Proj is the smoothed projection [39, 21]: [1 + (2 — é)/0.1lp ifél > 2 and 1,9 > 0 Proj(,9) = [1+ (6— 0)/0.1l1p ifé < 0 and p < 0, 1,9 otherwise the saturation levels are indicated as subscripts, and the scaled high-gain observer is implemented as3 541 = (12 + 3(61 _ (11), 542 = (13 + 3(61— QI)1 5‘13 = 81- (11- (3-21) As an alternative design, we propose to use (3.19) with the extended high-gain observer 8‘ll: (12 + 4(61— C11), €fl2 = (13 + 6(81— 91), 542 = 94 + 4(61- (11), 544 = 81 — 91 (3.22) and switch the value of 0 among the elements of the finite set {0, 0.01, . - - , 1.99, 2} when the Lyapunov inequality 0 1 0 —1 0 0 227‘ 0 0 1 1022—27“ 0 —1 o éSao —2 —4 —3 0 0 —1 with 5 = [52 53 q4/e3 lT is violated using the performance index * ,. c. c . 1 c2 ,. 2. «1(9) = lq4/€3+y§3’ -u-y+y-v-29(yy+y +111!)- We take 0 = 1, 'y = 10, e = 0.001, T = 0.03, a0 = 1 and compare performance under adaptive control (3.19), (3.20), and (3.21) (dashed lines) and our logic-based 3We use a high-gain observer that is different from the one in [21]. All the eigenvalues of the observer characteristic polynomial are taken real to avoid unnecessary control oscillations during the transient. 78 switching control (solid lines) for three different reference trajectories: y, = 0.5, y. = 0.1sin(t), and yr = 0.5 sin(t). We show the tracking error e1, the control signal u, the control signal for an extended system v, and the value of the parameter 9. e1(t) u(t) 1.5 1 - \\ /'~ {I \‘ Ol ,’{A\ ...... 1." \r\ l {ill \..___ -_.._,. 0 ’l \‘\\\ _1 l . .4; . . ..... l. ,. l"~,\ . 1 .. .. . ,. - ll , .. , . ....... 0.5 \\\ . I 2 l; \\ ‘ l I] \'\,‘ \ -3-l ”1', 0* ‘r l a“ -41, ........ .............. , lll!" -O.5 4 -5 V/ 2 r 0 5 10 0 5 1O V(t) 9“) 100 - . 2.5 r i , , 2.1.. _ ............ 50’ ...... .. ...... . | : ‘ 1.5,..‘,(\. . _ ...... m’r-‘r—Tr-r-T- olr/m ‘ 1‘], l 0.5, . , ............. .......... 1 01 ................ . ............. ‘. l '100 ‘ ‘ -o.5 , 4 0 5 1O 0 5 10 Figure 3.1: Simulation results for y, = 0.5 : the dashed line is for adaptive control and the solid line is for switching control. The performance under the logic-based controller is the same in all three cases with only one switching time right after the dwell-time period to a controller that corresponds to a value of d that is very close to 9. This is not surprising since our identification procedure here results in approximating the solution of the linear algebraic equation J (0) = 0. For the adaptive controller, we can see that since the first reference signal is not persistently exciting é converges to a wrong value and since the second one corresponds to a very small excitation level convergence of the 79 91(1) U(t) 2 1 ’/.\ of If} \ \‘ ,N _- I ‘N‘; ,,,,,, / A \ -< 1.5 .,l \ l i ‘_ —.._- . \ ' 1": “\X -1 I all 1 ’ ‘ Z“ i l I) \'1‘\ __ l . o 5 X“ ’ 2 l i)“ 1. \ ; -3 l l) \ l 1' o >-—- . 1 1 -4 . _ I; ’0.5 _5 ‘1 5 10 0 5 10 Wt) 9 (t) 100 . . 2.5 . 2 , .1 ..... , . .................. ., 50 L. I l’ 1 1 .5 ’ l l l A l . ‘ l l I \ ‘— ____________ o L ’V’\:\ 4 i 1 '. L_l|f...l.ll.....‘.. .1 ..'..‘.. ..._..,h_ ...,-..m'. ,; _,",,,.'_.;,,,;__~,_',,., __;,__.5 l1 . . l ; : 1' . _ 0.5. ....... _50 . . . . . . , . . ...... 
parameter is too slow; it also takes longer for the parameter estimate to converge when the third reference signal is tracked. However, the closed-loop performance under both strategies is close, and the only advantages achieved by the switching controller are a slightly better settling time and better identification in the case of a not persistently exciting reference signal.

Figure 3.2: Simulation results for y_r = 0.1 sin(t): the dashed line is for adaptive control and the solid line is for switching control.

Figure 3.3: Simulation results for y_r = 0.5 sin(t): the dashed line is for adaptive control and the solid line is for switching control.

3.6 Remarks

Extending the idea of the previous chapter, we have proposed a new design technique for output and state feedback control to practically stabilize a class of parametrically uncertain uniformly observable nonlinear systems. Compared to our earlier work and to many results available in the literature, we do not impose any kind of minimum-phase assumption, nor do we assume that the sign of the high-frequency gain is parameter independent. The sliding-mode-based design of the candidate controllers has been replaced by a fairly general Lyapunov-based design. A new identification-based switching logic has been developed instead of the pre-routed search. As a result, the transient time has been shortened significantly, improving the overall performance of the closed-loop system.

The developed procedure has been demonstrated on an example of an uncertain non-minimum-phase linear system and an example of a nonlinear system previously reported in the literature. However, what is not clear from the analysis presented in Chapters 2 and 3 is the robustness of the proposed procedure with respect to (non-differentiable) measurement noise and sampling, which are unavoidable if the controller is implemented digitally. We investigate this issue in the next chapter via simulation.

Chapter 4

Nonlinear Example: Case Study

In this chapter we consider an example of a nonlinear system with an unknown high-frequency gain in the strict-feedback form [23, p. 595]. Our goals are:

• to compare the performance achieved via our design and via a recently proposed logic-based switching strategy [34, 14, 15, 28], and

• to study some practical issues related to digital implementation and measurement noise.

4.1 Model

The following system has been investigated in [13, pp. 76-82] (see also [15])

ż₁ = p₁z₁³ + p₂z₂,   ż₂ = u,   y_c = z₁ − r,   (4.1)

where z₁ and z₂ are the state variables, p = [p₁, p₂]^T is a vector of unknown parameters that belongs to the finite set

P = ∪_{i=1}^{42} {p^(i)} = {−1, −0.9, ..., 0.9, 1} × {−1, 1} ⊂ R²,

u is the control input, r is a constant reference, and y_c is the controlled output. The design procedure presented in Chapter 3 is applicable for solving both the output and the state feedback regulation problems.
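Since the plant (4.1) is only second order, it is convenient to have its right-hand side available as a small function for the simulations that follow. A minimal Python sketch is given below; the integrator call and the chosen parameter values are just one possible setup, not the settings used in the thesis.

import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of the plant (4.1):
#   z1' = p1*z1^3 + p2*z2,   z2' = u,   y_c = z1 - r.
def plant_rhs(t, z, p1, p2, u_of_t):
    z1, z2 = z
    u = u_of_t(t, z)                # any (sampled or continuous) feedback law
    return [p1 * z1**3 + p2 * z2, u]

if __name__ == "__main__":
    r = 0.0
    u_open_loop = lambda t, z: 0.0  # open loop, quick sanity check only
    sol = solve_ivp(plant_rhs, (0.0, 1.0), [0.1, 0.0],
                    args=(-1.0, 1.0, u_open_loop), max_step=1e-2)
    print("y_c at t = 1:", sol.y[0, -1] - r)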
4.2 Continuous Controller Design

4.2.1 Lyapunov-based switching output feedback

We start with the case when only z_1 is available, i.e., y = y_c = z_1 - r. To transform (4.1) into the form (3.1), let

x_1 = z_1 - r, \qquad x_2 = p_1 z_1^3 + p_2 z_2, \qquad x = [x_1, x_2]^T,

so that

\dot{x}_1 = x_2, \qquad \dot{x}_2 = 3 p_1 (x_1 + r)^2 x_2 + p_2 u.    (4.2)

We use feedback linearization and pole placement to derive the control law

u = -[\omega^2 y + 2\eta\omega \hat{x}_2 + 3 q_1 (y + r)^2 \hat{x}_2]/q_2,    (4.3)

where q = [q_1, q_2]^T \in \mathcal{P} is a nominal value for p, ω > 0, η > 0.25, and x̂_2 is an estimate of x_2, provided by the high-gain observer

\dot{\hat{x}}_1 = \hat{x}_2 + \frac{3}{\varepsilon}(y - \hat{x}_1), \qquad
\dot{\hat{x}}_2 = \hat{x}_3 + \frac{3}{\varepsilon^2}(y - \hat{x}_1), \qquad
\dot{\hat{x}}_3 = \frac{1}{\varepsilon^3}(y - \hat{x}_1),    (4.4)

where ε > 0 is sufficiently small. The controller parameters ω and η are chosen to ensure acceptable transient performance of the closed-loop system (4.2), (4.3) with x̂_2 = x_2 and q = p, i.e.,

\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\omega^2 x_1 - 2\eta\omega x_2.    (4.5)

The Lyapunov function candidate can be taken as

V(x_1, x_2) = \omega(1 + \eta) x_1^2 + x_1 x_2 + x_2^2/\omega,

so that along the trajectories of (4.5)

\dot{V} = -W(x_1, x_2) = -\omega^2 x_1^2 - (4\eta - 1) x_2^2.

Therefore, we start with i = 1, for which the first candidate controller (4.3), saturated outside the region of interest, with q = p^{(1)}, is given by

u = -\mathrm{Sat}\!\left( \bigl[\omega^2 y + \bigl(2\eta\omega + 3 p_1^{(1)} (y + r)^2\bigr) \hat{x}_2\bigr] / p_2^{(1)} \right),

where Sat(.) is a smooth saturation function. We put this controller into the loop and switch to another one as soon as the dwell-time period is over and the inequality

\frac{\partial V}{\partial x_1}(\hat{x}_1, \hat{x}_2)\, \hat{x}_2 + \frac{\partial V}{\partial x_2}(\hat{x}_1, \hat{x}_2)\, \hat{x}_3 + W(\hat{x}_1, \hat{x}_2) \le a_0

fails, where the output of the high-gain observer (3.18) is used. It is possible to order the candidate controllers arbitrarily and to use pre-routed search, as in Chapter 2. Alternatively, we can take

J(p^{(j)}) = \bigl| \hat{x}_3 - 3 p_1^{(j)} (x_1 + r)^2 \hat{x}_2 - p_2^{(j)} u \bigr|

as the performance index for (3.11) and follow the switching logic from Chapter 3, presented on p. 64.

4.2.2 Lyapunov-based switching state feedback

In the state feedback case, y = [z_1 - r, z_2]^T, we let

\zeta_1 = z_1 - r, \qquad \zeta_2 = z_2 + q_1 r^3/q_2, \qquad \zeta = [\zeta_1, \zeta_2]^T,

so that

\dot{\zeta}_1 = p_1 (\zeta_1 + r)^3 + p_2 (\zeta_2 - q_1 r^3/q_2), \qquad \dot{\zeta}_2 = u.    (4.6)

Following the idea of [13], we use the regulator

u = -[\omega^2 \zeta_1 + 2\eta\omega\, \varphi(q, \zeta) + 3 q_1 (\zeta_1 + r)^2 \varphi(q, \zeta)]/q_2,

and proceed, as in the output feedback case, to obtain the saturated candidate controllers and the switching logic.

4.2.3 Scale-independent hysteresis-based switching logic

The third controller follows the scale-independent hysteresis-based switching logic of [13, 15]. A multi-estimator generates monitoring signals μ^{(j)}(t), j = 1, ..., 42, and h > 0 is a fixed hysteresis constant. As soon as the corresponding hysteresis inequality fails, we redefine

i := \arg\min_{j \in \{1, \ldots, 42\}} \mu^{(j)}(t)

and switch to the corresponding candidate controller. It is shown in [13] that, when there are no disturbances, the solutions of the closed-loop hybrid system are well defined, switching has to stop in finite time (with some value i = i_0 in {1, ..., 42}; it is possible but not necessary that i_0 = i*), all signals are bounded, and lim_{t -> infinity} |y(t)| = 0.

4.3 Practical Implementation

We discuss here some modifications of the controllers designed above that are needed for a practical implementation.

4.3.1 Digital implementation

Most controllers are implemented in practice using digital computers [5]. The main reason is that some sensors provide the measurements with a certain sampling period. In particular, instead of y(t) = z_1(t) - r for (4.3), (4.4), the only output signal available for feedback is

y[k] = y(k\tau_s) = z_1(k\tau_s) - r,

where τ_s > 0 is a small sampling period of the sensor and k is a non-negative integer. In addition, even when the signal from the sensor is not sampled, some actuators are not capable of following continuous signals; therefore, we use a zero-order hold, i.e., we do not change the control signal during the sampling period. In particular, instead of (4.3), we implement

u(t) = u[k] = -\bigl(\omega^2 y[k] + 2\eta\omega \hat{x}_2[k] + 3 q_1 (y[k] + r)^2 \hat{x}_2[k]\bigr)/q_2 \quad \text{for } k\tau_u \le t < (k+1)\tau_u,

where τ_u is the sampling period of the controller and x̂_2[k] is provided by a sampled-data digital implementation of (4.4).
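To make the ingredients above concrete, the following Matlab fragment evaluates one saturated candidate control law, the Lyapunov-inequality test monitored by the supervisor, and the identification-based performance index. It is only a sketch: the smooth saturation (a tanh with level M) and the numerical samples for y and the observer estimates are our own illustrative choices, while w = 1.0, eta = 0.7, r = 1.0, and a0 = 0.1 are the values used later in Section 4.4.

% Gains and data (y, xhat are example samples; M is an illustrative level).
w = 1.0;  eta = 0.7;  r = 1.0;  a0 = 0.1;  M = 10;
p_i  = [-0.1; 1];                 % one candidate from P (here p^(31))
y    = 0.2;                       % measured output sample
xhat = [0.2; -0.1; 0.05];         % [x1hat; x2hat; x3hat] from the observer

% Smooth saturation, one possible realization of Sat(.).
sat_smooth = @(v) M*tanh(v/M);

% Saturated candidate control law, cf. (4.3) with q = p^(i).
u = -sat_smooth((w^2*y + (2*eta*w + 3*p_i(1)*(y + r)^2)*xhat(2))/p_i(2));

% Lyapunov monitoring: V = w*(1+eta)*x1^2 + x1*x2 + x2^2/w and
% W = w^2*x1^2 + (4*eta-1)*x2^2; the supervisor switches when this
% estimated derivative condition fails after the dwell time.
dVdx1   = 2*w*(1+eta)*xhat(1) + xhat(2);
dVdx2   = xhat(1) + 2*xhat(2)/w;
Wval    = w^2*xhat(1)^2 + (4*eta - 1)*xhat(2)^2;
lyap_ok = (dVdx1*xhat(2) + dVdx2*xhat(3) + Wval <= a0);

% Identification-based performance index J(p^(j)) for any candidate pj.
J = @(pj) abs(xhat(3) - 3*pj(1)*(y + r)^2*xhat(2) - pj(2)*u);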
Discretization of the high-gain observer can be done in various ways [5]. Here we follow Euler's forward formula and use

\hat{x}_1[k+1] = \hat{x}_1[k] + \tau_o \Bigl( \hat{x}_2[k] + \frac{3\,(y[k] - \hat{x}_1[k])}{\varepsilon} \Bigr),
\hat{x}_2[k+1] = \hat{x}_2[k] + \tau_o \Bigl( \hat{x}_3[k] + \frac{3\,(y[k] - \hat{x}_1[k])}{\varepsilon^2} \Bigr),
\hat{x}_3[k+1] = \hat{x}_3[k] + \tau_o \Bigl( \frac{y[k] - \hat{x}_1[k]}{\varepsilon^3} \Bigr),

where τ_o is the discretization step. For simplicity, we assume that τ_s = τ_u = τ_o. In order to implement the Lyapunov-based logic, we also assume that the dwell-time constant τ is an integer multiple of τ_u.

Similarly, we find discretizations for both dynamic state feedback controllers described in Section 4.2.

4.3.2 Suppressing measurement noise

Another issue that has to be analyzed is noise in the measurements provided by the sensors. To obtain a more realistic model, we have to take into account the fact that the measured output is always contaminated by noise. We assume that the output signal of each dynamic controller is subject to a random additive error. In particular, the signal y used in the dynamic output feedback controller (4.3), (4.4) is not equal to z_1 - r but

y = z_1 - r + \sigma_n w(t),

where w(t) is a measurable function of t with |w(t)| ≤ 1 and σ_n is a parameter that defines the level of the measurement noise.

It is not hard to see that, since w(t) is not necessarily differentiable, the change of variables used in Chapter 2 to deal with smooth output disturbances is not applicable. It is also intuitively clear that, since the high-gain observer (4.4) essentially estimates two derivatives of the (formally non-differentiable) signal y(t), a certain level of noise (one that is not suppressed by the sampling) may cause the control signal to saturate and the closed-loop system to become unstable.

A commonly used practical way to avoid this instability is to pass all the measurements through a stable low-pass linear filter of sufficiently high order. Following this practice, we are going to use a standard Butterworth filter, provided by the 'Analog Filter Design' block from the Signal Processing Blockset of Matlab/Simulink, with τ_s as the sampling time. In particular, for the dynamic output feedback controller we will have

\mu_a^{n_o}\, y^{(n_o)} + b_{n_o-1} \mu_a^{n_o-1}\, y^{(n_o-1)} + \cdots + b_1 \mu_a\, \dot{y} + y = z_1 - r + \sigma_n w(t),

where n_o is the order of the filter; b_{n_o-1}, ..., b_1 are the positive coefficients such that 1/(s^{n_o} + b_{n_o-1} s^{n_o-1} + ... + b_1 s + 1) is the standard transfer function of the Butterworth filter, designed to suppress all frequencies greater than 1; and μ_a > 0 is a small parameter (the cut-off frequency of the filter is 1/μ_a rad/sec). We note that this parameter has to be tuned carefully using extensive simulations: when μ_a is too small the filter is useless, whereas when it is too large the filter may hurt the transient and degrade the overall performance of the closed-loop system even in the noise-free case.
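As a stand-alone illustration of this filter (which the thesis implements with the 'Analog Filter Design' block), the sketch below builds a state-space realization for n_o = 4 and μ_a = 0.01, the values used in Section 4.4, and advances it by one forward-Euler step of size τ_s. The coefficients are the standard normalized fourth-order Butterworth denominator coefficients; all variable names are ours.

% Fourth-order low-pass filter of Section 4.3.2 with cut-off 1/mu_a rad/s.
mu_a  = 0.01;                       % filter time-scale parameter
tau_s = 1e-4;                       % sensor sampling / integration step
% Normalized Butterworth denominator: s^4 + b3 s^3 + b2 s^2 + b1 s + 1.
b1 = 2.6131;  b2 = 3.4142;  b3 = 2.6131;
% Companion-form realization of
%   mu_a^4 y'''' + b3 mu_a^3 y''' + b2 mu_a^2 y'' + b1 mu_a y' + y = v,
% with state xf = [y; y'; y''; y'''] and input v = z1 - r + sigma_n*w(t).
A = [ 0          1           0           0;
      0          0           1           0;
      0          0           0           1;
     -1/mu_a^4  -b1/mu_a^3  -b2/mu_a^2  -b3/mu_a ];
B = [0; 0; 0; 1/mu_a^4];
xf = zeros(4,1);                    % filter state
v  = 0.2;                           % example noisy measurement sample
xf = xf + tau_s*(A*xf + B*v);       % one forward-Euler step
y_filtered = xf(1);                 % signal fed to the controller/observer

In a full simulation this update runs at every sensor sample.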
Similarly, we can modify the state signal used in the dynamic state feedback controllers, i.e., (4.7), (4.8) and the one with the hysteresis-based scale-independent logic, by implementing two such filters. It is worth noting, however, that the stability proofs for the latter controller, presented in [13, 15], are not valid even in the presence of smooth exponentially vanishing disturbances, and it is not clear how to modify them.

Due to the presence of noise, we are also going to modify Step 4 of the algorithm presented on p. 64 and use

Step 4. At t = t_s, if I ≠ {i}, we redefine I := I \ {i}. Otherwise, we restore the initial value of I, defining I := {1, ..., 42}, and increase a_0 in (3.10), so that a_0 := 10 a_0. Then, we choose a new value for i using (3.11) as before.

The reason for this ad hoc modification is as follows. It is intuitively reasonable to expect that the measurement noise, ignored at the design stage, must result in some additional, non-observer-related errors in (3.10) even in the best-case scenario. We also note that the initial value of the constant a_0 might be crucial for the level of acceptable noise.

4.4 Simulation Results

All the simulations are done using Matlab/Simulink. The Simulink diagrams and the Level 1 Matlab S-function files used to simulate the sampled-data state feedback controller with the modified logic of Chapter 3 are given in the appendix. The programs for all the other regulators are similar. For numerical integration, we have used 'ode23s' (stiff/Mod. Rosenbrock) and have checked some results with 'ode45' (Dormand-Prince).

We present below simulation results for the different sampled-data controllers discussed above, discretized with the sampling period τ_s = τ_u = τ_o = 0.0001 and with r = 1.0, ω = 1.0, and η = 0.7. We note that slightly larger values of the sampling period would still result in the same performance as the continuous implementation in a disturbance-free case, while slightly smaller values of the sampling periods would significantly increase the computation time. We show the system's regulated state z_1(t) = x_1(t) + r = ζ_1(t) + r (column 1), the corresponding generated control input u(t) (column 2), and the index i(t) of the controller put into the loop (column 3). The controllers are numbered so that

(p_1^{(i)}, p_2^{(i)}) = (-1 + 0.1(i - 1),\ -1) \ \text{if } i < 22, \qquad (p_1^{(i)}, p_2^{(i)}) = (-1 + 0.1(i - 22),\ 1) \ \text{if } i \ge 22.

4.4.1 Noise-free case

In Figures 4.1 and 4.2 we show results for the two output feedback controllers. The second rows correspond to the pre-routed search of Chapter 2, and the first rows correspond to the identification-based logic of Chapter 3. In the case when the index of the system's parameters is small, i.e., close to the initial value of the index, we note that, due to the longer transient (e.g., in the case of i* = 10 in Fig. 4.1), pre-routed search results in a slightly larger overshoot. However, we note that the control signal in the case of pre-routed search is smoother due to a more gradual change in the control amplitude. In the case when the initial value of the index, i = 1, is not close to the real one (e.g., in the case of i* = 40 in Fig. 4.2), the difference in performance is significant. We can see not only a larger overshoot and a longer settling time but also an undershoot. The latter is due to the fact that, when pre-routed search is used, several candidate controllers put into the loop are designed for the wrong sign of the high-frequency gain.

Figure 4.1: Improving pre-routed search: the case of i* = 10. (Columns: x_1(t), u(t), i(t); rows: Lyapunov-based logic, pre-routed search.)

Figure 4.2: Improving pre-routed search: the case of i* = 40. (Columns: x_1(t), u(t), i(t); rows: Lyapunov-based logic, pre-routed search.)

For the rest of the chapter we use only the Lyapunov-based design with the logic of Chapter 3. In Fig. 4.3, the first and the second rows show results for the Lyapunov-based switching logic in the output feedback and state feedback cases (τ = 0.03, ε = 0.001, and a_0 = 0.1), respectively, and the third row shows results for the scale-independent hysteresis-based logic (h = 0.1 and λ = 0.5). We show here the results for i* = 30.
The parameters in this case are identified correctly; this is always the case for the hysteresis-based logic and is the case for most values of the parameters when the Lyapunov-based switching logic is used. It is worth noting, however, that for all tried values the shortest settling time is achieved with the output feedback controller and the longest with the hysteresis-based logic. In the hysteresis-based logic, we need to allow a transient period for the multi-estimator to provide a reliable estimate of the parameters. Such a period is typically much longer than the dwell time of the Lyapunov-based design. During this transient period the system may operate under a wrong controller, which may be one that is designed for the wrong sign of the high-frequency gain. We have also observed, for all tried values of p, that a_0 can be chosen small enough to ensure exact identification in the no-disturbance case for both Lyapunov-based designs of Chapter 3.

Figure 4.3: Noise-free performance: the case of i* = 30. (Columns: x_1(t), u(t), i(t); rows: Lyapunov-based output feedback, Lyapunov-based state feedback, scale-independent hysteresis-based logic.)

4.4.2 Acceptable level of noise

In all our simulations, the signal w(t) is generated by the 'Uniform Noise Generator' block from the 'Communications Blockset' of Simulink with sampling period τ_s and initial seed 12345 or 12341.

Below we compare performance in the presence of noise for the state feedback controller with the switching logic proposed in Chapter 3 and for the state feedback controller with the hysteresis-based scale-independent logic proposed in [13]. For simplicity, we assume that both state variables are contaminated with noise of the same level.

When the level of noise is sufficiently low, the performance of the two control laws is similar to the noise-free case. The low-noise case corresponds to σ_n ≤ 10^{-5} without low-pass filters and to σ_n ≤ 10^{-4} with two filters having n_o = 4 and μ_a = 0.01. Performance of the closed-loop system is unacceptable when σ_n reaches

- 10^{-4} under the Lyapunov-based state feedback without any filters,

- 10^{-2} under the Lyapunov-based state feedback with two low-pass filters having n_o = 4 and μ_a = 0.01,

- 10^{-1} under the hysteresis-based logic without any filters, and

- 10^{1} under the hysteresis-based logic with two low-pass filters having n_o = 4 and μ_a = 0.01.

Figure 4.4: Performance of the state feedback controllers without and with filters: the case of i* = 20 and σ_n = 10^{-4}.

In Fig. 4.4 we show the simulation results for all four closed-loop systems when σ_n = 10^{-4}. The first and second rows correspond to the Lyapunov-based state feedback without and with filters, and the third and fourth rows correspond to the hysteresis-based logic without and with filters, respectively.

In order to illustrate our modification of the switching logic of Chapter 3, in Fig. 4.5 we show performance under the Lyapunov-based state feedback without filters for different levels of noise.
Figure 4.5: Performance of the state feedback controller without filters in the presence of noise of different levels: the case of i* = 40.

When σ_n = 10^{-4}, performance is almost the same as in the noise-free case. When σ_n = 10^{-3}, no candidate controller ensures that the Lyapunov inequality is satisfied with a_0 = 0.1; however, after we switch through all of them and redefine a_0 := 1, stabilization is successful, although with a fairly large steady-state error. When σ_n = 10^{-2} the system goes unstable.

Finally, we show how the filters prevent the system from instability in the case of the hysteresis-based switching. In Fig. 4.6 we compare performance under the hysteresis-based logic without and with filters in the presence of noise with the significant noise level σ_n = 1. We see that the filters save the closed-loop system from instability, although the effect of the noise is not suppressed completely, as can be seen from the steady-state error.

Figure 4.6: Performance of the controller with hysteresis-based scale-independent logic without filters and with filters having μ_a = 0.01 and n_o = 4: the case of i* = 40 and σ_n = 1.

4.5 Remarks

- We have seen that the regulators based on the logic of Chapter 3 (summarized on p. 64) outperform the ones based on pre-routed search.

- We note that, when the sampling period is sufficiently small, the sampled-data versions of the controllers, designed to work without sampling, perform as expected.

- It seems that under ideal (disturbance-free) conditions the output feedback controller outperforms both state feedback controllers. This is unusual, though we should note that the state feedback controllers in our case are designed on the basis of the output feedback design (for the known-parameter case) and not the other way around, as is done more traditionally for less challenging control problems.

- It is also clear from the simulation results that, in the presence of measurement noise, introducing a low-pass filter does increase the acceptable level of noise.

- It follows from our simulation results that a much higher level of noise can be tolerated when the hysteresis-based scale-independent switching logic is used. However, we note that no output feedback control design is available with that approach. Thus the Lyapunov-based switching output feedback control design proposed in this thesis is, to the best of our knowledge, the only known solution for the problem when not all the state variables of the system (4.1) are measured.

Chapter 5

Conclusions

In this thesis, we have addressed the problems of robust tracking and stabilization by output and state feedback for a class of single-input single-output nonlinear systems with large-scale parametric uncertainty. The main ingredients of our approach are Lyapunov-function-based dynamic feedback control design, high-gain observers, and logic-based switching with dwell time. The general summary of the developed approach is as follows.

1. If needed, use the dynamic extension procedure and an appropriate change of coordinates to transform the system into a special form, where the unavailable states (if any) are derivatives of the measured outputs.

2. To simplify the design, split the large set of parameters into a finite number of significantly smaller subsets.

3. Use a Lyapunov-function-based technique to design a smooth partial state feedback candidate controller for each subset of parameters.
4. Design an extended-order high-gain observer providing estimates of all the unmeasured states of the transformed system and of their derivatives.

5. Obtain a performance index for each subset, based on the algebraic equations that must be instantaneously satisfied by the parameters, the states, the derivatives of the states, and the control signal.

6. Develop a dwell-time supervisor to orchestrate the switching. We use the estimates provided by the observer to check whether the inequality for the derivative of the Lyapunov function corresponding to the current controller is satisfied and, if it fails, switch to the controller that corresponds to the smallest performance index.

7. Develop a sampled-data realization of the observer and of each candidate controller.

8. For each subset of parameters, use numerical simulations without the logic to tune the gains of the candidate controllers and of the corresponding observers. Use numerical simulations to tune the dwell-time constant, which has to be small enough to avoid a finite escape time and large enough to allow the transient in the observer, provoked by peaking, to pass.

9. If sensor noise is expected to be an issue, use simulations to design fast low-pass filters that have almost no influence on the overall closed-loop system performance.¹

¹This is possible since the filter dynamics are equivalent to the fast sensor dynamics that have recently been analyzed in [24] in the noise-free case.

We have gradually introduced this approach and provided some mathematical results and numerical examples to support it. We have given some motivation and have solved the practical tracking problem for a class of minimum-phase nonlinear systems in Chapter 2. There we have assumed that the relative degree and the sign of the high-frequency gain are known and have used a continuous sliding-mode-based design with high-gain observers.

In Chapter 3, we have shown that our design procedure is applicable to a much larger class of nonlinear systems and to a general Lyapunov-function-based design. We have proved that practical stabilization is achieved.

A challenging nonlinear example, with a known solution for the state feedback case involving a recently proposed scale-independent hysteresis-based logic, has been the subject of our investigation in Chapter 4. We have shown how to use our technique to design an alternative state feedback controller and a new output feedback controller. We have shown how to obtain sampled-data realizations of our controllers (Matlab/Simulink programs are shown in Appendix A) and how to design a low-pass Butterworth filter in order to suppress the measurement noise. In addition, we have compared the performance of the regulators numerically.

Extending our approach to the multi-input multi-output case and enlarging the class of systems are possible directions of future research. From a practical point of view, it is of interest to investigate the performance of our technique for a class of electromechanical systems and to design some hardware experiments. However, the most promising and interesting direction of our future research is providing theoretical support (stability proofs) for the discretization and low-pass filter design procedures investigated numerically in Chapter 4.

Appendix A

Listings of the programs

We show here the Simulink diagrams and the Level 1 Matlab S-function files used to simulate the sampled-data state feedback controller with the modified logic of Chapter 3, which have been used to generate the simulation results presented in Chapter 4.
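Before the listings, the following stand-alone Matlab sketch restates the modified dwell-time supervisor (the logic of Chapter 3 with the modified Step 4 of Section 4.3.2) in compact form. It is an illustration only, with our own function name and calling convention; the Level 1 S-function lbsl.m listed below realizes the same logic inside Simulink.

% One supervisor update. Inputs: current time t, lyap_ok (true if the
% estimated Lyapunov-derivative inequality (3.10) holds for the current
% controller), and J, a 42-vector of performance indices (3.11) computed
% from the observer estimates and the control signal.
function [i, a0] = supervisor_step(t, lyap_ok, J)
    persistent i_curr t_last I_curr a0_curr
    if isempty(i_curr)                   % initialization
        i_curr = 1;  t_last = 0;
        I_curr = 1:42;  a0_curr = 0.1;
    end
    tau_dwell = 0.03;                    % dwell time used in Section 4.4
    if (t - t_last >= tau_dwell) && ~lyap_ok
        % Modified Step 4: prune the current candidate; if it is the only
        % one left, restore the full set and relax the threshold a0.
        if ~isequal(I_curr, i_curr)
            I_curr = setdiff(I_curr, i_curr);
        else
            I_curr  = 1:42;
            a0_curr = 10*a0_curr;
        end
        % Pick the admissible candidate with the smallest performance index.
        [~, k]  = min(J(I_curr));
        i_curr  = I_curr(k);
        t_last  = t;
    end
    i  = i_curr;
    a0 = a0_curr;
end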
The main Simulink program (state_d.mdl) is split into 8 parts as follows.

Page  System Name
1     state_d
2     state_d/Logic
3     state_d/Logic/performance indeces
4     state_d/Lyapunov Inequality
5     state_d/Sensor1
6     state_d/Sensor1/filter
7     state_d/Sensor2
8     state_d/Sensor2/filter1

Figure A.1: state_d.mdl, page 1

Figure A.2: state_d.mdl, page 2

Figure A.3: state_d.mdl, page 3

Figure A.4: state_d.mdl, page 4

Figure A.5: state_d.mdl, page 5

Figure A.6: state_d.mdl, page 6

Figure A.7: state_d.mdl, page 7

Figure A.8: state_d.mdl, page 8

function [sys,x0,str,ts] = hgold(t,x,u,flag,tau,vareps)
%
switch flag,

  %%%%%%%%%%%%%%%%%%
  % Initialization %
  %%%%%%%%%%%%%%%%%%
  case 0,
    [sys,x0,str,ts] = mdlInitializeSizes(tau);

  %%%%%%%%%%
  % Update %
  %%%%%%%%%%
  case 2,
    sys = mdlUpdate(x,u,tau,vareps);

  %%%%%%%%%%
  % Output %
  %%%%%%%%%%
  case 3,
    sys = mdlOutputs(x);

  %%%%%%%%%%%%%
  % Terminate %
  %%%%%%%%%%%%%
  case 9,
    sys = []; % do nothing

  %%%%%%%%%%%%%%%%%%%%
  % Unexpected flags %
  %%%%%%%%%%%%%%%%%%%%
  otherwise
    error(['unhandled flag = ',num2str(flag)]);

end
%end hgold

%==========================================================================
% mdlInitializeSizes
% Return the sizes, initial conditions and sample times for the S-function.
%==========================================================================
%
function [sys,x0,str,ts] = mdlInitializeSizes(tau)
%
sizes = simsizes;

Figure A.9: hgold.m (High-gain observer), page 1
sizes.NumContStates  = 0;
sizes.NumDiscStates  = 2;
sizes.NumOutputs     = 1;
sizes.NumInputs      = 1;
sizes.DirFeedthrough = 0;
sizes.NumSampleTimes = 1;

sys = simsizes(sizes);
x0  = [0; 0];
str = [];
ts  = [tau 0];
% end mdlInitializeSizes

%==========================================================================
% mdlUpdate
% Handle discrete state updates, sample time hits, and major time step
% requirements.
%==========================================================================
function sys = mdlUpdate(x,u,tau,vareps)

sys = [x(1) + tau*(x(2) + 2/vareps*(u - x(1)));
       x(2) + tau/vareps^2*(u - x(1))];
%end mdlUpdate

%==========================================================================
% mdlOutputs
% Return the output vector for the S-function
%==========================================================================
function sys = mdlOutputs(x)
%
sys = x(2);
%end mdlOutputs

Figure A.10: hgold.m (High-gain observer), page 2

function [sys,x0,str,ts] = lbsl(t,x,u,flag,tau,a_0)
%
global P_curr N_curr aa_0 i_p
%
N_par = 42;
%
for i_j = 1:N_par,
  P_init(i_j,1) = i_j;
  if i_j < 22,
    P_init(i_j,2) = -1 + 0.1*(i_j-1);
    P_init(i_j,3) = -1;
  else
    P_init(i_j,2) = -1 + 0.1*(i_j-22);
    P_init(i_j,3) = 1;
  end
end
%
switch flag,

  %%%%%%%%%%%%%%%%%%
  % Initialization %
  %%%%%%%%%%%%%%%%%%
  case 0,
    [sys,x0,str,ts] = mdlInitializeSizes(tau,a_0,N_par,P_init);

  %%%%%%%%%%
  % Update %
  %%%%%%%%%%
  case 2,
    sys = mdlUpdate;

  %%%%%%%%%%
  % Output %
  %%%%%%%%%%
  case 3,
    sys = mdlOutputs(t,x,u,N_par,P_init);

  %%%%%%%%%%%%%
  % Terminate %
  %%%%%%%%%%%%%
  case 9,
    sys = []; % do nothing

  %%%%%%%%%%%%%%%%%%%%
  % Unexpected flags %
  %%%%%%%%%%%%%%%%%%%%

Figure A.11: lbsl.m (Lyapunov-based switching logic), page 1

  otherwise
    error(['unhandled flag = ',num2str(flag)]);

end
%end lbsl

%==========================================================================
% mdlInitializeSizes
% Return the sizes, initial conditions, and sample times for the S-function.
%==========================================================================
%
function [sys,x0,str,ts] = mdlInitializeSizes(tau,a_0,N_par,P_init)
%
global P_curr N_curr aa_0 i_p
%
P_curr = P_init;
N_curr = N_par;
aa_0   = a_0;
i_p    = 1;
%
sizes = simsizes;
sizes.NumContStates  = 0;
sizes.NumDiscStates  = 1;
sizes.NumOutputs     = 3;
sizes.NumInputs      = N_par+1;
sizes.DirFeedthrough = 1;
sizes.NumSampleTimes = 1;

sys = simsizes(sizes);
x0  = i_p;
str = [];
ts  = [tau 0];
% end mdlInitializeSizes

%==========================================================================
% mdlUpdate
% Handle discrete state updates, sample time hits, and major time step
% requirements.
%==========================================================================
function sys = mdlUpdate

Figure A.12: lbsl.m (Lyapunov-based switching logic), page 2

sys = [];
%end mdlUpdate

%==========================================================================
% mdlOutputs
% Return the output vector for the S-function
%==========================================================================
%
function sys = mdlOutputs(t,x,u,N_par,P_init)
%
global P_curr N_curr aa_0 i_p
%
if u(1)