Michigan State University

This is to certify that the
thesis entitled

DIFFERENTIATION WITH HIGH-GAIN OBSERVERS IN THE
PRESENCE OF MEASUREMENT NOISE

presented by

Luma Vasiljevic

has been accepted towards fulfillment
of the requirements for the
M.S. degree in Electrical and Computer
Engineering

Major Professor's Signature

Date

MSU is an affirmative-action, equal-opportunity employer
DIFFERENTIATION WITH HIGH-GAIN OBSERVERS
IN THE PRESENCE OF MEASUREMENT NOISE

By

Luma K. Vasiljevic
A THESIS
Submitted to
Michigan State University
in partial fulfillment of the requirements
for the degree of
MASTER OF SCIENCE
Department of Electrical Engineering
2007
ABSTRACT
DIFFERENTIATION WITH HIGH-GAIN OBSERVERS
IN THE PRESENCE OF MEASUREMENT NOISE
By
Luma K. Vasiljevic
The error in estimating the derivative(s) of a noisy signal by using a high-gain
observer is studied and quantified. The error is bounded in terms of the infinity
norms of the noise and a derivative of the signal. The error bound is independent of
the frequency or derivatives of the noise. Guidelines are presented for the observer
gain design when it is used for on-line differentiation. Analytical and simulation
results are presented.
I dedicate this thesis to my husband Njegovan for
his tremendous support throughout my graduate studies
iii
ACKNOWLEDGMENTS
I am grateful to my adviser Professor Hassan Khalil for bringing this topic to my
attention and for his priceless guidance throughout the research and writing of this
thesis.
Contents

List of Figures
List of Tables

1 Introduction
  1.1 Motivation
  1.2 Higher Order Sliding Modes
    1.2.1 The Idea of Sliding-Mode Control
    1.2.2 First and Second Order Sliding Modes
  1.3 High Gain Observers
    1.3.1 Stabilization

2 Differentiation with High-Gain Observer in the Presence of Measurement Noise
  2.1 High-Gain Observer as a Differentiator
  2.2 The Differentiation Error in the Absence of Noise
  2.3 The Differentiation Error for Noisy Signals
  2.4 Computation of the Bound
  2.5 Conservatism of the Bound
  2.6 Computer Simulation
    2.6.1 The Effect of the Eigenvalues and the Order of the Observer on the Error Bound
    2.6.2 Comparison Between the Actual Estimation Error and the Error Bound

3 A Comparison Between High-Gain Observers and Exact Robust Sliding-Mode Differentiators
  3.1 Robust Exact Differentiation via Sliding Mode Technique
  3.2 Arbitrary Order Robust Exact Differentiator
  3.3 In Comparison to High-Gain Observers
  3.4 Computer Simulation

4 Conclusions

Bibliography
List of Figures

2.1 The error bound as a function of the high-gain observer parameter ε.
2.2 The function g(t).
2.3 The gain of the high-gain observer as a function of frequency.
2.4 The estimation error as a function of time. The first derivative of a sinusoidal signal is estimated with a second order high-gain observer with ε = 0.01.
2.5 The estimation error as a function of time. The derivatives of a sinusoidal signal are estimated with a third order high-gain observer with ε = 0.01.
2.6 The estimation error as a function of time. The first derivative of a sinusoidal signal is estimated with a fourth order high-gain observer with ε = 0.01.
2.7 The noise μ(t) in the time and frequency domains.
2.8 The error bound for the derivatives as a function of ε for a 4th order high-gain observer with multiple real eigenvalues.
2.9 The estimation error as a function of time. The first derivative of a noisy sinusoidal signal is estimated with a 2nd order high-gain observer. The magnitudes are ||μ||∞ = 0.012, ||u^(2)||∞ = 1, and ε = ε₁^opt.
2.10 The estimation error as a function of time. The derivatives of a noisy sinusoidal signal are estimated with a 3rd order high-gain observer. The magnitudes are ||μ||∞ = 0.012, ||u^(3)||∞ = 1, and ε = (ε₁^opt + ε₂^opt)/2.
2.11 The estimation error as a function of time. The derivatives of a noisy sinusoidal signal are estimated with a 4th order high-gain observer. The magnitudes are ||μ||∞ = 0.012, ||u^(4)||∞ = 1, and ε = (ε₁^opt + ε₂^opt + ε₃^opt)/3.
2.12 Actual error compared to the error bound (2.12) for estimating the first and second derivatives of u as described by (3.4) with a 3rd order high-gain observer.
2.13 Actual error compared to the error bound (2.12) for estimating the first and second derivatives of u given by (2.19) with a 3rd order high-gain observer.
2.14 Actual error compared to the error bound (2.12) for estimating the first and second derivatives of u(t) = sin(t) with a 3rd order high-gain observer.
3.1 Sliding-mode/high-gain estimate of the first and second derivatives of a noisy sinusoid.
3.2 The estimate of the derivatives of sin(10t) + μ(t) with a high-gain/sliding-mode observer. The noise magnitude is ||μ||∞ = 0.1.
3.3 The estimate of the derivatives of (3.14) with a high-gain/sliding-mode observer. The solid line depicts the actual derivative, whereas the dashed line is the estimate.
3.4 Sliding-mode/high-gain estimate of the derivatives of u given by (3.16). The solid line depicts the actual derivative, whereas the dashed line is the estimate.
List of Tables

2.1 The expression Q_{k+1}^(1−k/n) P_{k+1}^(k/n) for k = 1 and n = 2.
2.2 The expression Q_{k+1}^(1−k/n) P_{k+1}^(k/n) for 1 ≤ k ≤ 2 and n = 3.
2.3 The expression Q_{k+1}^(1−k/n) P_{k+1}^(k/n) for 1 ≤ k ≤ 3 and n = 4.
2.4 The expression Q_{k+1}^(1−k/n) P_{k+1}^(k/n) for 1 ≤ k ≤ 4 and n = 5.
3.1 A comparative summary of the features of high-gain observers versus robust exact differentiators developed by A. Levant.
3.2 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(t) and ||μ||∞ = 0.01.
3.3 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(0.5t) and ||μ||∞ = 0.01.
3.4 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(5t) and ||μ||∞ = 0.01.
3.5 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(10t) and ||μ||∞ = 0.01.
3.6 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(50t) and ||μ||∞ = 0.01.
3.7 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(0.1t) and ||μ||∞ = 0.1.
3.8 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(t) and ||μ||∞ = 0.1.
3.9 The percentage differentiation error for a 6th order high-gain observer and a 6th order HOSM differentiator. The differentiated signal is sin(10t) and ||μ||∞ = 0.1.
1 The constants P_k for 2 ≤ k ≤ n, 2 ≤ n ≤ 10, for multiple real eigenvalues.
2 The constants Q_k for 2 ≤ k ≤ n, 2 ≤ n ≤ 10, for multiple real eigenvalues.
Chapter 1
Introduction
Differentiation of signals in real time is an old and well-known problem. An ideal
differentiator would have to differentiate measurement noise with possibly large
derivatives along with the signal. In [25], the differentiation error for a higher-
order sliding mode differentiator is quantified in terms of the~ magnitude of the
noise; that is, a bound on the error is derived that depends on the magnitude of
the noise and not its derivative or frequency. Such quantification is useful because
it provides insight into the signal to noise ratio of the differentiated signal.
In this thesis we derive a similar bound on the differentiation error when a
high-gain observer is used to estimate the derivative(s) of a signal in the presence
of measurement noise. The error bound depends on the infinity norm of the noise
and the infinity norm of the derivative of the highest estimated derivative of the
signal.
Ideally, when no noise is present the error in estimating the derivative with
a high-gain observer shrinks to zero as the observer gain grows to infinity. How-
ever, large gain undesirably magnifies the measurement noise. There is a trade-off
between the closeness of the estimate to the true derivative in the absence of noise
and the noise amplification in the presence of noise. This trade-off is studied
and quantified in this thesis. Guidelines are provided for designing the observer
gain. Results are illustrated by numerical simulation. Simulation results show
that the performance of the high-gain observer with properly designed gain is at
least comparable to the performance of the sliding-mode observer in the presence
of measurement noise.
1.1 Motivation
In many cases, the differentiation problem is reduced to observation and
filtering problems. When the frequency bands of the signal and the noise are
known, band-pass filters are used to attenuate the noise, and the transfer
function of the differentiator is approximated by the transfer function of a
linear system. When stochastic models for the signal and noise are available,
detection and linear filtering theory can be utilized [19], [20], [8]. If no
information is available about the bandwidth or stochastic properties of the
noise, it is useful to have insight
on the accuracy a differentiator could achieve. In [25] A. Levant points out
that no differentiator that is exact on input signals whose (n−1)th derivative
has a Lipschitz constant L, producing the ith derivative, where i < n, can
provide accuracy better than L^(i/n) ||μ||∞^((n−i)/n) in the presence of
uniformly bounded noise μ(t)¹. Indeed, being exact on signals whose (n−1)th
derivative has a Lipschitz constant L, the differentiator must be exact on
sinusoidal noise with amplitude ||μ||∞ and frequency (L/||μ||∞)^(1/n),
producing an error of at least L^(i/n) ||μ||∞^((n−i)/n) in the presence of
noise. That is, a second-order differentiator that is exact, in the absence of
noise, on signals whose first derivative has a Lipschitz constant L must make
an error of at least √(L ||μ||∞) in the presence of uniformly bounded
measurement noise μ(t). In [25] and [24] A. Levant proposes differentiators
that compute the ith derivative exactly, for signals whose (n−1)th derivative
has a Lipschitz constant L, in the absence of noise, and produce an error
bounded by K L^(i/n) ||μ||∞^((n−i)/n), for some K > 0, in the presence of
noise.

¹ ||μ||∞ = sup_{t≥0} |μ(t)|
Levant’s differentiators utilize Higher Order Sliding Modes (HOSM).
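Levant's worst-case argument can be checked with a short numeric sketch. The values L = 1, A = 0.01, n = 2, and i = 1 below are illustrative choices, not taken from the thesis: the script builds the worst-case sinusoidal noise μ(t) = A sin(ωt) with ω = (L/A)^(1/n) and confirms that its nth derivative has magnitude exactly L (so it is indistinguishable from an admissible signal), while its ith derivative sits exactly at the accuracy floor L^(i/n) ||μ||∞^((n−i)/n).

```python
# Numeric check of the accuracy floor L^(i/n) * ||mu||^((n-i)/n).
# Worst-case noise: mu(t) = A*sin(w*t) with w = (L/A)^(1/n), so the
# n-th derivative of mu has amplitude A*w^n = L.
L = 1.0      # Lipschitz constant of the (n-1)th derivative (illustrative)
A = 0.01     # noise amplitude ||mu||_inf (illustrative)
n = 2        # differentiator order
i = 1        # derivative being estimated

w = (L / A) ** (1.0 / n)          # worst-case noise frequency
amp_nth = A * w ** n              # amplitude of mu^(n): equals L
amp_ith = A * w ** i              # amplitude of mu^(i): the error floor
floor = L ** (i / n) * A ** ((n - i) / n)

print(amp_nth, amp_ith, floor)    # floor = sqrt(L*A) for n = 2, i = 1
```

For n = 2 and i = 1 the floor reduces to √(L||μ||∞), exactly the quantity quoted above.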
In this thesis, we show analytically that the same accuracy can be achieved
with high-gain observers in the presence of noise, given an appropriate choice
of the observer gain. We use computer simulation to demonstrate our results and
to compare the performance of HOSM observers and high-gain observers in the
presence of measurement noise. In this chapter, we briefly review the concepts
and ideas of HOSM and high-gain observers, omitting formal proofs. We refer the
reader to appropriate sources for the formal theory. In Chapter 2, we show how
the high-gain observer can be used as a differentiator in the presence of
measurement noise, whereas Chapter 3 is a comparison between HOSM
differentiators and high-gain observer-based differentiators. Finally, Chapter
4 is a brief summary of the results presented in this thesis.
1.2 Higher Order Sliding Modes
1.2.1 The Idea of Sliding-Mode Control
An obvious way to achieve a control task under heavy uncertainty is to keep
some constraints by "brute force." The simplest way to keep an equality
constraint is to react immediately to any deviation of the system, steering it
back to the constraint by a sufficiently energetic effort. This approach leads
to so-called sliding modes [18]. To illustrate this idea, we present the
following example.
Example — Stabilization Utilizing Sliding Mode
For the system (1.1) in regular form [22]

  η̇ = f_a(η, ξ),
  ξ̇ = f_b(η, ξ) + g(x)u + δ(t, η, ξ, u),        (1.1)

where² x = (η, ξ) ∈ R² is the state, u ∈ R is the control input, and f_a, f_b
and g are sufficiently smooth functions in a domain D ⊂ R² that contains the
origin. We assume that f_a and f_b are known, whereas g and δ are uncertain. We
also assume that g is positive and bounded away from zero; that is, g ≥ g₀ > 0.
The function δ is piecewise continuous in t and sufficiently smooth in
(η, ξ, u) for (t, η, ξ, u) ∈ [0, ∞) × D × R. Suppose that in the absence of δ
the origin is an open-loop equilibrium point. Our goal is to design a state
feedback control law to stabilize the origin for all uncertainties in g and δ.

We begin by designing the sliding manifold (constraint) s = ξ − φ(η) = 0 such
that, when the motion is restricted to the manifold, the reduced-order model

  η̇ = f_a(η, φ(η))

has an asymptotically stable equilibrium point at the origin. The design of
φ(η) amounts to solving a stabilization problem for the system

  η̇ = f_a(η, ξ)

with ξ viewed as the control input. We assume that we can find a stabilizing
continuously differentiable function φ(η) with φ(0) = 0. Next, we design u to

²For simplicity, we consider a single-input 2nd order system. See [22] for
multi-input higher-order systems.
bring s to zero in finite time and maintain it there for all future time.
Toward that end, we write the ṡ equation:

  ṡ = f_b(η, ξ) − (∂φ/∂η) f_a(η, ξ) + g(x)u + δ(t, η, ξ, u)        (1.2)

In the absence of uncertainty, that is, when δ = 0 and g is known, taking
u = −g⁻¹[f_b − (∂φ/∂η) f_a] results in ṡ = 0, which ensures that the condition
s = 0 can be maintained for all future time. Assume that

  | f_b(η, ξ) − (∂φ/∂η) f_a(η, ξ) + δ(t, η, ξ, u) | / g(x)
      ≤ ρ(x) + k₀|u|,   ∀(t, η, ξ, u) ∈ [0, ∞) × D × R,        (1.3)

where the continuous function ρ(x) ≥ 0 and the constant k₀ ∈ [0, 1) are known.
Utilizing V = (1/2)s² as a Lyapunov function candidate for (1.2), we obtain

  V̇ = s g(x)u + s[f_b(η, ξ) − (∂φ/∂η) f_a(η, ξ) + δ(t, η, ξ, u)]
     ≤ g(x){su + |s|(ρ(x) + k₀|u|)}.

Take

  u = −β(x) sign(s),        (1.4)

where

  β(x) ≥ ρ(x)/(1 − k₀) + β₀,   ∀x ∈ D        (1.5)

and β₀ > 0. Then,

  V̇ ≤ g(x)[−β(x) + ρ(x) + k₀β(x)]|s|
     = −g(x)(1 − k₀)[β(x) − ρ(x)/(1 − k₀)]|s|
     ≤ −g(x)β₀(1 − k₀)|s| ≤ −g₀β₀(1 − k₀)|s|.

The inequality V̇ ≤ −g₀β₀(1 − k₀)|s| ensures that all trajectories starting off
the manifold s = 0 reach it in finite time and that those on the manifold
cannot leave it.
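The example above can be illustrated with a minimal simulation sketch. The choices below are illustrative, not from the thesis: f_a(η, ξ) = ξ and g ≡ 1 (so k₀ = 0), φ(η) = −η gives s = η + ξ, and δ(t) = 0.5 sin t is an assumed matched disturbance, so β(x) = |ξ| + 0.5 + β₀ dominates the uncertainty as required by (1.5).

```python
import math

# Sliding-mode stabilization sketch for the double integrator
#   eta' = xi,  xi' = u + delta(t),  s = eta + xi  (phi(eta) = -eta).
# Illustrative data: g = 1 (k0 = 0), |delta| <= 0.5, beta0 = 1, so the
# control (1.4) uses beta(x) = |xi| + 0.5 + 1 as in (1.5).
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

dt, T = 1e-3, 10.0
eta, xi, t = 1.0, 0.0, 0.0
while t < T:
    s = eta + xi
    delta = 0.5 * math.sin(t)           # unknown matched disturbance
    beta = abs(xi) + 0.5 + 1.0          # rho(x) + beta0
    u = -beta * sign(s)                 # control law (1.4)
    eta += dt * xi
    xi += dt * (u + delta)
    t += dt

print(abs(eta + xi), abs(eta))  # s is driven near 0, then eta decays
```

After s reaches zero (in roughly a second here), the motion on the manifold obeys η̇ = −η, so the state decays to the origin despite the unmatched-to-nothing, purely matched disturbance; only chattering of order dt remains.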
Higher order sliding modes (HOSM) generalize the basic sliding mode idea by
acting on the higher-order time derivatives of the system's deviation from the
constraint, instead of influencing the first deviation derivative as in
standard sliding modes. A number of such controllers are described in the
literature [6, 7, 13, 26, 28]. HOSM is a motion on a discontinuity set of a
dynamic system understood in the Filippov sense [17]. The sliding order
characterizes the dynamic smoothness degree in the vicinity of the mode. If the
task is to keep a constraint given by the equality of a smooth function s to
zero, the sliding order is the number of continuous total derivatives of s
(including the zeroth one) in the vicinity of the sliding mode. Hence, the rth
order sliding mode is determined by the equalities

  s = ṡ = s̈ = ⋯ = s^(r−1) = 0,

forming an r-dimensional condition on the state of the dynamic system. The
words "rth order sliding" are often abridged to "r-sliding."
Real Sliding vs. Ideal Sliding
For the smooth time-varying dynamic system described by the equation
51': = f(t,;1:,u), (1.6)
where a: is a state variable that takes values on a smooth manifold X,t is time
and u E Rm is control. The design objective is the synthesis of a control u
such that the constraint s(t,.i:) = 0 holds. Here, 3 : R X X :—-> Rm and both
f and s are smooth enough mappings. A motion that takes place strictly on
the constraint manifold s = 0 is called an ideal sliding [26]. We also informally
call every motion in a small neighborhood of the constraint manifold a real sliding
[32, 33]. The 13f —order sliding mode (as in example above) exists due to infinite
frequency of the control switching. However, due to switching imperfections this
frequency is finite. The sliding mode notion should be understood as a limit of
motions when switching imperfections vanish and the switching frequency tends
to infinity [17, 1, 2]. The definitions below were introduced in [26].
Definition 1  Let (t, x(t, ε)) be a family of trajectories indexed by ε ∈ R
with common initial condition (t₀, x(t₀)), and let t ≥ t₀ (or t ∈ [t₀, T]).
Assume that there exists t₁ ≥ t₀ (or t₁ ∈ [t₀, T]) such that on every segment
[t′, t″], where t′ ≥ t₁ (or on [t₁, T]), the function s(t, x(t, ε)) tends
uniformly to zero as ε tends to zero. In this case we call such a family a real
sliding family on the constraint s = 0. We call the motion on the interval
[t₁, ∞) (or [t₁, T]) a steady-state process.

The term control algorithm is used for a rule to form the control signal [26].

Definition 2  A control algorithm is called an ideal sliding algorithm on the
constraint s = 0 if it yields an ideal sliding in finite time for every initial
condition.

Definition 3  A control algorithm depending on a parameter ε ∈ R is called a
real sliding algorithm on the constraint s = 0 if, with ε → 0, it forms a real
sliding family for every initial condition.
1.2.2 First and Second Order Sliding Modes

Preliminaries

Consider the closed-loop control system

  ẋ = f(t, x, u)        (1.7)
  u = U(t, x, ξ)        (1.8)
  ξ̇ = ψ(t, x, ξ)        (1.9)

where U is a feedback operator and ξ is a special auxiliary parameter
('operator variable' as in [12, 11]). The initial value of ξ may be defined as
a special function ξ(t₀) = ξ₀(t₀, x₀) or considered to be arbitrary. Equations
(1.8) and (1.9) constitute what is called a binary control algorithm [11, 12].
Let s(t, x) be the desirable constraint, with s ∈ C¹ and ∂s/∂x ≠ 0.

Definition 4  Equations (1.8)/(1.9) are called a first/second order sliding
algorithm on the constraint s = 0 if a stable sliding mode of the first/second
order on the manifold s = 0 is achieved and, for every initial condition
(t₀, x₀), the state x is transferred to the sliding manifold in finite time.

First order sliding is characterized by a piecewise continuous function U and
ψ = 0. Second order sliding algorithms are given by a continuous function U and
a bounded discontinuous function ψ; therefore, the sliding problem is solved by
means of a continuous control [29, 30, 31, 13, 14, 15, 16].
For simplicity, we take s ∈ R and u ∈ R, with t, s(t), and u(t) available. The
goal is to force the constraint s(t) to vanish. Assume the conditions:

1. In (1.7) the function f is C¹. The function s is C². We assume that x ∈ X,
   where X is a smooth finite-dimensional manifold. Any solution of (1.7) is
   well defined for all t provided that the control u(t) is continuous and
   satisfies |u(t)| ≤ ρ < 1 for each t.

2. There exists u₁ ∈ (0, 1) such that, for any continuous function u with
   |u| > u₁, we have su > 0.

   Remark: This condition implies that there is at least one t such that
   s(t) = 0, provided u has a certain structure. Consider the differential
   operator

     L_u(·) = ∂(·)/∂t + ∂(·)/∂x · f(t, x, u),

   where L_u is the total derivative with respect to (1.7) when u is considered
   a constant. Define ṡ as

     ṡ(t, x, u) = L_u s(t, x) = s′_t(t, x) + s′_x(t, x) f(t, x, u).

3. There are positive constants s₀, K_M, K_m, and u₀ < 1 such that |s(t, x)| < s₀
   implies

     0 < K_m ≤ ∂ṡ/∂u ≤ K_M

   for all u, whereas the inequality |u| > u₀ implies ṡu > 0.

4. The set {t, x, u : |s(t, x)| < s₀} is called the linearity region. There is
   a constant C₀ such that, in the linearity region, the inequality
   |L_u L_u s(t, x)| < C₀ holds.

5. The region |s| ≤ s₀ − δ, where 0 < δ < s₀, is called the reduced linearity
   region.
Sliding Algorithms

The algorithm

  u = { −sign(s),  if |s/ε| > 1,
        −s/ε,      if |s/ε| ≤ 1,

forms a real sliding algorithm of the first order.
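The first-order algorithm above can be sketched on the scalar dynamics ṡ = u + d, with an assumed constant disturbance d (all values illustrative): inside the boundary layer |s| ≤ ε the law reduces to the linear feedback u = −s/ε, so s settles to the residual dε. That O(ε) residual is precisely the accuracy of a real, rather than ideal, sliding mode.

```python
# First-order real sliding sketch on s' = u + d with the saturated law
#   u = -sign(s) if |s/eps| > 1,   u = -s/eps otherwise.
# d = 0.3 is an assumed bounded disturbance (|d| < 1); eps = 0.05.
eps, d = 0.05, 0.3
dt, T = 1e-3, 5.0
s, t = 1.0, 0.0
while t < T:
    u = -1.0 if s > eps else (1.0 if s < -eps else -s / eps)
    s += dt * (u + d)
    t += dt

print(s)  # settles at d*eps = 0.015, an O(eps) steady-state residual
```

Shrinking ε tightens the residual toward the ideal sliding s = 0, at the price of an increasingly stiff (high-gain) feedback inside the layer.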
For α_M > α_m > 0, α_m > 4K_M/s₀, α_m > C₀/K_m, and K_m α_M − C₀ >
K_M α_m + C₀, the "twisting algorithm" (below) is a second order sliding
algorithm [13, 14, 29]:

  u̇ = { −u,             if |u| > 1,
        −α_m sign(s),   if sṡ ≤ 0, |u| ≤ 1,
        −α_M sign(s),   if sṡ > 0, |u| ≤ 1.
The algorithm (prescribed law of variation of s [13])

  u̇ = { −u,                    if |u| > 1,
        −α sign(ṡ − g(s)),     if |u| ≤ 1,

constitutes a second-order sliding algorithm on the constraint s = 0, provided
that α > 0 is sufficiently large and the initial conditions are within the
reduced linearity region. The function g(s) is smooth everywhere except at
s = 0. Also, all solutions of the equation ṡ = g(s) vanish in finite time, and
the function g′(s)g(s) is bounded. For example, with λ > 0 and 0.5 ≤ γ < 1,
g(s) = −λ sign(s)|s|^γ may be used.
All the above examples of sliding algorithms use the derivatives of s
calculated with respect to the system. The following is an example that does
not use this property [16]. For α, λ > 0 and 0 < ρ ≤ 1/2 satisfying, in
particular, α > C₀/K_m and α > 4K_M/s₀ (see [16] for the complete set of
conditions), the algorithm

  u = u₁ + u₂,        (1.10)

  u̇₁ = { −u,           if |u| > 1,
         −α sign(s),   if |u| ≤ 1,        (1.11)

  u₂ = { −λ s₀^ρ sign(s),   if |s| > s₀,
         −λ |s|^ρ sign(s),  if |s| ≤ s₀,        (1.12)

constitutes a second-order sliding algorithm on s = 0.
In [25] Levant describes arbitrary order sliding algorithms. These algorithms
use the derivatives of the output to achieve higher order sliding. He also
develops exact, robust differentiators based on algorithms (1.10), (1.11), and
(1.12) that provide derivatives of arbitrary order. In Chapter 3 we review the
robust, exact differentiators developed by Levant and then compare them to
high-gain observers.
1.3 High Gain Observers
In many practical problems we cannot measure all state variables due to
technical or economic reasons. Therefore, we have to use dynamic compensation
to extend state feedback designs to output feedback. One form of dynamic
compensation is to use observers that asymptotically estimate the states from
output measurements. High-gain observers guarantee that the output feedback
controller recovers the performance of the state feedback controller when the
observer gain is sufficiently high. The separation principle allows us to
separate the design into two tasks. First, we design a state feedback
controller that stabilizes the system and meets other design specifications.
Then, we design an output feedback controller by replacing the state x with its
estimate x̂ provided by the high-gain observer. A key property that makes this
separation possible is the design of the state feedback controller to be
globally bounded in x. High-gain observers are robust to model uncertainties
and can be used in a wide range of control problems [22, 21].
Example
Consider the second-order nonlinear system [22]

  ẋ₁ = x₂
  ẋ₂ = φ(x, u)        (1.13)
  y = x₁

where x = [x₁, x₂]ᵀ. Suppose u = γ(x) is a locally Lipschitz state feedback
control law that stabilizes the origin x = 0 of the closed-loop system

  ẋ₁ = x₂
  ẋ₂ = φ(x, γ(x))        (1.14)

To implement this feedback control using only measurement of the output y, we
use the observer

  x̂̇₁ = x̂₂ + h₁(y − x̂₁)
  x̂̇₂ = φ₀(x̂, u) + h₂(y − x̂₁)        (1.15)

where φ₀(x̂, u) is a nominal model of the nonlinear function φ(x, u). The
estimation error

  x̃ = [x̃₁, x̃₂]ᵀ = [x₁ − x̂₁, x₂ − x̂₂]ᵀ

satisfies the equation

  x̃̇₁ = −h₁x̃₁ + x̃₂
  x̃̇₂ = −h₂x̃₁ + δ(x, x̃)        (1.16)

where δ(x, x̃) = φ(x, γ(x̂)) − φ₀(x̂, γ(x̂)). We want to design the observer gain
H = [h₁, h₂]ᵀ
such that lim_{t→∞} x̃ = 0. In the absence of the disturbance term δ, asymptotic
error convergence is achieved by designing H such that

  A₀ = [ −h₁  1
         −h₂  0 ]

is Hurwitz. For this second-order system, A₀ is Hurwitz for any positive
constants h₁ and h₂. In the presence of δ we need to design H with the goal of
rejecting the effect of δ on x̃. This is achieved, for any δ, if the transfer
function

  G₀(s) = (1 / (s² + h₁s + h₂)) [ 1, s + h₁ ]ᵀ

from δ to x̃ is identically zero. While this is not possible, we can make
sup_{ω∈R} |G₀(jω)| arbitrarily small by choosing h₂ ≫ h₁ ≫ 1. In particular,
taking

  h₁ = α₁/ε,  h₂ = α₂/ε²        (1.17)

for some positive constants α₁, α₂, and ε, with ε ≪ 1, it can be shown that

  G₀(s) = (ε / ((εs)² + α₁εs + α₂)) [ ε, εs + α₁ ]ᵀ.
Hence, lim_{ε→0} G₀(s) = 0. The disturbance rejection property of the high-gain
observer can also be seen in the time domain by representing the error equation
(1.16) in the singularly perturbed form. Toward that end, define the scaled
estimation errors

  η₁ = x̃₁/ε,  η₂ = x̃₂.        (1.18)

The newly defined variables satisfy the singularly perturbed equation

  εη̇₁ = −α₁η₁ + η₂,
  εη̇₂ = −α₂η₁ + εδ(x, x̃).        (1.19)

This equation shows clearly that reducing ε diminishes the effect of δ. It also
shows that, for small ε, the scaled estimation error η will be much faster than
x. Notice, however, that η₁(0) will be O(1/ε) whenever x₁(0) ≠ x̂₁(0).
Consequently, the solution of (1.19) will contain a term of the form
(1/ε)e^(−at/ε) for some a > 0. Whereas this exponential mode decays rapidly, it
exhibits an impulsive-like behavior, where the transient peaks to O(1/ε) values
before decaying rapidly toward zero. In fact, the function (a/ε)e^(−at/ε)
approaches an impulse function as ε tends to zero. This behavior is known as
the peaking phenomenon. It is important to realize that the peaking phenomenon
is not a consequence of using the change of variables (1.18) to represent the
error dynamics in the singularly perturbed form. It is an intrinsic feature of
any high-gain observer with h₂ ≫ h₁ ≫ 1. Fortunately, we can overcome the
peaking phenomenon by saturating the control outside a compact region of
interest, creating a buffer that protects the plant from peaking [22].
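The peaking transient can be reproduced with a short simulation of the error equation (1.16) with δ = 0 and the gains (1.17); the choices α₁ = α₂ = 1 and initial error x̃(0) = (1, 0) are illustrative. The peak of |x̃₂| grows like 1/ε, roughly doubling when ε is halved.

```python
# Peaking sketch: error dynamics (1.16) with delta = 0 and gains (1.17),
#   x1' = -(a1/eps)*x1 + x2,   x2' = -(a2/eps^2)*x1,
# started from (1, 0).  The transient of x2 peaks to O(1/eps).
def peak_x2(eps, a1=1.0, a2=1.0, dt=1e-6, T=0.2):
    x1, x2, t, peak = 1.0, 0.0, 0.0, 0.0
    while t < T:
        dx1 = -(a1 / eps) * x1 + x2
        dx2 = -(a2 / eps ** 2) * x1
        x1 += dt * dx1
        x2 += dt * dx2
        peak = max(peak, abs(x2))
        t += dt
    return peak

p1 = peak_x2(0.01)
p2 = peak_x2(0.005)
print(p1, p2, p2 / p1)  # peak roughly doubles when eps is halved
```

In closed loop this transient would be injected into the plant through the control, which is why saturating the control outside the region of interest matters.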
The full-order observer (1.15) provides estimates (x̂₁, x̂₂) that are used to
replace (x₁, x₂) in the feedback control law. Since y = x₁ is measured, we can
use x₁ in the control law and only replace x₂ by x̂₂. Furthermore, we can use
the reduced-order observer

  ẇ = −h(w + hy) + φ₀(x̂, u),
  x̂₂ = w + hy,        (1.20)

where h = α/ε for some positive constants α and ε with ε ≪ 1, to estimate x₂.
The reduced-order high-gain observer (1.20) exhibits the peaking phenomenon
just as the full-order observer does, and it is likewise remedied by saturating
the control.
The high-gain observer is basically an approximate differentiator. When φ₀ is
chosen to be zero, the high-gain observer is linear. The transfer functions of
the full-order observer from y to x̂₁ and x̂₂ are

  (α₁εs + α₂) / ((εs)² + α₁εs + α₂) → 1,
  α₂s / ((εs)² + α₁εs + α₂) → s,

as ε → 0. For the reduced-order observer, the transfer function from y to x̂₂
is

  s / ((ε/α)s + 1) → s  as ε → 0.

Thus, on a compact frequency interval, the high-gain observer approximates ẏ
for sufficiently small ε. Realizing that the high-gain observer is an
approximate differentiator, we can see that measurement noise and unmodeled
high-frequency sensor dynamics put a practical limit on how small ε can be.
Examples of applications to induction motors and mechanical systems are given
in [4, 9, 5]. It is for the first time in [23] and this thesis that this
limitation is quantified. Guidelines are provided for the choice of ε. This
choice takes into account the amplitude of the noise, but it is independent of
the frequency of the noise.
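The differentiator picture can be made concrete by evaluating the gain of the full-order observer from y to x̂₂, |G(jω)| with G(s) = α₂s/((εs)² + α₁εs + α₂). The sketch below uses the illustrative values α₁ = α₂ = 1 and ε = 0.01: the gain tracks ω at low frequency, as a differentiator should, but saturates at about 1/ε instead of growing without bound, and that 1/ε ceiling is exactly the factor that multiplies high-frequency measurement noise.

```python
# Gain from y to x2_hat for the full-order high-gain observer:
#   G(s) = a2*s / ((eps*s)^2 + a1*eps*s + a2)  ->  s  as eps -> 0.
# a1 = a2 = 1 and eps = 0.01 are illustrative values.
a1, a2, eps = 1.0, 1.0, 0.01

def gain(w):
    s = 1j * w
    return abs(a2 * s / ((eps * s) ** 2 + a1 * eps * s + a2))

low = gain(1.0)   # close to 1: behaves like d/dt at low frequency
peak = max(gain(10 ** (k / 100.0)) for k in range(-200, 601))
print(low, peak)  # gain saturates near 1/eps around w = 1/eps
```

This is the frequency-domain counterpart of the ε trade-off quantified in Chapter 2: smaller ε widens the differentiation band but raises the worst-case gain applied to the noise.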
1.3.1 Stabilization

Consider the multi-input multi-output system [22]

  ẋ = Ax + B

  [ 1  s  ⋯  s^(n−2)  s^(n−1) ]ᵀ

as ε → 0. Hence, asymptotically, as ε → 0, system (2.2) acts as a
differentiator.
2.2 The Differentiation Error in the Absence of Noise

For the chain of integrators

  ẋ = Ax + Bu^(n),        (2.3)

with

      [ 0  1  0  ⋯  0 ]        [ 0 ]
      [ 0  0  1  ⋯  0 ]        [ 0 ]
  A = [ ⋮         ⋱  ⋮ ],  B = [ ⋮ ]
      [ 0  0  ⋯  0  1 ]        [ 0 ]
      [ 0  0  ⋯  0  0 ]        [ 1 ]

and state vector

  x = [ u  u̇  ⋯  u^(n−2)  u^(n−1) ]ᵀ,

the nth order observer that estimates the states is given by (2.2). Consider
the scaled estimation error equation

  εη̇ = A₀η + εBu^(n),        (2.4)

where

  η₁ = (u − x̂₁)/ε^(n−1),
  η₂ = (u̇ − x̂₂)/ε^(n−2),
  ⋮
  η_(n−1) = (u^(n−2) − x̂_(n−1))/ε,
  η_n = u^(n−1) − x̂_n,

and

       [ −α₁       1              ]
       [ −α₂       0  1           ]
  A₀ = [  ⋮               ⋱       ]
       [ −α_(n−1)  0  ⋯  0  1    ]
       [ −α_n      0  ⋯  ⋯  0    ]

In order to show that the ultimate bound on η_k in steady state, for
1 ≤ k ≤ n, is of the order O(ε ||u^(n)||∞), where ||u^(n)||∞ =
sup_{t≥0} |u^(n)(t)|, we prove the following lemma.
Lemma 1  Consider the stable linear time-invariant single-input system

  ż = Mz + Nw,        (2.5)

where M is Hurwitz and ||z(0)|| ≤ a. Let

  K_k = ∫₀^∞ | {exp(Mτ)N}_k | dτ,

for 1 ≤ k ≤ n, where {·}_k is the kth component of an n-dimensional vector.
Then

1. For all bounded piecewise continuous signals w with ||w||∞ ≤ c and each
   1 ≤ k ≤ n, z_k is globally uniformly ultimately bounded by¹ δ + K_k||w||∞,
   where δ can be arbitrarily small²; that is, there is a time T = T(a, c, δ)
   such that

     |z_k(t)| ≤ K_k||w||∞ + δ,  ∀t ≥ T.

2. There is a piecewise continuous w with ||w||∞ ≤ c, dependent on k and δ,
   such that

     |z_k(T)| ≥ K_k||w||∞ − δ.

¹Throughout this thesis δ denotes an arbitrarily small quantity.
²If z(0) = 0, then δ = 0.
Proof: The solution of (2.5) as a function of time t is given by

  z(t) = exp(Mt)z(0) + ∫₀ᵗ exp(M(t − σ))N w(σ) dσ.        (2.6)

Since M is Hurwitz,

  lim_{t→∞} {exp(Mt)z(0)}_k = 0,  for 1 ≤ k ≤ n.

For arbitrarily small δ, there is T large enough that |{exp(Mt)z(0)}_k| ≤ δ
for t ≥ T. Hence,

  |z_k(t)| ≤ δ + | ∫₀ᵗ {exp(M(t − σ))N}_k w(σ) dσ |
           ≤ δ + ||w||∞ ∫₀ᵗ | {exp(M(t − σ))N}_k | dσ        (2.7)
           ≤ δ + K_k||w||∞.

On the other hand,

  | z_k(T) − ∫₀ᵀ {exp(M(T − σ))N}_k w(σ) dσ | ≤ δ.

Using the inequality ||a| − |b|| ≤ |a − b|, we have

  | |z_k(T)| − | ∫₀ᵀ {exp(M(T − σ))N}_k w(σ) dσ | | ≤ δ.

Substituting τ = T − σ, we get

  | |z_k(T)| − | ∫₀ᵀ {exp(Mτ)N}_k w(T − τ) dτ | | ≤ δ.

For an input of the form

  w(t) = { c · sign({exp(M(T − t))N}_k),  if 0 ≤ t ≤ T,
           0,                             otherwise,

we have

  | ∫₀ᵀ {exp(Mτ)N}_k w(T − τ) dτ | = ||w||∞ ∫₀ᵀ | {exp(Mτ)N}_k | dτ,

which, for T large enough, is within δ of K_k||w||∞. Hence,

  |z_k(T)| ≥ K_k||w||∞ − δ.
Lemma 1 shows that, by choosing δ arbitrarily small, the ultimate bound on the
kth state of the stable, linear, time-invariant, single-input system (2.5) will
be arbitrarily close to the product of K_k and the infinity norm of the input.
Moreover, this bound is actually attained for some bounded input.

Remark: By Corollary 5.2 in [22], z_k is ultimately bounded by

  δ + (2 λ_max(P) ||N||₂ ||w||∞ / λ_min(Q)) √( λ_max(P)/λ_min(P) ),        (2.8)

where PM + MᵀP = −Q for some positive definite matrix Q, λ_max(·) and λ_min(·)
denote the largest and smallest eigenvalues, and ||N||₂ = √(λ_max(NᵀN)).
However, choosing Q to minimize the bound (2.8) is not straightforward, and
Lemma 1 shows that the bound (2.8) cannot be smaller than K_k||w||∞ + δ.
We now return to the scaled estimation error governed by (2.4). By Lemma 1, the
kth component of the scaled estimation error is ultimately bounded by

    ||u^(n)||∞ ∫₀^∞ |{exp((A_η/ε)τ) B}_k| dτ + δ
        = ε ||u^(n)||∞ ∫₀^∞ |{exp(A_η τ) B}_k| dτ + δ.

Since η_k = (u^(k−1) − x̂_k)/ε^(n−k), the estimation error (u^(k−1) − x̂_k) is
ultimately bounded by

    δ + ε^(n−k+1) ||u^(n)||∞ ∫₀^∞ |{exp(A_η τ) B}_k| dτ,   (2.9)

for 1 ≤ k ≤ n − 1.
2.3 The Differentiation Error for Noisy Signals

Let x̂ be the state of the observer (2.2) when driven by the noisy measurement
v = u + μ, where u is the signal to be differentiated and μ is uniformly
bounded measurement noise. By the linearity of the high-gain observer,

    x̂ = ξ + ζ,

where

    ξ̇ = Aξ + Bu   and   ζ̇ = Aζ + Bμ.

Without loss of generality, we take ζ(0) = 0. The estimation error can be
written as

    x̂_k − u^(k−1) = (ξ_k − u^(k−1)) + ζ_k,   (2.10)

for 1 ≤ k ≤ n, where by Lemma 1 the noise component ζ_k is ultimately bounded
by

    δ + (||μ||∞ / ε^(k−1)) ∫₀^∞ |{exp(A_η τ) B̄}_k| dτ.   (2.11)

Relations (2.9), (2.10) and (2.11) allow the ultimate bound on the
differentiation error for the kth derivative, where 1 ≤ k ≤ n − 1, to take the
form

    δ + P_{k+1} ||u^(n)||∞ ε^(n−k) + Q_{k+1} ||μ||∞ / ε^k ≜ b_k(ε),   (2.12)

where P_k = ∫₀^∞ |{exp(A_η τ) B}_k| dτ and Q_k = ∫₀^∞ |{exp(A_η τ) B̄}_k| dτ.
Relation (2.12) exhibits the two additive components of the ultimate bound on
the differentiation error in the presence of measurement noise. One component
is directly proportional to a power of ε, whereas the other is inversely
proportional to a power of ε. Although shrinking ε to 0 improves the
differentiation error in the absence of noise, the error bound blows up in the
presence of noise since

    lim_{ε→0} b_k(ε) = ∞,   1 ≤ k ≤ n − 1.
The error bound b_k(ε), as a function of ε, attains a global minimum at

    ε = ε_k^opt = ( k Q_{k+1} ||μ||∞ / ((n − k) P_{k+1} ||u^(n)||∞) )^(1/n),

since b_k′(ε_k^opt) = 0 and b_k″(ε) > 0 for all ε > 0 and 1 ≤ k ≤ n − 1. The
parameter ε should be of the order O((||μ||∞ / ||u^(n)||∞)^(1/n)). Note that
the optimal choice of ε depends on the order of the derivative k, meaning that
if, for instance, we use a 3rd order high-gain observer, the optimal choice of
ε for estimating the first derivative will differ from the optimal choice for
estimating the second derivative. However, the order of ε_k^opt is the same for
all 1 ≤ k ≤ n − 1. We show via simulation in the following section that
averaging ε_k^opt over 1 ≤ k ≤ n − 1 does not severely damage the performance
of the high-gain observer.
Figure 2.1 depicts the behavior of the bound as a function of ε. It is possible
to choose the parameter ε such that a predefined tolerance for the estimation
error is met.

Figure 2.1: The error bound as a function of the high-gain observer parameter ε.

For instance, if the tolerance for the error is 0.3, for any ε between 0.01 and
0.05 the estimation error is guaranteed to be within the tolerance. If
ε = ε_k^opt, the error bound takes the form

    b_k(ε_k^opt) = (Q_{k+1} ||μ||∞)^((n−k)/n) (P_{k+1} ||u^(n)||∞)^(k/n)
                   (n/k − 1)^(k/n) (n/(n − k)) + δ,   (2.13)

for 1 ≤ k ≤ n − 1.
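As a numerical sanity check (not in the thesis), the closed forms for ε_k^opt and (2.13) can be compared against a brute-force minimization of b_k(ε). The constants P₂ = 1 and Q₂ = 3.1297 below are those computed in Example 1 of the next section (n = 3, k = 1), together with the bounds ||μ||∞ = 0.012 and ||u^(3)||∞ = 1 used in the later simulations; δ is taken as 0:

```python
n, k = 3, 1
P, Q = 1.0, 3.1297      # P_{k+1}, Q_{k+1} from Example 1 (n = 3)
mu, un = 0.012, 1.0     # ||mu||_inf and ||u^(n)||_inf; delta taken as 0

def b(eps):
    # the two epsilon-dependent terms of the bound b_k(eps) in (2.12)
    return P*un*eps**(n - k) + Q*mu/eps**k

# closed-form minimizer and minimum value (2.13), with delta = 0
eps_opt = (k*Q*mu/((n - k)*P*un))**(1.0/n)
b_opt = ((Q*mu)**((n - k)/n) * (P*un)**(k/n)
         * (n/k - 1)**(k/n) * n/(n - k))

# brute-force check on a fine epsilon grid; b is convex for eps > 0
eps_star = min((i*1e-5 for i in range(1, 100_000)), key=b)
print(eps_opt, b_opt)
```

The grid minimizer agrees with ε₁^opt ≈ 0.266, and evaluating b at ε₁^opt reproduces the closed-form minimum, confirming the algebra behind (2.13).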
The error bound (2.13) depends on the bound on the noise ||μ||∞, the bound on
the nth derivative of the signal ||u^(n)||∞, and, through P_k and Q_k, on the
eigenvalues of the matrix A_η and the order of the observer n. We will explore
this dependence via simulation in the next sections.

The bound (2.12) is not useful unless we can compute P_k and Q_k, for
2 ≤ k ≤ n. In the following section, we provide an algorithm to compute P_k and
Q_k, for 1 ≤ k ≤ n, and illustrate the idea by an example.
2.4 Computation of the Bound

To illustrate how the constants P_k and Q_k, for 1 ≤ k ≤ n, could be computed,
we start with an example.

Example 1. Suppose we want to design a 3rd order high-gain observer to estimate
the first and second derivatives of a bounded signal. Suppose further that we
want to place the eigenvalues of the observer at λ₁ = −1, λ₂ = −2, and λ₃ = −3.
The matrices A_η, B and B̄ for this choice of eigenvalues are

    A_η = [ −6   1   0 ]        B = [ 0 ]        B̄ = [  6 ]
          [ −11  0   1 ] ,          [ 0 ] ,          [ 11 ]
          [ −6   0   0 ]            [ 1 ]            [  6 ]
The vectors exp(A_η τ)B and exp(A_η τ)B̄ are

    exp(A_η τ)B = [ (1/2)e^(−3τ) − e^(−2τ) + (1/2)e^(−τ)   ]
                  [ (3/2)e^(−3τ) − 4e^(−2τ) + (5/2)e^(−τ)  ]
                  [ e^(−3τ) − 3e^(−2τ) + 3e^(−τ)           ]

    exp(A_η τ)B̄ = [ (27/2)e^(−3τ) − 8e^(−2τ) + (1/2)e^(−τ)   ]
                  [ (81/2)e^(−3τ) − 32e^(−2τ) + (5/2)e^(−τ)  ]
                  [ 27e^(−3τ) − 24e^(−2τ) + 3e^(−τ)          ]

The constants P_k and Q_k are given by

    P₁ = ∫₀^∞ |(1/2)e^(−3τ) − e^(−2τ) + (1/2)e^(−τ)| dτ
    P₂ = ∫₀^∞ |(3/2)e^(−3τ) − 4e^(−2τ) + (5/2)e^(−τ)| dτ
    P₃ = ∫₀^∞ |e^(−3τ) − 3e^(−2τ) + 3e^(−τ)| dτ
    Q₁ = ∫₀^∞ |(27/2)e^(−3τ) − 8e^(−2τ) + (1/2)e^(−τ)| dτ
    Q₂ = ∫₀^∞ |(81/2)e^(−3τ) − 32e^(−2τ) + (5/2)e^(−τ)| dτ
    Q₃ = ∫₀^∞ |27e^(−3τ) − 24e^(−2τ) + 3e^(−τ)| dτ
We show in detail how to compute

    Q₂ = ∫₀^∞ |(81/2)e^(−3τ) − 32e^(−2τ) + (5/2)e^(−τ)| dτ.

Denote g(t) = (81/2)e^(−3t) − 32e^(−2t) + (5/2)e^(−t). We want to compute

    ∫₀^∞ |g(t)| dt.
Figure 2.2 displays the graph of g(t). Recall that the definite integral is the
numerical value of the signed area between the graph of the function and the
abscissa.

Figure 2.2: The function g(t).

The function g(t) has zeros at t₁ = 0.3535 and t₂ = 2.4315. We evaluate

    ∫₀^∞ |g(t)| dt = |∫₀^(t₁) g(t) dt| + |∫_(t₁)^(t₂) g(t) dt| + |∫_(t₂)^∞ g(t) dt|.

In other words, to evaluate the integral of the absolute value of a function,
we add up the absolute values of the integrals of the function between its
zeros. Symbolic software such as MATLAB and Mathematica has the capacity to
compute P_k and Q_k, for 1 ≤ k ≤ n, for a given linear system. We provide as an
Appendix to this thesis a MATLAB script that computes the constants P_k and
Q_k, for 1 ≤ k ≤ n, from the eigenvalues of the matrix A_η. Application of the
script to the current example yields

    P₁ = 0.1667,  P₂ = 1,  P₃ = 1.8333,   Q₁ = 1.4116,  Q₂ = 3.1297,  Q₃ = 1.8292.
We now summarize the algorithm. To compute P_k and Q_k, where 1 ≤ k ≤ n:

• Compute the matrix exp(A_η t).
• Compute the kth entry F_k(t) of the n-dimensional vector function
  F(t) = exp(A_η t)B.
• Compute the kth entry G_k(t) of the n-dimensional vector function
  G(t) = exp(A_η t)B̄.
• Find the zeros of F_k(t).
• Find the zeros of G_k(t).
• To compute P_k, integrate F_k(t) between its zeros and add the absolute
  values of the obtained integrals.
• To compute Q_k, integrate G_k(t) between its zeros and add the absolute
  values of the obtained integrals.
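The appendix script is written in MATLAB; as an independent cross-check of Example 1, the same constants can be obtained by numerically integrating the modal expansions given above (a Python sketch, not from the thesis):

```python
import math

# Numerical cross-check of Example 1: integrate |{exp(A_eta t)B}_k| and
# |{exp(A_eta t)Bbar}_k| using the modal expansions listed in the text.
F = [
    lambda t: 0.5*math.exp(-3*t) - math.exp(-2*t) + 0.5*math.exp(-t),
    lambda t: 1.5*math.exp(-3*t) - 4.0*math.exp(-2*t) + 2.5*math.exp(-t),
    lambda t: math.exp(-3*t) - 3.0*math.exp(-2*t) + 3.0*math.exp(-t),
]
G = [
    lambda t: 13.5*math.exp(-3*t) - 8.0*math.exp(-2*t) + 0.5*math.exp(-t),
    lambda t: 40.5*math.exp(-3*t) - 32.0*math.exp(-2*t) + 2.5*math.exp(-t),
    lambda t: 27.0*math.exp(-3*t) - 24.0*math.exp(-2*t) + 3.0*math.exp(-t),
]

def l1_norm(g, T=40.0, n=100_000):
    # composite trapezoidal rule for the integral of |g| over [0, T];
    # the tail beyond T is negligible for these decaying exponentials
    h = T/n
    s = 0.5*(abs(g(0.0)) + abs(g(T)))
    for i in range(1, n):
        s += abs(g(i*h))
    return s*h

P = [l1_norm(f) for f in F]
Q = [l1_norm(g) for g in G]
print([round(p, 4) for p in P])  # -> [0.1667, 1.0, 1.8333]
print([round(q, 4) for q in Q])  # -> [1.4116, 3.1297, 1.8292]
```

The numerical values reproduce the tabulated constants; splitting at the zeros, as the algorithm prescribes, is only needed when the integrals are evaluated symbolically.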
2.5 Conservatism of the Bound

The bound given by (2.12) is conservative in the sense that it does not take
into account the low-pass filtering characteristics of the high-gain observer.
To illustrate this point, consider the transfer function of a 2nd order
high-gain observer that estimates the first derivative of the input:

    T(s) = α₂ s / (ε²s² + ε α₁ s + α₂).   (2.14)
For sinusoidal noise μ of frequency ω,

    ||ζ₂||∞ ≤ |T(jω)| ||μ||∞,

whereas

    |T(jω)| = α₂ ω / √((α₂ − ω²ε²)² + (α₁ ω ε)²).

Figure 2.3: The gain of the high-gain observer as a function of frequency.

The resonance frequency that maximizes |T(jω)| for the second-order system
(2.14) is of the order O(1/ε). Although at the resonance frequency the noise
contribution might approach the bound b_k(ε) in (2.12), |T(jω)| → 0 as ω → ∞,
so the high-frequency components of the noise will be attenuated by the
low-pass filtering characteristics of the high-gain observer. Figure 2.3 shows
the plots of |T(jω)| as a function of the frequency ω for several values of ε.
Note that the gain increases as ε decreases; however, it is always a bounded
function and it rolls off at high frequencies.
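To make this concrete, |T(jω)| can be evaluated numerically. With α₁ = 2 and α₂ = 1 (both eigenvalues of A_η at −1, the multiple-real design used in the simulations; an assumption here), the expression simplifies to |T(jω)| = ω/(1 + (ωε)²), which peaks at ω = 1/ε with value 1/(2ε) and then rolls off:

```python
import math

# |T(jw)| for T(s) = a2*s/(eps^2 s^2 + eps*a1*s + a2), with a1 = 2, a2 = 1
# (double eigenvalue at -1, assumed for illustration).
def gain(w, eps, a1=2.0, a2=1.0):
    return a2*w / math.sqrt((a2 - (w*eps)**2)**2 + (a1*w*eps)**2)

eps = 0.01
ws = [0.5*i for i in range(1, 2000)]
w_peak = max(ws, key=lambda w: gain(w, eps))   # resonance on the grid
print(w_peak, round(gain(w_peak, eps), 2))  # -> 100.0 50.0
```

The peak of 50 at ω = 100 = 1/ε matches the O(1/ε) resonance, and the gain at much higher frequencies is small, illustrating the roll-off.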
2.6 Computer Simulation

2.6.1 The Effect of the Eigenvalues and the Order of the Observer on the Error
Bound

Recall from (2.12) that the value of ε that minimizes the error bound b_k(ε),
where the kth derivative is estimated with an nth order high-gain observer, is
given by

    ε_k^opt = ( k Q_{k+1} ||μ||∞ / ((n − k) P_{k+1} ||u^(n)||∞) )^(1/n),

whereas

    b_k(ε_k^opt) = (Q_{k+1} ||μ||∞)^((n−k)/n) (P_{k+1} ||u^(n)||∞)^(k/n)
                   (n/k − 1)^(k/n) (n/(n − k)) + δ.   (2.15)

The error bound (2.15) depends on the magnitude of the noise ||μ||∞, the bound
on the nth derivative of the signal ||u^(n)||∞, and, through P_k and Q_k, on
the eigenvalues of the matrix A_η and the order of the observer n. In (2.15),
only

    (Q_{k+1})^((n−k)/n) (P_{k+1})^(k/n)   (2.16)

depends on the eigenvalues of the matrix A_η. To investigate the dependence of
(2.16) on the eigenvalues of the matrix A_η, we computed (2.16) for different
sets of eigenvalues. The results are summarized in Tables 2.1, 2.2, 2.3 and
2.4. Since the eigenvalues of the high-gain observer are equal to the
eigenvalues of A_η rescaled by 1/ε, it is not surprising that the value of
(2.16) does not change if all eigenvalues of A_η are rescaled by the same
constant. In other words, rescaling the eigenvalues of A_η will result in a
rescaled ε_k^opt, but the bound b_k(ε_k^opt) will not change.
Table 2.1: The constant (Q₂P₂)^(1/2) for k = 1 and n = 2

    Eigenvalues:   −1, −1   −1, −3   −1, −5   −1, −10   e^(±j3π/4)
    (Q₂P₂)^(1/2):  1.2131   1.2408   1.2669   1.3051    1.1356
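Two of these entries can be reproduced numerically. The impulse responses below, for the second state driven by B and B̄ respectively, are derived here by partial fractions for each eigenvalue pair (the derivation is not shown in the table itself); a Python check, not from the thesis:

```python
import math

# Check of two Table 2.1 entries (k = 1, n = 2): the quantity is sqrt(Q2*P2).
def l1_norm(g, T=40.0, n=100_000):
    # trapezoidal rule for the integral of |g| over [0, T]
    h = T/n
    s = 0.5*(abs(g(0.0)) + abs(g(T)))
    for i in range(1, n):
        s += abs(g(i*h))
    return s*h

# eigenvalues -1, -1: responses (1+t)e^{-t} (signal path) and (1-t)e^{-t} (noise path)
r11 = math.sqrt(l1_norm(lambda t: (1.0 + t)*math.exp(-t))
                * l1_norm(lambda t: (1.0 - t)*math.exp(-t)))
# eigenvalues -1, -3: responses 1.5e^{-t} - 0.5e^{-3t} and -1.5e^{-t} + 4.5e^{-3t}
r13 = math.sqrt(l1_norm(lambda t: 1.5*math.exp(-t) - 0.5*math.exp(-3*t))
                * l1_norm(lambda t: -1.5*math.exp(-t) + 4.5*math.exp(-3*t)))
print(round(r11, 4), round(r13, 4))  # -> 1.2131 1.2408
```

In closed form, P₂ = 2 and Q₂ = 2/e for the double eigenvalue at −1, giving √(Q₂P₂) = 1.2131 as tabulated.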
Table 2.2: The expression (Q_{k+1})^((n−k)/n) (P_{k+1})^(k/n) for 1 ≤ k ≤ 2 and
n = 3

    Derivative                    −1,−1,−1   −1,−3,−5   −1,−5,−10   e^(±j2π/3), −1
    1st: (Q₂)^(2/3) (P₂)^(1/3)    2.0950     2.2135     2.3937      1.7607
    2nd: (Q₃)^(1/3) (P₃)^(2/3)    1.7371     1.9150     2.0928      1.3608
When the eigenvalues are real, the error bound is smallest for equal, multiple
eigenvalues, and it increases as the distance between the eigenvalues
increases.
Table 2.3: The expression (Q_{k+1})^((n−k)/n) (P_{k+1})^(k/n) for 1 ≤ k ≤ 3 and
n = 4

    Derivative                    −1,−1,−1,−1   −1,−5,−10,−15   −1,−1,e^(±j3π/4)   e^(±j5π/8), e^(±j7π/8)
    1st: (Q₂)^(3/4) (P₂)^(1/4)    3.2164        3.8403          2.6351             2.4524
    2nd: (Q₃)^(1/2) (P₃)^(1/2)    3.6065        5.0592          2.6456             2.3237
    3rd: (Q₄)^(1/4) (P₄)^(3/4)    2.4465        3.5602          2.7560             2.3306
The error bound is smaller for complex eigenvalues distributed on a semicircle
of radius one in a Butterworth pattern than for real, multiple eigenvalues.
However, it is argued in [9] that although the steady-state error is smaller
for complex eigenvalues, the transient response is oscillatory and the
transient time is longer than for multiple, real eigenvalues. To investigate
this phenomenon, we simulated real-time differentiation with 2nd, 3rd and 4th
order high-gain observers for both choices of eigenvalues: multiple real
eigenvalues and complex eigenvalues distributed on a semicircle in a
Butterworth pattern. We differentiated noisy and noise-free sinusoids.
We simulated high-gain observers (2.2) for n = 2, n = 3 and n = 4. The signal
we differentiated is u(t) = sin(t), where t is time. In the absence of noise,
we set ε = 0.01. For the 2nd order observer with A_η eigenvalues
λ_{1,2} = e^(±j3π/4), the coefficients of the observer are α₁ = 1.4142 and
α₂ = 1. For A_η with multiple real eigenvalues placed at −1, we have α₁ = 2 and
α₂ = 1. Figure 2.4 shows the estimation error, where the first derivative is
estimated with a 2nd order high-gain observer. The amplitude of the
steady-state error is 0.014 for complex eigenvalues and 0.02 for real
eigenvalues.
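The reported amplitude of 0.02 for real eigenvalues is consistent with the noise-free bound εP₂||u″||∞ = 0.01 · 2 · 1 = 0.02 (P₂ = 2 for the double eigenvalue at −1, derived by hand here) and can be reproduced by a direct simulation; a Python sketch assuming the standard high-gain observer structure x̂̇₁ = x̂₂ + (α₁/ε)(u − x̂₁), x̂̇₂ = (α₂/ε²)(u − x̂₁):

```python
import math

# Euler simulation of a 2nd order high-gain observer differentiating
# u(t) = sin t without noise: eps = 0.01, alpha1 = 2, alpha2 = 1 (both
# eigenvalues of A_eta at -1). The observer structure is assumed, not
# restated in this section.
eps, a1, a2 = 0.01, 2.0, 1.0
dt, T = 1e-3, 10.0
xh1 = xh2 = 0.0
amp, t = 0.0, 0.0
while t < T:
    e = math.sin(t) - xh1          # innovation u - xh1
    xh1 += (xh2 + (a1/eps)*e)*dt
    xh2 += ((a2/eps**2)*e)*dt
    t += dt
    if t > 5.0:                    # measure after the fast transient
        amp = max(amp, abs(math.cos(t) - xh2))
print(round(amp, 3))
```

The measured steady-state amplitude is approximately 0.02, matching the figure and the bound.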
Table 2.4: The expression (Q_{k+1})^((n−k)/n) (P_{k+1})^(k/n) for 1 ≤ k ≤ 4 and
n = 5

    Derivative                    −1,−1,−1,−1,−1   −1,−2,−3,−4,−5   −1,−3,−5,−7,−9   −1,−5,−10,−15,−20
    1st: (Q₂)^(4/5) (P₂)^(1/5)    4.5223           4.7229           4.9790           5.5275
    2nd: (Q₃)^(3/5) (P₃)^(2/5)    6.5034           7.2341           8.0703           9.9357
    3rd: (Q₄)^(2/5) (P₄)^(3/5)    5.7919           6.5768           7.5188           9.7568
    4th: (Q₅)^(1/5) (P₅)^(4/5)    3.1808           3.4989           3.8921           4.8245
For the 3rd order observer with A_η eigenvalues λ_{1,2} = e^(±j3π/4), λ₃ = −1,
we have α₁ = α₂ = 2.4142 and α₃ = 1, whereas α₁ = α₂ = 3 and α₃ = 1 if A_η has
multiple real eigenvalues placed at −1. Figure 2.5 shows the estimation error,
where the 1st and 2nd derivatives are estimated with a 3rd order high-gain
observer. For the first derivative, the amplitude of the steady-state error is
0.00024 for complex eigenvalues and 0.000299 for real eigenvalues. For the
second derivative, the amplitude of the steady-state error is 0.024 for complex
eigenvalues and 0.03 for real eigenvalues.

For the 4th order observer with A_η eigenvalues λ_{1,2} = e^(±j2π/3),
λ_{3,4} = e^(±j5π/6), the coefficients of the observer are α₁ = α₃ = 2.7321,
α₂ = 3.7321, and α₄ = 1. For A_η with multiple real eigenvalues placed at −1,
the coefficients are α₁ = α₃ = 4, α₂ = 6, and α₄ = 1. Figure 2.6 shows the
estimation error, where the 1st, 2nd and 3rd derivatives are estimated with a
4th order high-gain observer. For the first derivative, the amplitude of the
steady-state error is 0.000002 for complex eigenvalues and 0.000004 for real
eigenvalues. For the second derivative, the amplitude of the steady-state error
for complex eigenvalues is
Figure 2.4: The estimation error as a function of time. The first derivative of
a sinusoidal signal is estimated with a second order high-gain observer with
ε = 0.01.
0.00037, and 0.0006 for real eigenvalues. For the third derivative, the
amplitude of the steady-state error is 0.027 for complex eigenvalues and 0.04
for real eigenvalues. Although the steady-state error might be smaller with
complex eigenvalues, the transient response is more oscillatory and the
transient time is longer than with multiple, real eigenvalues.
In the absence of noise, the estimation error is bounded by

    ε^(n−k+1) ||u^(n)||∞ P_k,   1 ≤ k ≤ n,   (2.17)

where k is the estimated derivative and n is the order of the observer. In the
absence of noise, reducing ε improves the estimation error. For ε < 1,
ε^(n−k+1)
Figure 2.5: The estimation error as a function of time. The derivatives of a
sinusoidal signal are estimated with a third order high-gain observer with
ε = 0.01.
is a decreasing function of the order of the observer n. On the other hand, we
observed from simulation that for a fixed derivative k, P_k is an increasing
function of the order of the observer n. Whether the bound (2.17) increases or
decreases as a function of the order n depends on the choice of ε, the order of
the estimated derivative k, and the differentiated signal through ||u^(n)||∞.
Figures 2.4, 2.5 and 2.6 indicate that for ε = 0.01 and u(t) = sin(t), the
estimate of the first derivative with a 4th order observer is better than the
one with a 3rd order observer, whereas the estimate with a 3rd order observer
is better than the estimate with a 2nd order observer. Also, Figures 2.5 and
2.6 indicate that for ε = 0.01 and u(t) = sin(t), the estimate of the second
derivative with a 4th order observer is better than the estimate of the second
derivative with a 3rd order observer.
Figure 2.6: The estimation error as a function of time. The derivatives of a
sinusoidal signal are estimated with a fourth order high-gain observer with
ε = 0.01.
To investigate the effect of the eigenvalues of A_η on the estimation error in
the presence of measurement noise, we differentiated the signal
u_noisy(t) = sin(t) + μ(t), where μ(t) is noise obtained from the SIMULINK
Band-Limited White Noise generator. The noise power is 10⁻⁸ and the sampling
time is 0.001 seconds. Figure 2.7 displays the noise in the time and frequency
domains.

We chose the parameter ε as (1/(n−1)) Σ_{k=1}^{n−1} ε_k^opt, where n = 2, 3 or
4. To justify this choice we show in Figure 2.8 the error bound b_k(ε) as a
function of ε, given by relation (2.12), for a 4th order high-gain observer
with multiple real eigenvalues. If u(t) = sin(t), then ||u^(4)||∞ = 1. We took
||μ||∞ = 0.012. The value of ε that minimizes the error bound for the first
derivative b₁(ε) is ε₁^opt = 0.2339. The value of ε that minimizes the error
bound for the second
Figure 2.7: The noise μ(t) in the time and frequency domains.
derivative b₂(ε) is ε₂^opt = 0.2566. The value of ε that minimizes the error
bound for the third derivative b₃(ε) is ε₃^opt = 0.2664. For the 4th order
high-gain observer with multiple real eigenvalues we set
ε = (0.2339 + 0.2566 + 0.2664)/3 = 0.2523. Note that this choice of ε does not
significantly increase the error bound for the derivatives.

For the 2nd order observer with multiple real eigenvalues, ε₁^opt = 0.0664. For
the 2nd order observer with complex eigenvalues, ε₁^opt = 0.087963. Figure 2.9
displays the estimation error for the first derivative with a 2nd order
high-gain observer.

For the 3rd order observer with multiple real eigenvalues, we set
ε = (ε₁^opt + ε₂^opt)/2 = 0.1612. For the 3rd order observer with complex
eigenvalues,
Figure 2.8: The error bound for the derivatives as a function of ε for a 4th
order high-gain observer with multiple real eigenvalues.
we set ε = (ε₁^opt + ε₂^opt)/2 = 0.16739. Figure 2.10 displays the estimation
error for the first and second derivatives with a 3rd order high-gain observer.

For the 4th order observer with multiple real eigenvalues, we set
ε = (ε₁^opt + ε₂^opt + ε₃^opt)/3 = 0.2523. For the 4th order observer with
complex eigenvalues, we set ε = (ε₁^opt + ε₂^opt + ε₃^opt)/3 = 0.26093. Figure
2.11 displays the estimation error for the first, second and third derivatives
with a 4th order high-gain observer.

Recall that b_k(ε_k^opt), given by (2.15) for a fixed 1 ≤ k ≤ n − 1, depends on
the order of the observer n. We observed that

    (Q_{k+1})^((n−k)/n) (P_{k+1})^(k/n) (n/k − 1)^(k/n) (n/(n − k))

is an increasing function of the order of the observer n for a fixed
1 ≤ k ≤ n − 1. Hence, whether the error bound b_k(ε_k^opt) for a fixed
1 ≤ k ≤ n − 1 is an
Figure 2.9: The estimation error as a function of time. The first derivative of
a noisy sinusoidal signal is estimated with a 2nd order high-gain observer. The
magnitude of the noise is ||μ||∞ = 0.012, ||u^(2)||∞ = 1, and ε = ε₁^opt.
increasing or decreasing function of the order of the observer n depends on the
ratio ||u^(n)||∞ / ||μ||∞. Clearly, if ||u^(n)||∞ / ||μ||∞ ≤ 1, the error bound
is an increasing function of the order n and it is best to estimate the kth
derivative with an observer of order k + 1. But if ||u^(n)||∞ / ||μ||∞ > 1, it
might be better to estimate the kth derivative with an observer of order higher
than k + 1.
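This trade-off can be illustrated with the constants computed earlier, for the first derivative. The pair (P₂, Q₂) = (2, 0.7358) for n = 2 (double eigenvalue at −1, derived by hand, not tabulated in this section) and (P₂, Q₂) = (1, 3.1297) for n = 3 (Example 1) use different eigenvalue placements, so the comparison is only indicative:

```python
# Indicative comparison of b_1(eps_1_opt) from (2.15), with delta = 0,
# ||mu|| = 0.012 and ||u^(n)|| = 1, for observer orders n = 2 and n = 3.
def b_opt(n, k, P, Q, mu=0.012, un=1.0):
    # closed-form minimum of the bound (2.15), delta omitted
    return ((Q*mu)**((n - k)/n) * (P*un)**(k/n)
            * (n/k - 1)**(k/n) * n/(n - k))

b2 = b_opt(2, 1, 2.0, 0.7358)     # 2nd order observer, first derivative
b3 = b_opt(3, 1, 1.0, 3.1297)     # 3rd order observer, first derivative
print(b3 < b2)  # -> True
```

For these values, ||u^(n)||∞/||μ||∞ ≈ 83 > 1, and indeed the 3rd order bound (≈ 0.212) is smaller than the 2nd order bound (≈ 0.266), in line with the simulation results reported next.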
In our simulation, for u(t) = sin(t) and ||μ||∞ = 0.012, the estimate of the
first derivative with a 4th order observer is better than the estimate with a
3rd order observer, which is better than the estimate with a 2nd order
observer. Also, the estimate of the second derivative with a 4th order observer
is better than the estimate of the second derivative with a 3rd order observer.
For the
Figure 2.10: The estimation error as a function of time. The derivatives of a
noisy sinusoidal signal are estimated with a 3rd order high-gain observer. The
magnitude of the noise is ||μ||∞ = 0.012, ||u^(3)||∞ = 1, and
ε = (ε₁^opt + ε₂^opt)/2.
convenience of practicing engineers, we provide as an Appendix to this thesis a
table of the constants P_k and Q_k for 2 ≤ k ≤ n, 2 ≤ n ≤ 10, for multiple real
eigenvalues.
2.6.2 Comparison Between the Actual Estimation Error and the Error Bound

To investigate how the estimation error relates to the error bound (2.12), we
simulated differentiation in the presence of measurement noise for three
different signals.

Figure 2.11: The estimation error as a function of time. The derivatives of a
noisy sinusoidal signal are estimated with a 4th order high-gain observer. The
magnitude of the noise is ||μ||∞ = 0.012, ||u^(4)||∞ = 1, and
ε = (ε₁^opt + ε₂^opt + ε₃^opt)/3.

For a noisy signal³ u_noisy(t) = u(t) + μ(t), we varied the parameter ε and
computed the steady-state error as max_{t>T} |u^(k)(t) − x̂_{k+1}(t)|, where
x̂_{k+1} is the (k+1)th state of the high-gain observer with input u_noisy(t),
t is time, and T > 0 is the transient time. We used a 3rd order high-gain
observer with multiple real eigenvalues in the simulation.

³The noise μ(t) is obtained from the SIMULINK Band-Limited White Noise
generator. The noise power is 10⁻⁸ and the sampling time is 0.0001 seconds.
Figure 2.7 displays the noise in the time and frequency domains.
For u(t) being the state x₁(t) of the 3rd order nonlinear system⁴

    [ẋ₁]   [ 0 1 0 ] [x₁]   [ 0 ]
    [ẋ₂] = [ 0 0 1 ] [x₂] + [ 0 ] b(x₁, x₂, x₃),
    [ẋ₃]   [ 0 0 0 ] [x₃]   [ 1 ]

where

    b(x₁,x₂,x₃) = σ(x₁,x₂,x₃) g_s(x₁,x₂,x₃) + (1 − σ(x₁,x₂,x₃)) g_u(x₁,x₂,x₃),
    σ(x₁,x₂,x₃) = (x₁² + x₂² + x₃²) / (1 + x₁² + x₂² + x₃²),
    g_s(x₁,x₂,x₃) = −54x₁ − 36x₂ − 9x₃,
    g_u(x₁,x₂,x₃) = 54x₁ − 36x₂ + 9x₃,   (2.18)

the actual estimation error compared to the error bound for the derivatives is
depicted in Figure 2.12.

For

    u = y − [10 sin(0.05x) + 5],   (2.19)

where x and y are the states of the system⁵

    ẋ = V cos φ,
    ẏ = V sin φ,
    φ̇ = (V/l) tan θ,
    θ̇ = ξ,
    ξ = −20 sign{ü + 3(u̇⁶ + u⁴ + |u|³)^(1/12) sign[u̇ + (u⁴ + |u|³)^(1/6)
        sign(u + 0.5|u|^(3/4) sign(u))]},

the actual estimation error compared to the error bound for the derivatives is
depicted in Figure 2.13. For u(t) = sin(t), the actual estimation error compared

⁴System (2.18) was used in [10] and [9] to test a numerical differentiator.
⁵The signal (2.19) is taken from [25].
Figure 2.12: Actual error compared to the error bound (2.12) for estimating the
first and second derivatives of u as described by (2.18) with a 3rd order
high-gain observer.
to the error bound for the derivatives is depicted in Figure 2.14. Note that
for all signals used in this simulation, that is, u(t) given by (2.18), u(t)
given by (2.19) and u(t) = sin(t), the value of ε that yields the smallest
error differs from ε_k^opt, which minimizes the error bound b_k(ε) in (2.12).
Relation (2.12) means that for all signals with ||u^(n)||∞ ≤ A, the error is
guaranteed to be less than

    P_{k+1} A ε^(n−k) + Q_{k+1} ||μ||∞ / ε^k.

Choosing ε = ε_k^opt to minimize the error bound b_k(ε) guarantees that the
estimation error will be less than b_k(ε_k^opt), which is not violated in
simulation. It
Figure 2.13: Actual error compared to the error bound (2.12) for estimating the
first and second derivatives of u given by (2.19) with a 3rd order high-gain
observer.
is interesting to observe that the "best" ε for some signals with
||u^(n)||∞ ≤ A could be different from ε_k^opt. However, the analytic choice of
ε is consistent with the actual error.
Figure 2.14: Actual error compared to the error bound (2.12) for estimating the
first and second derivatives of u(t) = sin(t) with a 3rd order high-gain
observer.
Chapter 3

A Comparison Between High-Gain Observers and Exact Robust Sliding-Mode
Differentiators

In this chapter we review the arbitrary-order robust exact sliding-mode
differentiators developed by A. Levant in [24] and [25], compare their features
to those of high-gain observers, and compare the performance of the two
observers via extensive simulation in the presence of measurement noise.
3.1 Robust Exact Differentiation via the Sliding-Mode Technique

In [24] and [25] A. Levant developed differentiators that are exact in the
absence of noise after some transient time that could be made arbitrarily
small, and that provide for an error bound of the order
O(||μ||∞^(1−k/n) ||u^(n)||∞^(k/n)) in the presence of noise, where μ is
uniformly bounded noise, the kth derivative is estimated with n > k, and the
nth derivative of the differentiated signal u is bounded. We state the theorems
about the robust exact differentiator as presented by A. Levant in [24] and
[25]. See [24] and [25] for the proofs of the stated theorems.
Let the input signal u_noisy(t) = u(t) + μ(t) be a measurable, locally bounded
function defined on [0, ∞), where u(t) is the base signal that has a derivative
with Lipschitz constant¹ L > 0 and μ(t) is noise. The differentiation problem
is formulated as a control problem to keep the constraint s = u − x = 0. For
2-sliding we have ṡ = u̇ − ẋ = 0, and hence ẋ = u̇. Toward that end, consider
the auxiliary equation

    ẋ = y.   (3.1)

Applying a modified 2-sliding algorithm (see [26]) to keep x − u(t) = 0, we
obtain

    y = y₁ − λ |x − u_noisy(t)|^(1/2) sign(x − u_noisy(t)),
    ẏ₁ = −α sign(x − u_noisy(t)).   (3.2)
To ensure 2-sliding on s = x − u = 0, define Φ(α, λ, L) = |Ψ(t*)|, where
(Σ(t), Ψ(t)) is the solution of

    Σ̇ = −|Σ|^(1/2) + Ψ,
    Ψ̇ = −λ(α − L)  if −|Σ|^(1/2) + Ψ > 0,
    Ψ̇ = −λ(α + L)  if −|Σ|^(1/2) + Ψ ≤ 0,   (3.3)

    Σ(0) = 0,  Ψ(0) = 1,  α > L,  λ > 0,  and
    t* = sup{t : t > 0, Σ(t) ≥ 0, Ψ(t) < 0}.

Choose α > L, λ > 0 with Φ(α, λ, L) < 1. The output of the system (3.1), (3.2)
is y(t) = ẋ(t) = u′(t), whereas the solutions of (3.1), (3.2) are understood in
the Filippov sense [17]. In practice, Φ(α, λ, L) is to be calculated via
computer simulation.

¹If the input u(t) is twice differentiable with bounded second derivative, the
Lipschitz constant of the first derivative is equal to ||u″||∞.
Theorem 1 Let α > L, λ > 0 be such that Φ(α, λ, L) < 1. Then, with μ(t) ≡ 0,
provided u(t) has a derivative with Lipschitz constant L, the equality
y(t) = u′(t) is fulfilled identically after a finite-time transient process.
Theorem 1 means that after a finite transient time, the output of the
differentiator (3.1) and (3.2) is the exact derivative of the input in the
absence of noise, provided that Φ(α, λ, L) < 1 holds. It could be inferred from
the proof of Theorem 1 [24] that a smaller Φ(α, λ, L) yields faster convergence
of y(t) to u′(t). Also, for fixed α, increasing λ decreases Φ. A sufficient
condition for convergence, resulting from a crude estimation, is

    α > L,   λ² ≥ 4L (α + L) / (α − L).

The substitutions α = k₁ L, λ = k₂ √L, for k₁ > 1, k₂ > 0, in (3.3) eliminate L
from the equations for Φ, enabling Φ to be computed regardless of the Lipschitz
constant L. Some triplets of λ, α, and Φ are: λ = √L, α = 1.1L, Φ = 0.9888, and
λ = 0.5√L, α = 4L, Φ = 0.736.
Theorem 2 Let α > L, λ > 0 be such that Φ(α, λ, L) < 1, and let the noise
satisfy ||μ||∞ < ∞. If α = k₁ L, λ = k₂ √L, then

    |y(t) − u′(t)| ≤ b √(L ||μ||∞) = b √(||u″||∞ ||μ||∞),

for some b(k₁, k₂) > 0. The constant b could be determined via computer
simulation.

Theorem 2 means that the steady-state differentiation error is proportional to
√(||μ||∞ ||u″||∞) when the differentiator (3.1) and (3.2) is driven by the
noisy measurement u_noisy(t) = u(t) + μ(t), provided the second derivative of
u(t) is bounded.
3.2 Arbitrary-Order Robust Exact Differentiator

Exact derivatives may be calculated by successive implementation of the robust
exact differentiator (3.1), (3.2) and (3.3), which has finite-time convergence.
However, in the presence of noise, the differentiation error for the (n−1)th
derivative will be proportional to ||μ||∞^(1/2^(n−1)). Thus, the
differentiation accuracy deteriorates rapidly when noise is successively
differentiated. It is proved in [24] that if the (n−1)th derivative of u(t) has
a Lipschitz constant L, the best possible differentiation accuracy for the kth
derivative, where 1 ≤ k ≤ n − 1, is proportional to L^(k/n) ||μ||∞^(1−k/n).
Therefore, a special differentiator is to be designed for each differentiation
order. In [25] A. Levant proposed two similar recursive schemes for designing
an nth order differentiator that is exact in the absence of noise and provides
for an error bound of the order O(L^(k/n) ||μ||∞^(1−k/n)), for 1 ≤ k ≤ n − 1,
in the presence of noise, when the kth derivative of u(t) is estimated. Let an
(n−1)th order differentiator D_{n−1}(u_noisy(·), L) produce outputs
D^k_{n−1}(u_noisy), for 0 ≤ k ≤ n − 1, being estimates of u, u′, ..., u^(n−1),
where the (n−1)th derivative of u has a Lipschitz constant L > 0. Then the nth
order differentiator with outputs z_k = D^k_n(u_noisy), 0 ≤ k ≤ n, being
estimates of u, u′, ..., u^(n), is defined as

    ż₀ = ν,   ν = −λ₀ |z₀ − u_noisy(t)|^(n/(n+1)) sign(z₀ − u_noisy(t)) + z₁,
    z₁ = D⁰_{n−1}(ν(·), L),
      ⋮
    zₙ = D^(n−1)_{n−1}(ν(·), L),   (3.4)

where the base differentiator D₁(u(·), L) is the nonlinear filter

    D₁:  ż = −λ sign(z − u_noisy(t)),   λ > L.   (3.5)

The second-order differentiator resulting from the scheme (3.4) and (3.5) is

    ż₀ = ν,   ν = −λ₀ |z₀ − u_noisy(t)|^(1/2) sign(z₀ − u_noisy(t)) + z₁,
    ż₁ = −λ₁ sign(z₁ − ν) = −λ₁ sign(z₀ − u_noisy(t)).   (3.6)
Another recursive scheme is based on the differentiator (3.6) as the base one.
Two additional states are introduced with this scheme for each consecutive
derivative. To estimate the kth derivative, for 0 ≤ k ≤ n, an observer of order
2n is constructed. Let D̃_{2(n−1)}(u_noisy, L) provide estimates of
u, u′, ..., u^(n−1), where L is the Lipschitz constant of u^(n−1) and
D̃₂(u_noisy, L) coincides with (3.6); then D̃_{2n}(u_noisy, L) is defined as

    ż₀ = ν,   ν = −λ₀ |z₀ − u_noisy(t)|^(n/(n+1)) sign(z₀ − u_noisy(t)) + z₁ + w₀,
    ẇ₀ = −α₀ |z₀ − u_noisy(t)|^((n−1)/(n+1)) sign(z₀ − u_noisy(t)),
    z₁ = D̃⁰_{2(n−1)}(ν(·), L),
      ⋮
    zₙ = D̃^(n−1)_{2(n−1)}(ν(·), L).   (3.7)

The 4th order differentiator that estimates the first and second derivatives,
resulting from the scheme (3.6) and (3.7) [27], is

    ż₀ = ν₀,   ν₀ = −λ₀ |z₀ − u_noisy(t)|^(2/3) sign(z₀ − u_noisy(t)) + z₁ + w₀,
    ẇ₀ = −α₀ |z₀ − u_noisy(t)|^(1/3) sign(z₀ − u_noisy(t)),
    ż₁ = ν₁,   ν₁ = −λ₁ |z₁ − ν₀|^(1/2) sign(z₁ − ν₀) + w₁,
    ẇ₁ = −α₁ sign(z₁ − ν₀),   ż₂ = w₁.   (3.8)
Similarly, an arbitrary order differentiator could be taken as a base one. If
the base differentiator is of order m, the differentiator that estimates the deriva-
tives lower than or equal to n would be of order mn. Whereas A. Levant checked
the schemes (3.6), (3.4) and (3.8), (3.7), the conjecture is that all such schemes
produce working differentiators, provided suitable parameter choice.
56
Differentiator (3.6), (3.4) could be written as

$$\dot z_0 = \nu_0, \qquad \nu_0 = -\lambda_0\, |z_0 - u_{noisy}(t)|^{n/(n+1)}\, \mathrm{sign}(z_0 - u_{noisy}(t)) + z_1,$$
$$\dot z_1 = \nu_1, \qquad \nu_1 = -\lambda_1\, |z_1 - \nu_0|^{(n-1)/n}\, \mathrm{sign}(z_1 - \nu_0) + z_2,$$
$$\vdots \tag{3.9}$$
$$\dot z_{n-1} = \nu_{n-1}, \qquad \nu_{n-1} = -\lambda_{n-1}\, |z_{n-1} - \nu_{n-2}|^{1/2}\, \mathrm{sign}(z_{n-1} - \nu_{n-2}) + z_n,$$
$$\dot z_n = -\lambda_n\, \mathrm{sign}(z_n - \nu_{n-1}),$$

or, eliminating the variables $\nu_0, \nu_1, \dots, \nu_{n-1}$, as

$$\dot z_0 = -k_0\, |z_0 - u_{noisy}(t)|^{n/(n+1)}\, \mathrm{sign}(z_0 - u_{noisy}(t)) + z_1,$$
$$\dot z_i = -k_i\, |z_0 - u_{noisy}(t)|^{(n-i)/(n+1)}\, \mathrm{sign}(z_0 - u_{noisy}(t)) + z_{i+1}, \qquad i = 1, \dots, n-1, \tag{3.10}$$
$$\dot z_n = -k_n\, \mathrm{sign}(z_0 - u_{noisy}(t)),$$

where $k_0, k_1, \dots, k_n$ are calculated on the basis of $\lambda_0, \lambda_1, \dots, \lambda_n$.
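The cascaded form (3.9) can be coded generically. The sketch below is ours, not the thesis code; it uses the $L$-scaled gains that appear later in (3.12) and (3.15), i.e., stage $i$ uses $\lambda_i = c_i\, L^{1/(n-i+1)}$ for a supplied coefficient list $c_0, \dots, c_n$, and integrates with forward Euler.

```python
import math

def hosm_diff(coeffs, L, u, t_end, dt):
    """Arbitrary-order differentiator in the recursive form (3.9).

    coeffs: gain coefficients c_0..c_n (e.g. [3, 1.5, 1.1] for n = 2,
    matching (3.15)); stage i uses the gain c_i * L**(1/(n-i+1)).
    Returns (ts, hist) where hist[j] is the state z at time ts[j]
    and z_i estimates the i-th derivative of u.
    """
    sgn = lambda e: (e > 0) - (e < 0)
    n = len(coeffs) - 1
    z = [0.0] * (n + 1)
    ts, hist = [], []
    for j in range(int(t_end / dt)):
        t = j * dt
        dz = [0.0] * (n + 1)
        target = u(t)                        # z_0 is compared with the measurement
        for i in range(n):
            e = z[i] - target
            p = (n - i) / (n - i + 1.0)      # exponents n/(n+1), ..., 1/2
            nu = -coeffs[i] * L ** (1.0 / (n - i + 1)) * abs(e) ** p * sgn(e) + z[i + 1]
            dz[i] = nu
            target = nu                      # the next stage tracks nu_i
        dz[n] = -coeffs[n] * L * sgn(z[n] - target)
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
        ts.append(t); hist.append(list(z))
    return ts, hist
```

For $n = 2$, $u = \sin t$ ($L = 1$) and the gains of (3.15), $z_1$ and $z_2$ converge to $\cos t$ and $-\sin t$ after a transient; the residual error is dominated by the Euler discretization.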
Theorem 3. In the absence of input noise ($\mu(t) \equiv 0$), if the parameters $\lambda_i$, $\alpha_i$ are properly chosen, the following equalities hold after a finite-time transient process:
$$z_0 = u(t); \qquad z_i = \nu_{i-1} = u^{(i)}(t), \quad i = 1, \dots, n.$$

Theorem 4. Let the input noise satisfy $\|\mu\|_\infty < \infty$. Then the following inequalities hold after a finite transient time for some $b_i(\lambda_i, \alpha_i) > 0$:
$$|z_i(t) - u^{(i)}(t)| \le b_i\, \|\mu\|_\infty^{(n-i+1)/(n+1)}, \quad i = 0, \dots, n.$$

The parameters $\lambda_i$, $\alpha_i$ are to be chosen recursively such that $\lambda_1, \alpha_1, \dots, \lambda_n, \alpha_n$ provide for the convergence of the differentiator producing derivatives up to the $(n-1)$th with Lipschitz constant $L$, and $\lambda_0$, $\alpha_0$ are sufficiently large ($\alpha_0$ is chosen first). The best way to choose them is by computer simulation.
Proposition 1. Let the parameters $\alpha_{0i}, \lambda_{0i}$, $i = 0, \dots, n$, of differentiators (3.6), (3.4) or (3.8), (3.7) provide for exact $n$th-order differentiation with $L = 1$ in the absence of measurement noise. Then the parameters $\alpha_i = \alpha_{0i} L^{2/(n-i+1)}$, $\lambda_i = \lambda_{0i} L^{1/(n-i+1)}$ are valid for any $L > 0$ and provide for the accuracy

$$|z_i(t) - u^{(i)}(t)| \le b_i\, L^{i/(n+1)}\, \|\mu\|_\infty^{(n-i+1)/(n+1)}, \quad i = 0, \dots, n, \tag{3.11}$$

where $b_i(\lambda_{0i}, \alpha_{0i}) \ge 1$.

Proposition 1 allows for tabulating the parameters $\alpha_{0i}, \lambda_{0i}$, $i = 0, \dots, n$, which enables convenient design of the differentiators for arbitrary $L > 0$. A. Levant provides the values $\lambda_{00}, \lambda_{01}, \dots, \lambda_{05}$ for the scheme (3.6), (3.4). The differentiator with the parameters $\lambda_{00}, \lambda_{01}, \dots, \lambda_{05}$ provides estimates of $u, u', \dots, u^{(5)}$ for $u$ such that $u^{(5)}$ has Lipschitz constant $L = 1$. For $k < n$, the parameters of a $k$th-order differentiator coincide with the last $k$ parameters of an $n$th-order differentiator; that is, $\lambda_{0k} = \lambda_{0n}, \lambda_{0k-1} = \lambda_{0n-1}, \dots, \lambda_{00} = \lambda_{0n-k}$.
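The rescaling rule of Proposition 1 is mechanical, so a small helper makes it explicit. The code below is ours, not the thesis code, and the tabulated $L = 1$ values fed to it in the usage example are illustrative placeholders, not Levant's published table.

```python
def scale_params(alpha0, lam0, L):
    """Rescale tabulated (L = 1) differentiator parameters per Proposition 1.

    alpha0, lam0: lists of alpha_0i, lambda_0i for i = 0..n (valid for L = 1).
    Returns (alpha, lam) with alpha_i = alpha_0i * L**(2/(n-i+1)) and
    lambda_i = lambda_0i * L**(1/(n-i+1)).
    """
    n = len(lam0) - 1
    alpha = [a * L ** (2.0 / (n - i + 1)) for i, a in enumerate(alpha0)]
    lam = [l * L ** (1.0 / (n - i + 1)) for i, l in enumerate(lam0)]
    return alpha, lam
```

For example, with $n = 2$ and $L = 8$, the first gain scales by $8^{1/3} = 2$ and the last by $L$ itself.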
3.3 Comparison with High-Gain Observers
In Table 3.1 we list the features of high-gain observers and robust exact differentiators side by side. To design a high-gain observer (2.2) that produces outputs approximating the $k$th, $1 \le k \le n-1$, derivatives of an input $u(\cdot)$ under ideal conditions with no measurement noise, the $n$th derivative of $u(\cdot)$, $u^{(n)}(\cdot)$, needs to be bounded. Since the differentiation error is ultimately bounded by (2.9),

$$\left( \delta + \varepsilon^{\,n-k+1}\, \|u^{(n)}\|_\infty \right) \int_0^\infty \bigl| \{\exp(A\tau) B\}_k \bigr|\, d\tau,$$

for $1 \le k \le n-1$ and $\delta$ arbitrarily small, to ensure that the error is within a given tolerance one needs to know $\|u^{(n)}\|_\infty$. To ensure the convergence of the outputs of the robust exact differentiator (3.6), (3.4) to $u(\cdot), u'(\cdot), \dots, u^{(n-1)}(\cdot)$, knowledge of the Lipschitz constant of the $(n-1)$th derivative of the input $u(\cdot)$ is necessary. If $u^{(n-1)}$ is differentiable and $u^{(n)}$ is bounded, the Lipschitz constant of the $(n-1)$th derivative of $u(\cdot)$ equals $\|u^{(n)}\|_\infty$.
In the presence of noise, if the ratio $\|u^{(n)}\|_\infty / \|\mu\|_\infty$ is known, it is possible to choose the parameter $\varepsilon$ of the high-gain observer (2.2) such that the differentiation error is of the same order as the differentiation error of the robust exact differentiator (3.6) and (3.8), which is the smallest possible order, as proved by A. Levant [24]. The bound on the differentiation error in the presence of noise for the high-gain observer is given by (2.12). We provide the constants $P_k$ and $Q_k$ for high-gain observers of order less than or equal to 10, tabulated in the Appendix, which makes computing the error bound easy provided $\|u^{(n)}\|_\infty$ and $\|\mu\|_\infty$ are known. For the sliding-mode observer, the differentiation error is bounded by (3.11). The constants $b_i(\lambda_{0i}, \alpha_{0i})$ could be estimated via simulation for a particular choice of the parameters $\lambda_{0i}$, $\alpha_{0i}$.

In the presence of noise, when an $n$th-order high-gain observer is used to differentiate a signal $u(\cdot)$, the ratio between the bound on the $n$th derivative of $u(\cdot)$ and the bound on the magnitude of the noise needs to be known in order to ensure that the bound on the differentiation error is of the smallest possible order, as given by (2.12).

In the absence of noise, reducing the high-gain observer parameter $\varepsilon$ in (2.2) will ensure the convergence of the outputs of the observer to the vicinity of the derivatives of its input if the $n$th derivative of the input is bounded, regardless of the size of the bound $\|u^{(n)}\|_\infty$. To ensure the convergence of the outputs of the sliding-mode observer to the derivatives of the input, it is necessary to know the Lipschitz constant of the highest estimated derivative (which coincides with $\|u^{(n)}\|_\infty$ when $u^{(n-1)}$ has a bounded derivative) prior to the design. The design of the sliding-mode differentiator (3.6) and (3.8) is independent of the bound on the noise $\|\mu\|_\infty$, and the error bound is of the smallest possible order, as in (3.11).
Table 3.1: A comparative summary of the features of high-gain observers versus robust exact differentiators developed by A. Levant

Complexity
  High-gain observers: linear; we tabulated the constants necessary to make the design straightforward.
  Robust exact differentiators: nonlinear; a system of differential equations with a discontinuous right-hand side.

Accuracy in the absence of noise
  High-gain observers: the observer could be designed such that the differentiation error is arbitrarily small after finite time.
  Robust exact differentiators: the differentiation error is zero after a transient time.

Accuracy in the presence of noise
  High-gain observers: the differentiation error is proportional to $\|u^{(n)}\|_\infty^{k/n}\, \|\mu\|_\infty^{(n-k)/n}$, where $u$ is the signal to be differentiated, $u^{(n)}$ is the $n$th derivative of $u$, $n$ is the order of the observer, $k$ is the estimated derivative ($1 \le k \le n-1$), and $\mu$ is uniformly bounded noise.
  Robust exact differentiators: the differentiation error is proportional to $L^{k/n}\, \|\mu\|_\infty^{(n-k)/n}$, where $u$ is the signal to be differentiated, $L$ is the Lipschitz constant of the $(n-1)$th derivative, $k$ is the estimated derivative ($0 \le k \le n-1$), and $\mu$ is uniformly bounded noise.

Computability of the bound
  High-gain observers: we provide an algorithm to compute the bound given $\|u^{(n)}\|_\infty$ and $\|\mu\|_\infty$.
  Robust exact differentiators: the bound could be estimated via simulation.

Prior knowledge needed for the design in the absence of noise
  High-gain observers: the quantity $\|u^{(n)}\|_\infty$.
  Robust exact differentiators: the Lipschitz constant of the highest estimated derivative.

Prior knowledge needed for the design in the presence of noise
  High-gain observers: the ratio $\|u^{(n)}\|_\infty / \|\mu\|_\infty$.
  Robust exact differentiators: the Lipschitz constant of the highest estimated derivative.
3.4 Computer Simulation

In this section we compare robust exact differentiators developed by A. Levant and high-gain observers via simulation. The simulation was carried out using MATLAB and SIMULINK. The noise used throughout the simulation is always a rescaled version of the noise shown in Figure 2.7. First we examine the error while differentiating sinusoids of various frequencies contaminated with white noise of varied magnitude, using a high-gain observer and a higher-order sliding-mode robust exact differentiator. We then compare the performance of higher-order sliding-mode differentiators to high-gain observers on more complex signals. The high-gain parameter $\varepsilon$ is always chosen such that the error bound (2.12) is minimized, as described in Chapters 2 and 3.
To differentiate sinusoids we used the 6th-order sliding-mode robust exact differentiator, estimating up to the 5th derivative, given by the equations (3.12):

$$\dot z_0 = \nu_0, \qquad \nu_0 = -12 L^{1/6}\, |z_0 - u_{noisy}|^{5/6}\, \mathrm{sign}(z_0 - u_{noisy}) + z_1,$$
$$\dot z_1 = \nu_1, \qquad \nu_1 = -8 L^{1/5}\, |z_1 - \nu_0|^{4/5}\, \mathrm{sign}(z_1 - \nu_0) + z_2,$$
$$\dot z_2 = \nu_2, \qquad \nu_2 = -5 L^{1/4}\, |z_2 - \nu_1|^{3/4}\, \mathrm{sign}(z_2 - \nu_1) + z_3, \tag{3.12}$$
$$\dot z_3 = \nu_3, \qquad \nu_3 = -3 L^{1/3}\, |z_3 - \nu_2|^{2/3}\, \mathrm{sign}(z_3 - \nu_2) + z_4,$$
$$\dot z_4 = \nu_4, \qquad \nu_4 = -1.5 L^{1/2}\, |z_4 - \nu_3|^{1/2}\, \mathrm{sign}(z_4 - \nu_3) + z_5,$$
$$\dot z_5 = -1.1 L\, \mathrm{sign}(z_5 - \nu_4),$$

where $u_{noisy}(t) = A\sin(\omega t) + \mu(t)$, $\mu(t)$ is white noise, and $L = A\omega^6$ is the Lipschitz constant of the 5th derivative of the differentiated sinusoid of amplitude
$A$ and frequency $\omega$. The 6th-order high-gain observer is given by (3.13), with the gains chosen to place all observer eigenvalues at $-1/\varepsilon$:

$$\dot{\hat x}_1 = -\frac{6}{\varepsilon}(\hat x_1 - u_{noisy}) + \hat x_2,$$
$$\dot{\hat x}_2 = -\frac{15}{\varepsilon^2}(\hat x_1 - u_{noisy}) + \hat x_3,$$
$$\dot{\hat x}_3 = -\frac{20}{\varepsilon^3}(\hat x_1 - u_{noisy}) + \hat x_4, \tag{3.13}$$
$$\dot{\hat x}_4 = -\frac{15}{\varepsilon^4}(\hat x_1 - u_{noisy}) + \hat x_5,$$
$$\dot{\hat x}_5 = -\frac{6}{\varepsilon^5}(\hat x_1 - u_{noisy}) + \hat x_6,$$
$$\dot{\hat x}_6 = -\frac{1}{\varepsilon^6}(\hat x_1 - u_{noisy}).$$
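The structure of (3.13) is easiest to see in a low-order sketch. The Python fragment below is ours, not the thesis code: it implements the $n = 2$ analogue with both eigenvalues at $-1/\varepsilon$ (gains $a_1 = 2$, $a_2 = 1$), so $\hat x_1$ tracks $u$ and $\hat x_2$ tracks $\dot u$.

```python
import math

def hgo2(u, eps, t_end, dt):
    """Second-order high-gain observer with a double eigenvalue at -1/eps.

    x1 estimates u, x2 estimates du/dt (cf. the 6th-order version (3.13)).
    Returns (ts, x2s), the time grid and the derivative estimates.
    """
    x1, x2 = 0.0, 0.0
    ts, x2s = [], []
    for j in range(int(t_end / dt)):
        t = j * dt
        e = x1 - u(t)                       # innovation x1 - u_noisy(t)
        dx1 = -(2.0 / eps) * e + x2
        dx2 = -(1.0 / eps ** 2) * e
        x1 += dt * dx1
        x2 += dt * dx2
        ts.append(t); x2s.append(x2)
    return ts, x2s
```

With $\varepsilon = 0.05$ and $u = \sin t$, the derivative estimate settles to within roughly $O(\varepsilon)$ of $\cos t$; shrinking $\varepsilon$ tightens this noise-free error but amplifies measurement noise, which is exactly the trade-off quantified by (2.12).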
Figure 3.1 displays the sliding-mode and high-gain estimates of the first and second derivatives of a sinusoid contaminated by white noise with magnitude no higher than 0.01. Tables 3.2-3.9 display the estimation error for the higher-order sliding-mode differentiator and the high-gain observer. The error was computed as $\max_{t > T} |u^{(k)}(t) - \hat x_{k+1}(t)|$ (or $|u^{(k)}(t) - z_k(t)|$), where $t$ is time and $T = 4$ seconds is the transient time. The performance of the high-gain observer and the higher-order sliding-mode observer is comparable.
Table 3.2: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(t)$ and $\|\mu\|_\infty = 0.01$.

Derivative | High-Gain Observer | HOSM Observer
1st | 4.11 % | 3.15 %
2nd | 25.10 % | 27.45 %
3rd | 81.62 % | 103.70 %
4th | 151.59 % | 150.96 %
5th | 151.03 % | 115.47 %
Observe from Tables 3.2-3.6 that the estimates of derivatives higher than the second are useless even at the lower noise level of $\|\mu\|_\infty = 0.01$. Observe too that the high-gain observer outperforms the sliding-mode observer for high
Figure 3.1: Sliding-mode/high-gain estimate of the first and second derivative of a noisy sinusoid. [Plot panels: high-gain and sliding-mode estimates of the 1st and 2nd derivatives and the corresponding estimation errors, over 0-20 seconds.]
frequency sinusoids, whereas the sliding-mode observer outperforms the high-gain observer for low-frequency sinusoids. Tables 3.7-3.9 indicate that it is pointless to estimate derivatives beyond the first for the higher noise level of $\|\mu\|_\infty = 0.1$. For both the high-gain observer and the sliding-mode observer, part of the error is due to a phase shift of the estimate with respect to the actual derivative, as Figure 3.2 shows. Note that even though in Tables 3.7-3.9 the magnitude of the noise is 10 times the magnitude of the noise in Tables 3.2-3.6, the error bound (2.15) for corresponding signals increases only by a factor of $10^{(6-k)/6}$, where $k$ is the order of the derivative. For the fifth derivative, $k = 5$ and $10^{1/6} = 1.46$. This means that we may reasonably expect the error for the 5th derivative estimated with the high-gain observer in Table 3.9, where the magnitude of the noise is 0.1, to be
Table 3.3: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(0.5t)$ and $\|\mu\|_\infty = 0.01$.

Derivative | High-Gain Observer | HOSM Observer
1st | 5.98 % | 2.6 %
2nd | 36.7 % | 23.6 %
3rd | 116.88 % | 94.4 %
4th | 198.24 % | 147.2 %
5th | 150.48 % | 114.7 %
Table 3.4: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(5t)$ and $\|\mu\|_\infty = 0.01$.

Derivative | High-Gain Observer | HOSM Observer
1st | 5.2 % | 5.05 %
2nd | 28.4 % | 38.06 %
3rd | 88.6 % | 121.63 %
4th | 156.2 % | 176.6 %
5th | 153 % | 122.5 %
1.46 times the error for the 5th derivative estimated with the high-gain observer in Table 3.5, where the noise magnitude is 0.01; however, the errors are comparable. In this simulation we took the high-gain parameter $\varepsilon$ as $\varepsilon = \frac{1}{5}\sum_{k=1}^{5} \varepsilon_k^{opt}$. Perhaps this causes the error to be closer to the error bound in the case of noise $\mu$ with $\|\mu\|_\infty = 0.01$.

Next, we compare the higher-order sliding-mode and high-gain observers on the signal (3.14),

$$u = y - [10\sin(0.05x) + 5], \tag{3.14}$$
Table 3.5: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(10t)$ and $\|\mu\|_\infty = 0.01$.

Derivative | High-Gain Observer | HOSM Observer
1st | 5.6 % | 8.3 %
2nd | 30.3 % | 51.76 %
3rd | 93.18 % | 139.65 %
4th | 160.2 % | 180.9 %
5th | 155.95 % | 124.04 %
Table 3.6: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(50t)$ and $\|\mu\|_\infty = 0.01$.

Derivative | High-Gain Observer | HOSM Observer
1st | 8.4 % | 13.63 %
2nd | 40.4 % | 74.3 %
3rd | 110.6 % | 188.8 %
4th | 178.1 % | 222.4 %
5th | 162.7 % | 142.13 %
where $x$ and $y$ are the states of the system

$$\dot x = V \cos\varphi, \qquad \dot\varphi = \frac{V}{l} \tan\theta, \qquad \dot\theta = c, \qquad \dot y = V \sin\varphi,$$
$$c = -20\, \mathrm{sign}\Bigl\{ \ddot u + 3\bigl(\ddot u^{\,6} + \dot u^{\,4} + |u|^3\bigr)^{1/12}\, \mathrm{sign}\Bigl[ \dot u + \bigl(\dot u^{\,4} + |u|^3\bigr)^{1/6}\, \mathrm{sign}\bigl(u + 0.5\,|u|^{3/4}\, \mathrm{sign}(u)\bigr) \Bigr] \Bigr\}.$$

We differentiated the aforementioned signal, $u_{noisy}(t) = u(t) + \mu(t)$, using a 3rd-order high-gain observer with multiple real eigenvalues and a 3rd-order sliding-
Table 3.7: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(0.1t)$ and $\|\mu\|_\infty = 0.1$.

Derivative | High-Gain Observer | HOSM Observer
1st | 66.9 % | 17.5 %
2nd | 269 % | 99 %
3rd | 340 % | 140 %
4th | 317 % | 147.05 %
5th | 857 % | 814 %
Table 3.8: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(t)$ and $\|\mu\|_\infty = 0.1$.

Derivative | High-Gain Observer | HOSM Observer
1st | 20.48 % | 14.45 %
2nd | 78.62 % | 68.67 %
3rd | 173.10 % | 165.65 %
4th | 205.97 % | 163.47 %
5th | 143.29 % | 109.05 %
mode observer given by equations (3.15) [25],

$$\dot z_0 = \nu_0, \qquad \nu_0 = -3 L^{1/3}\, |z_0 - u|^{2/3}\, \mathrm{sign}(z_0 - u) + z_1,$$
$$\dot z_1 = \nu_1, \qquad \nu_1 = -1.5 L^{1/2}\, |z_1 - \nu_0|^{1/2}\, \mathrm{sign}(z_1 - \nu_0) + z_2, \tag{3.15}$$
$$\dot z_2 = -1.1 L\, \mathrm{sign}(z_2 - \nu_1),$$

where $L = \|u^{(3)}\|_\infty = 6$. We added white noise $\mu(t)$ such that $\|\mu\|_\infty = 0.012$. For the 3rd-order high-gain observer with multiple real eigenvalues, $\varepsilon_1^{opt} = 0.0836$ and $\varepsilon_2^{opt} = 0.0939$. In the simulation we set $\varepsilon = (\varepsilon_1^{opt} + \varepsilon_2^{opt})/2 = 0.0887$. Figure 3.3 displays the estimates and estimation errors of the first and second derivatives of $u(t)$ with the high-gain observer and the sliding-mode observer. The steady-state error
Table 3.9: The percentage differentiation error for a 6th-order high-gain observer and a 6th-order HOSM differentiator. The differentiated signal is $\sin(10t)$ and $\|\mu\|_\infty = 0.1$.

Derivative | High-Gain Observer | HOSM Observer
1st | 28.25 % | 42.59 %
2nd | 96.28 % | 149.53 %
3rd | 194.48 % | 265.36 %
4th | 229.02 % | 212.32 %
5th | 151.70 % | 121.63 %
for the first derivative after 3 seconds is 0.1361 for the high-gain observer and 0.1408 for the sliding-mode observer. For the second derivative, the steady-state error after 3 seconds is 1.4655 for the high-gain observer and 1.2497 for the sliding-mode observer. However, for a 3rd-order high-gain observer with $\varepsilon = 0.05$, the steady-state error after 3 seconds is 0.0714 in the estimate of the first derivative and 1.0051 in the estimate of the second derivative.

We now differentiate the signal $u_1(t)$, the state $x_1(t)$ of the 3rd-order nonlinear system²

$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \\ \dot x_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} b(x_1, x_2, x_3), \qquad u_1 = x_1,$$
$$b(x_1, x_2, x_3) = a(x_1, x_2, x_3)\, g_s(x_1, x_2, x_3) + \bigl(1 - a(x_1, x_2, x_3)\bigr)\, g_u(x_1, x_2, x_3),$$
$$a(x_1, x_2, x_3) = \frac{x_1^2 + x_2^2 + x_3^2}{1 + x_1^2 + x_2^2 + x_3^2}, \tag{3.16}$$
$$g_s(x_1, x_2, x_3) = -54 x_1 - 36 x_2 - 9 x_3, \qquad g_u(x_1, x_2, x_3) = 54 x_1 - 36 x_2 + 9 x_3.$$

²System (3.16) was used in [10] to test a numerical differentiator.
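The flavor of this experiment can be reproduced with a short script. The sketch below is ours, not the thesis code: the initial condition and step sizes are our assumptions, and for brevity only the 3rd-order high-gain observer (triple eigenvalue at $-1/\varepsilon$, gains $3/\varepsilon$, $3/\varepsilon^2$, $1/\varepsilon^3$) is simulated. The check at the end verifies the no-noise error bound of Chapter 2 for the first derivative, $P_2\,\varepsilon^2\,\|u_1^{(3)}\|_\infty$ with $P_2 = 3$.

```python
def simulate(eps=0.064, dt=1e-4, t_end=15.0):
    """Simulate system (3.16) and estimate u1' = x2 with a 3rd-order HGO.

    Returns (worst, maxb): the largest first-derivative estimation error
    after t = 5 s, and the largest |u1'''| = |b| seen along the trajectory.
    """
    x = [0.0, 1.0, 0.0]          # assumed initial condition (not from the thesis)
    xh = [0.0, 0.0, 0.0]         # observer state
    worst, maxb = 0.0, 0.0
    for j in range(int(t_end / dt)):
        t = j * dt
        # plant (3.16): a blend of a stable and an unstable linear vector field
        s = x[0] ** 2 + x[1] ** 2 + x[2] ** 2
        a = s / (1.0 + s)
        gs = -54.0 * x[0] - 36.0 * x[1] - 9.0 * x[2]
        gu = 54.0 * x[0] - 36.0 * x[1] + 9.0 * x[2]
        b = a * gs + (1.0 - a) * gu
        maxb = max(maxb, abs(b))
        # high-gain observer driven by u1 = x1 (noise-free here)
        e = xh[0] - x[0]
        dxh = [xh[1] - 3.0 / eps * e,
               xh[2] - 3.0 / eps ** 2 * e,
               -1.0 / eps ** 3 * e]
        x = [x[0] + dt * x[1], x[1] + dt * x[2], x[2] + dt * b]
        xh = [v + dt * d for v, d in zip(xh, dxh)]
        if t > 5.0:
            worst = max(worst, abs(xh[1] - x[1]))
    return worst, maxb
```

With these (assumed) settings the trajectory stays bounded and the observed error respects the theoretical bound with room to spare, consistent with the numbers reported above.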
Figure 3.2: The estimate of the derivatives of $\sin(10t) + \mu(t)$ with the high-gain/sliding-mode observer. The noise magnitude is $\|\mu\|_\infty = 0.1$. [Plot panels: high-gain and sliding-mode estimates of the 1st and 2nd derivatives and the corresponding estimation errors, over 0-6 seconds.]
The differentiator (3.15) and a 3rd-order high-gain observer with multiple real eigenvalues were used. It was observed from simulation that the 3rd derivative of $u_1$ is bounded by 16; the parameter $\varepsilon$ was set as $\varepsilon = 0.064$. The noise is bounded by 0.012. Figure 3.4 shows the estimates of the first and second derivatives with the sliding-mode observer and the high-gain observer. The estimation error for the first derivative is 0.1 for the high-gain observer and 0.12 for the sliding-mode observer, whereas for the second derivative the estimation error is 1.7 for the high-gain observer and 1.73 for the sliding-mode observer.
Figure 3.3: The estimate of the derivatives of (3.14) with the high-gain/sliding-mode observer. The solid line depicts the actual derivative, whereas the dashed line is the estimate. [Plot panels: high-gain and sliding-mode estimates of the 1st and 2nd derivatives and the corresponding estimation errors, over 0-20 seconds.]
Figure 3.4: Sliding-mode/high-gain estimate of the derivatives of $u_1$ given by (3.16). The solid line depicts the actual derivative, whereas the dashed line is the estimate. [Plot panels: high-gain and sliding-mode estimates of the 1st and 2nd derivatives and the corresponding estimation errors, over 0-15 seconds.]
Chapter 4
Conclusions
The theory of high-gain observers is an asymptotic theory. Ideally, when no measurement noise is present, the estimation error shrinks to zero as the gain of the observer grows to infinity. When noise is present, the high gain of the observer amplifies the noise. Hence, in the presence of noise there is a trade-off between the error in the absence of noise and the amplification of the noise. The trade-off is quantified through the ratio of a uniform bound on the noise and a uniform bound on the $n$th derivative of the differentiated signal, where $n$ is the order of the observer. Due to this trade-off, when the high-gain observer is used for differentiation, extra care must be taken when designing the gain. The gain should be neither too large nor too small. We find that the gain of an $n$th-order observer should be of the order

$$O\!\left( \left( \frac{\|u^{(n)}\|_\infty}{\|\mu\|_\infty} \right)^{1/n} \right).$$
Inspired by Levant's results in [25, 24, 26], where Levant showed that a numerical differentiator cannot provide for accuracy better than $O\bigl(L^{k/n}\, \|\mu\|_\infty^{(n-k)/n}\bigr)$ and developed differentiators that provide this accuracy, we showed that the high-gain observer can provide the same accuracy if the gain is properly chosen. First, we showed that the estimation error when the high-gain observer is used as a differentiator is bounded by

$$b_k(\varepsilon) = \delta + P_{k+1}\, \|u^{(n)}\|_\infty\, \varepsilon^{n-k} + \frac{Q_{k+1}\, \|\mu\|_\infty}{\varepsilon^k},$$

where $k$ is the estimated derivative, $n$ is the order of the observer, $P_k$, $Q_k$ are constants dependent on the order and eigenvalues of the observer, and $\delta$ could be made arbitrarily small. The effect of the order and the choice of eigenvalues on the estimation error is analyzed in Chapter 2. We provide the constants $P_k$, $Q_k$, $1 \le k \le n$ and $n = 1, 2, \dots, 10$, for the high-gain observer with multiple real eigenvalues in Appendix 1 of this thesis.
Next, we choose

$$\varepsilon_k^{opt} = \sqrt[n]{\frac{k}{n-k} \cdot \frac{Q_{k+1}\, \|\mu\|_\infty}{P_{k+1}\, \|u^{(n)}\|_\infty}},$$

which minimizes the error bound $b_k(\varepsilon)$. Note that the choice of $\varepsilon$ depends on the derivative being estimated. However, we observed during simulation that $\varepsilon_k^{opt}$ does not vary significantly with $k$, nor does $b_k(\varepsilon)$ vary significantly with $\varepsilon$, which allowed us to use a single value of $\varepsilon$, the average of the $\varepsilon_k^{opt}$, throughout the simulation. It is important to realize that this choice of $\varepsilon$ minimizes the error bound on the estimation error for a class of signals $u$ with the same $\|u^{(n)}\|_\infty$, rather than the error for the particular signal at hand. In Lemma 1 we show that there is a signal for which the estimation error comes arbitrarily close to the bound $b_k(\varepsilon)$. Simulation examples are presented to verify and clarify the analytical results.
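The bound $b_k(\varepsilon)$ and its minimizer are straightforward to evaluate numerically. The Python sketch below is ours, not part of the thesis; `M` stands for $\|u^{(n)}\|_\infty$, `mu` for $\|\mu\|_\infty$, and the dictionaries `P`, `Q` used in the check are filled from the $n = 6$ row of the appendix tables.

```python
def b_k(eps, k, n, P, Q, M, mu):
    """Error bound b_k(eps) for the k-th derivative (the delta term dropped)."""
    return P[k + 1] * M * eps ** (n - k) + Q[k + 1] * mu / eps ** k

def eps_opt(k, n, P, Q, M, mu):
    """Minimizer of b_k over eps, from setting db_k/deps = 0 in closed form."""
    return (k / (n - k) * Q[k + 1] * mu / (P[k + 1] * M)) ** (1.0 / n)
```

Since $b_k$ is the sum of an increasing and a decreasing power of $\varepsilon$, it has a unique interior minimum, and `eps_opt` returns it exactly.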
In Chapter 3 we compare high-gain observers to Levant's sliding-mode differentiators. Table 3.1 is a comparative summary of the features of high-gain observers and sliding-mode differentiators. Simulation shows that in the presence of noise the performance of high-gain observers and sliding-mode observers is comparable. It appears that high-gain observers are better for high-frequency signals, whereas sliding-mode observers are better for low-frequency signals. The signal-to-noise ratio¹ affects the usefulness of the estimated derivatives. In our examples, the error for estimates of the 3rd and higher derivatives was over 50%, making the estimates useless for both of the observers. We also observed from simulation examples that a large portion of the error is due to a phase shift of the estimate from the actual derivative. Also, being a low-pass filter, the high-gain observer diminishes the high-frequency components of the noise.

Results in this thesis for high-gain observers, as well as results in [25, 24, 26] for sliding-mode differentiators, outline the limitations on differentiation in the presence of noise. They also provide design guidelines to achieve the best results within these limitations. They allow us to quantify the effect of noise on the estimation error without placing restrictive assumptions on the noise or on the differentiated signal. The only assumption on the noise is that it is bounded and the bound is known. For the differentiated signal, we require that the bound on the derivative consecutive to the one we wish to estimate is known. However, large measurement noise remains a problem for on-line, real-time differentiation when estimating higher derivatives, as the simulation examples in Chapter 3 show.

¹By signal-to-noise ratio, we mean the ratio of the infinity norm of the signal to the infinity norm of the noise.
Appendix — Constants
Table 1: The constants $P_k$ for $2 \le k \le n$, $2 \le n \le 10$, for multiple real eigenvalues

n\k |  2 |  3 |   4 |   5 |   6 |   7 |   8 |  9 | 10
 2  |  2 |  0 |   0 |   0 |   0 |   0 |   0 |  0 |  0
 3  |  3 |  3 |   0 |   0 |   0 |   0 |   0 |  0 |  0
 4  |  4 |  6 |   4 |   0 |   0 |   0 |   0 |  0 |  0
 5  |  5 | 10 |  10 |   5 |   0 |   0 |   0 |  0 |  0
 6  |  6 | 15 |  20 |  15 |   6 |   0 |   0 |  0 |  0
 7  |  7 | 21 |  35 |  35 |  21 |   7 |   0 |  0 |  0
 8  |  8 | 28 |  56 |  70 |  56 |  28 |   8 |  0 |  0
 9  |  9 | 36 |  84 | 126 | 126 |  84 |  36 |  9 |  0
10  | 10 | 45 | 120 | 210 | 252 | 210 | 120 | 45 | 10
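Every nonzero entry of Table 1 coincides with a binomial coefficient, $P_k = \binom{n}{k-1}$, so the table can be regenerated programmatically. A quick consistency check (our code, not the thesis's):

```python
from math import comb

def P_table(n_max=10):
    """Regenerate Table 1: P_k = C(n, k-1) for multiple real eigenvalues."""
    return {(n, k): comb(n, k - 1)
            for n in range(2, n_max + 1)
            for k in range(2, n + 1)}
```

For example, the entry at row $n = 10$, column $k = 6$ is $\binom{10}{5} = 252$, matching the table.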
Table 2: The constants $Q_k$ for $2 \le k \le n$, $2 \le n \le 10$, for multiple real eigenvalues

n\k |      2 |      3 |      4 |      5 |      6 |      7 |      8 |     9 |    10
 2  |  0.735 |      0 |      0 |      0 |      0 |      0 |      0 |     0 |     0
 3  |  1.750 |  0.620 |      0 |      0 |      0 |      0 |      0 |     0 |     0
 4  |  2.990 |  2.167 |  0.559 |      0 |      0 |      0 |      0 |     0 |     0
 5  |  4.410 |  4.881 |  2.553 |  0.521 |      0 |      0 |      0 |     0 |     0
 6  |  5.979 |  8.952 |  7.094 |  2.917 |  0.493 |      0 |      0 |     0 |     0
 7  |  7.678 | 14.539 | 15.494 |  9.615 |  3.265 |  0.471 |      0 |     0 |     0
 8  |  9.493 | 21.785 | 29.229 | 24.316 | 12.435 |  3.600 |  0.454 |     0 |     0
 9  | 11.411 | 30.814 | 49.925 | 52.162 | 35.694 | 15.544 |  3.925 | 0.439 |     0
10  | 13.424 | 41.741 | 79.338 | 99.893 | 85.722 | 49.897 | 18.934 | 4.240 | 0.426
Bibliography
[1] M. A. Aizerman and E. S. Pyatnitskii, "Foundations of a theory of discontinuous systems: Part I," Automation and Remote Control, Vol. 35, pp. 1066-1079, 1974.

[2] M. A. Aizerman and E. S. Pyatnitskii, "Foundations of a theory of discontinuous systems: Part II," ibid., Vol. 35, pp. 1242-1262, 1974.

[3] J. H. Ahrens and H. K. Khalil, "Output feedback control using high-gain observers in the presence of measurement noise," IEEE Trans. on AC, Vol. 44, pp. 1672-1687, 1999.

[4] B. Aloliwi, H. K. Khalil, and E. G. Strangas, "Robust speed control of induction motors," in Proc. American Control Conference, Albuquerque, NM, 1997. WP1624.

[5] B. Aloliwi, H. K. Khalil, E. G. Strangas, L. Laubinger, and J. Miller, "Robust tracking controllers for induction motors without rotor position sensor: analysis and experimental results," IEEE Trans. Automat. Contr., 14:1448-1458, 1999.

[6] G. Bartolini, A. Ferrara, and E. Usai, "Applications of a sub-optimal discontinuous control algorithm for uncertain second order systems," Int. J. of Robust and Nonlinear Control, 7(4), pp. 299-310, 1997.

[7] G. Bartolini, A. Ferrara, and E. Usai, "Chattering avoidance by second-order sliding mode control," IEEE Trans. Automat. Control, 43(2), pp. 241-246, 1998.

[8] B. Carlsson, A. Ahlen, and M. Sternad, "Optimal differentiation based on stochastic signal models," IEEE Transactions on Signal Processing, vol. 39(2), 1991.

[9] A. Dabroom and H. K. Khalil, "Discrete-time implementation of high-gain observers for numerical differentiation," Int. J. Control, Vol. 72, pp. 1523-1537, 1999.

[10] S. Diop, J. W. Grizzle, P. E. Moraal, and A. Stefanopoulou, "Interpolation and numerical differentiation for observer design," Proc. American Control Conference, pp. 1523-1537, 1994.
[11] S. V. Emelyanov, "Binary systems of automatic control," Moscow Institute of Control Problems, 1984.

[12] S. V. Emelyanov and S. K. Korovin, "Applying the principle of control by deviation to extend the set of possible feedback types," Soviet Physics, Doklady, 26(6), pp. 562-574, 1981.

[13] S. V. Emelyanov, S. K. Korovin, and L. V. Levantovsky, "Higher order sliding modes in the binary control systems," Soviet Physics, Doklady, 31(4), pp. 291-293, 1986.

[14] S. V. Emelyanov, S. K. Korovin, and L. V. Levantovsky, "Second order sliding modes in controlling uncertain systems," Soviet Journal of Computing and System Science, 24(4), pp. 63-68, 1986.

[15] S. V. Emelyanov, S. K. Korovin, and L. V. Levantovsky, "Drift algorithm in control of uncertain processes," Problems of Control and Information Theory, 15(6), pp. 425-438, 1986.

[16] S. V. Emelyanov, S. K. Korovin, and L. V. Levantovsky, "New class of second order sliding algorithms," Mathematical Modeling, 2(3), pp. 89-100, 1990, in Russian.

[17] A. F. Filippov, Differential Equations with Discontinuous Right-Hand Side, Kluwer, Dordrecht, The Netherlands, 1988.

[18] L. Fridman and A. Levant, Sliding Mode Control in Engineering, Marcel Dekker Inc., 2002.

[19] T. Kailath, "Detection of stochastic processes," IEEE Transactions on Information Theory, vol. 44, no. 6, October 1998.

[20] T. Kailath, "Linear filtering theory," IEEE Transactions on Information Theory, March 1974.

[21] H. K. Khalil, "High-gain observers in nonlinear feedback control," in H. Nijmeijer and T. I. Fossen, editors, New Directions in Nonlinear Observer Design, volume 244 of Lecture Notes in Control and Information Sciences, pages 249-268, Springer, London, 1999.

[22] H. K. Khalil, Nonlinear Systems, Prentice Hall, 3rd ed., 2002.

[23] L. K. Vasiljevic and H. K. Khalil, "Differentiation with high-gain observers in the presence of measurement noise," Proc. of the 45th IEEE CDC, San Diego, CA, 2006.

[24] A. Levant, "Robust exact differentiation via sliding mode technique," Automatica, Vol. 34, 1998.
[25] A. Levant, "Higher order sliding modes, differentiation and output-feedback control," International Journal of Control, Vol. 76, 2003.

[26] A. Levant, "Sliding order and sliding accuracy in sliding mode control," International Journal of Control, Vol. 58(6), 1993.

[27] A. Levant, "Controlling output variables with higher-order sliding mode," Proceedings of the European Control Conference, Karlsruhe, Germany, September 1993.

[28] A. Levant, "Arbitrary-order sliding modes with finite time convergence," Proc. of the 6th IEEE Mediterranean Conference on Control and Systems, June 9-11, 1998, Alghero, Sardinia, Italy.

[29] L. V. Levantovsky, "Second order sliding algorithms: their realization," Dynamics of Heterogeneous Systems, Materials of the Seminar, The Institute for System Studies, Moscow, 1985, in Russian.

[30] L. V. Levantovsky, "Sliding modes with continuous control," Proc. of the All-Union Scientific-Practical Seminar on Application Experience of Distributed Systems, Novokuznetsk, U.S.S.R., Vol. 1, Moscow, 1986, in Russian.

[31] L. V. Levantovsky, "Sliding modes of high orders and their applications for controlling uncertain processes," abstract of Ph.D. thesis, deposited at The Institute of Scientific and Technical Information, Moscow, in Russian.

[32] V. Utkin, "Variable structure systems with sliding modes: a survey," IEEE Transactions on Automatic Control, 22, 1977.

[33] V. Utkin, Sliding Modes in Optimization and Control Problems, Nauka, Moscow, 1981.