This is to certify that the dissertation entitled "Stability of Solutions of Stochastic Differential Equations of Diffusion Type," presented by Piotr Szlenk, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Statistics. Major professor. Date: July 27, 1993. MSU is an Affirmative Action/Equal Opportunity Institution.

STABILITY OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS OF DIFFUSION TYPE

By Piotr Szlenk

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Statistics and Probability

1993

ABSTRACT

STABILITY OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS OF DIFFUSION TYPE

By Piotr Szlenk

The stability and asymptotic stability of solutions of one-dimensional stochastic differential equations of diffusion type are studied. Such problems arise frequently in applications to oscillation theory, automatic control, and related fields, as explained in [9]. In 1971 Khasminskii and Nevelson ([8]) considered the following stochastic differential equation:

dX(t) = b(X(t))dt + σ(X(t))dW(t).

They assumed that σ never vanishes and that every solution of the equation above is recurrent. Under these assumptions they proved that the distance, in a suitable metric (the scale metric), between two solutions starting from two different points converges to a random variable which is either zero or is concentrated on two points. This dissertation consists mainly of two parts:

1. In the first part (Section 4) the same stochastic differential equation is studied when σ is allowed to vanish. Different cases in which the convergence of the difference of two solutions takes place are analyzed.

2.
In the second part (Section 5) the case when the drift coefficient b is constant is studied. In this case the limit of the difference of two solutions starting from two different points always exists. First, the case when all solutions are recurrent is discussed; then the case when solutions are transient is studied. The conditions on σ under which the limit of the difference of the solutions is zero are investigated. It is proved that if σ(x) is concave or convex for sufficiently large x, then the limit of the difference of two solutions is zero if and only if

∫^{+∞} (σ′(z))² dz = +∞.

The main steps of the proofs of these results hinge on a comparison theorem of Skorokhod for solutions of stochastic differential equations, the convergence theorem for nonnegative supermartingales, the representation of continuous local martingales as time-changed Brownian Motions, the exact growth of transient solutions of stochastic differential equations, and the fact that a continuous local martingale is convergent on the set where its quadratic variation converges.

To my parents

ACKNOWLEDGMENTS

I thank Professor Rafail Khasminskii for an introduction to the problems studied here and for fruitful discussions during his visit to Michigan State University. To Professors Frank Hoppensteadt and Habib Salehi I extend my sincere gratitude for their guidance and encouragement during my studies at Michigan State University. Without their constant support this thesis would not have been possible. I extend my sincere appreciation to Professors Raoul LePage and Shlomo Levental for having served on my thesis committee. I extend my special thanks to my wife Iwona for her patience and understanding during the preparation of this thesis. Finally, I thank the Department of Statistics and Probability, Michigan State University, and the National Science Foundation (MSC 8901599) for providing me partial support during my stay at Michigan State University.
Contents

1 INTRODUCTION
2 GENERAL THEORY
2.1 STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES
2.2 STOCHASTIC INTEGRALS WITH RESPECT TO LOCAL MARTINGALES
2.3 LOCAL MARTINGALES AS TIME-CHANGED BROWNIAN MOTIONS
2.4 ONE DIMENSIONAL, TIME-HOMOGENEOUS STOCHASTIC DIFFERENTIAL EQUATIONS
3 DIFFUSIONS ON THE FINITE INTERVAL
4 SINGULAR DIFFUSIONS
5 DIFFUSIONS WITH CONSTANT DRIFT
6 SUMMARY

1 INTRODUCTION

In their paper of 1971 ([8]), Khasminskii and Nevelson (for a more detailed discussion see also [9], pages 302-309) studied the following problem. Consider the stochastic differential equation:

(1) dX(t) = b(X(t))dt + σ(X(t))dW(t).

Assume the following:
I) Equation (1) has a unique solution almost surely.
II) σ never vanishes.
III) Every solution of (1) is recurrent.

Define the scale function:

Q(x) = ∫₀ˣ e^{−2∫₀ʸ b(u)/σ²(u) du} dy.

Then condition III is equivalent to the following: Q(+∞) = +∞, Q(−∞) = −∞. If we put Y(t) = Q(X(t)), then Ito's formula ([4], Theorem II-5.1) implies that

(2) dY(t) = σ₁(Y(t))dW(t),

where σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)). If we take two solutions of (2) starting from two different points (say y₁ and y₂, with y₂ > y₁), then

Y₂(t) − Y₁(t) = y₂ − y₁ + ∫₀ᵗ [σ₁(Y₂(s)) − σ₁(Y₁(s))] dW(s).

Since the solution of (2) is unique, Y₂(t) − Y₁(t) ≥ 0 for all t a.s. Since P(∫₀ᵗ [σ₁(Y₂(s)) − σ₁(Y₁(s))]² ds < +∞) = 1 for all t, the process ∫₀ᵗ [σ₁(Y₂(s)) − σ₁(Y₁(s))] dW(s) is a continuous local martingale (see [4], Chapter II, Section 1). Since every positive local martingale is a supermartingale, Y₂(t) − Y₁(t) is a positive supermartingale, and therefore Y₂(t) − Y₁(t) converges a.s. as t → +∞ to a nonnegative random variable, say ξ (see [10], Section 39). The work in [8] deals with the analysis of the random variable ξ.
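The supermartingale behavior just described can be observed numerically. The sketch below is an illustration only, not part of the thesis: it simulates two solutions of (2) driven by the same Brownian path with the arbitrary periodic choice σ₁(y) = 2 + sin y, using a basic Euler–Maruyama scheme. Along the whole path the difference Y₂ − Y₁ remains nonnegative, as the uniqueness/supermartingale argument predicts.

```python
import numpy as np

def euler_pair(y1, y2, sigma1, T=200.0, dt=0.01, seed=0):
    """Simulate two solutions of dY = sigma1(Y) dW driven by one Brownian path."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    Y1, Y2 = float(y1), float(y2)
    diff = np.empty(n)
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # common Brownian increment
        Y1 += sigma1(Y1) * dW
        Y2 += sigma1(Y2) * dW
        diff[i] = Y2 - Y1                   # stays >= 0 along the path
    return diff

# sigma1(y) = 2 + sin(y): a hypothetical periodic diffusion coefficient.
diff = euler_pair(0.0, 1.0, lambda y: 2.0 + np.sin(y))
print(diff[0], diff[-1])
```

Since σ₁ here has Lipschitz constant 1, each Euler step multiplies the difference by at least (1 − |ΔW|), so positivity is preserved for any reasonable step size.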
The analysis in [8] can be presented by Lemma 1.1 and Theorems 1.1 and 1.3 below (the proof of Lemma 1.1, which is a crucial step, is contained in the proof of Theorem 1 in [8] and will not be reproduced here). Theorem 1.2 is a modification of Theorem 1.1, where in part (c) the form of the distribution of the limiting random variable ξ is new.

Lemma 1.1 σ₁(y + ξ) = σ₁(y) for every y a.s.

Theorem 1.1 Under assumptions I, II, III we have:
a) if σ₁ is not periodic, then ξ = 0.
b) if σ₁ is periodic with period θ, and (y₂ − y₁)/θ = k is an integer, then Y₂(t) − Y₁(t) = y₂ − y₁ a.s. for all t. In particular this is the case if σ₁ is constant.
c) if σ₁ is periodic with period θ and (y₂ − y₁)/θ = k is not an integer, then the distribution of ξ is concentrated on the two points θ[k] and θ([k] + 1), where [k] denotes the integer part of k.

Let us compute the distribution of the random variable ξ in case σ₁ is periodic with period θ. Let k = (y₂ − y₁)/θ and {k} = k − [k]. Repeating the argument of Khasminskii and Nevelson we conclude that

θ[k] ≤ Y₂(t) − Y₁(t) ≤ θ([k] + 1).

But

Y₂(t) − Y₁(t) = y₂ − y₁ + ∫₀ᵗ [σ₁(Y₂(s)) − σ₁(Y₁(s))] dW(s),

and from Lebesgue's dominated convergence theorem we conclude that

Eξ = E lim_{t→+∞} (Y₂(t) − Y₁(t)) = y₂ − y₁.

Therefore we have:

θ[k] P(ξ = θ[k]) + θ([k] + 1)(1 − P(ξ = θ[k])) = y₂ − y₁ = θk.

From this we obtain

P(ξ = θ[k]) = 1 − {k},  P(ξ = θ([k] + 1)) = {k}.

Therefore we can restate Theorem 1.1 as follows:

Theorem 1.2 Under assumptions I, II, III we have:
a) if σ₁ is not periodic, then ξ = 0.
b) if σ₁ is constant, then Y₂(t) − Y₁(t) = y₂ − y₁ a.s. for all t.
c) if σ₁ is periodic with period θ, then P(ξ = θ[k]) = 1 − {k} and P(ξ = θ([k]+1)) = {k}, where k = (y₂ − y₁)/θ.

Therefore, if we define a new metric on R by r(x, y) = |Q(y) − Q(x)|,
then we will have:

Theorem 1.3 Under assumptions I, II, III: for every two solutions X₂(t) and X₁(t) of (1), starting respectively from x₂ and x₁ (x₂ > x₁), we have r(X₂(t), X₁(t)) → ξ a.s., where ξ = 0 if σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)) is not periodic, ξ = r(x₂, x₁) if σ₁ is constant, and

P(ξ = θ[k]) = 1 − {k},  P(ξ = θ([k]+1)) = {k},  where k = r(x₂, x₁)/θ,

if σ₁ is periodic with period θ.

Our principal contributions in this dissertation are presented in Sections 4 and 5. In Section 4 we discuss the convergence of the difference of two solutions of (1), where we allow σ to vanish. In Section 5 we discuss the same property of solutions of (1) under the assumption that the drift coefficient is constant. The novelty of some results in these sections is in dealing directly with the difference of solutions, rather than considering the distance between solutions with respect to a scale metric. Sections 2 and 3 contain preparatory material for Sections 4 and 5.

2 GENERAL THEORY

In this Section we present the known theory which is needed in Sections 3, 4, and 5. We refer the reader to [4] for details. We start with the definition of a martingale.

Let (Ω, F, P) be a complete probability space. Let F_t be a filtration, that is, a family of σ-algebras such that F_s ⊆ F_t if s ≤ t. Let F_{t+} = ∩_{s>t} F_s. We say that the filtration F_t satisfies the usual conditions if F₀ contains all P-null sets and F_{t+} = F_t for all t.

Definition 2.1 We say that a stochastic process X(t) is an F_t-martingale (submartingale, supermartingale, respectively) if and only if X(t) is F_t-measurable, E(|X(t)|) < +∞ for every t, and E(X(t) | F_s) = X(s) (≥, ≤, respectively) for s ≤ t. The martingale is a square-integrable martingale if EX²(t) < +∞ for every t. We denote the collection of square-integrable F_t-martingales by M₂.
Similarly, we denote the collection of continuous, square-integrable martingales by M₂ᶜ.

For supermartingales we have the following theorem, which is used extensively in this thesis:

Theorem 2.1 ([10], Section 39) Let X(t) be a supermartingale. Let X⁻(t) denote the negative part of X(t), that is, X⁻(t) = −X(t)·1_{X(t)≤0}. Assume that

sup_t EX⁻(t) < +∞.

Then lim_{t→+∞} X(t) exists a.s., and E|lim_{t→+∞} X(t)| < +∞.

In the next two subsections we introduce stochastic integrals with respect to martingales and local martingales. These integrals are extensions of the classical Ito integrals with respect to Brownian Motion. The need for developing stochastic integrals with respect to Brownian Motion, or more generally with respect to local martingales, arises from the fact that the sample paths of such processes are of unbounded variation with probability one. Therefore, the usual Lebesgue-Stieltjes integral cannot be defined for the sample paths of such processes.

2.1 STOCHASTIC INTEGRALS WITH RESPECT TO MARTINGALES

In this subsection we define the stochastic integral with respect to a square-integrable martingale M(t). The next proposition will help us determine the class of integrands, that is, the class of processes for which our integral will be defined.

Proposition 2.1 ([4], Proposition II-2.1) Let M(t) ∈ M₂. Then there is a unique, increasing, integrable process, denoted by ⟨M⟩(t), such that M²(t) − ⟨M⟩(t) is a martingale.

Definition 2.2 The process ⟨M⟩(t) from Proposition 2.1 is called the quadratic variation process of M(t).

Example 2.1 Let B(t) be a Brownian motion. Then ⟨B⟩(t) = t.

The next definition specifies the class of processes for which the stochastic integral with respect to the square-integrable martingale M(t) will be defined.

Definition 2.3 Let M ∈ M₂. Let ℒ₂(M) be the collection of predictable processes Φ(t) such that

(‖Φ‖_{M,T})² = E{∫₀ᵀ Φ²(s, ω) d⟨M⟩(s)} < +∞ for all T > 0.

For Φ ∈ ℒ₂(M), we put

‖Φ‖_M = Σ_{n=1}^{+∞} 2⁻ⁿ (‖Φ‖_{M,n} ∧ 1),
where x ∧ y = min(x, y).

Proposition 2.2 ([4], Section II-1) ℒ₂(M) forms a complete metric space with the metric defined by ‖Φ − Ψ‖_M.

Let ℒ₀ be the collection of processes Φ(t) for which there exist an increasing sequence {t_i}_{i=0}^{+∞} (0 = t₀ < t₁ < t₂ < ... < t_n < ...) and a sequence of random variables {f_i(ω)}_{i=0}^{+∞}, such that f_i is F_{t_i}-measurable with sup_i ‖f_i‖_∞ < +∞, and

Φ(t, ω) = f₀(ω)·1_{t=0}(t) + Σ_i f_i(ω)·1_{(t_i, t_{i+1}]}(t).

The following lemma shows that every process from ℒ₂(M) can be approximated by jump processes from ℒ₀.

Lemma 2.1 ([4], Lemma II-2.1) The subspace ℒ₀ is dense in ℒ₂(M).

Definition 2.4 Let X ∈ M₂. Let |X|_T = (EX²(T))^{1/2}. Let

|X| = Σ_{n=1}^{+∞} 2⁻ⁿ (|X|_n ∧ 1).

Lemma 2.2 ([4], Lemma II-1.2) M₂ forms a complete metric space with the metric defined by |X − Y|. Moreover, M₂ᶜ is a closed subspace of M₂.

Now we will define the stochastic integral with respect to a square-integrable martingale M(t). We proceed as follows. For Φ ∈ ℒ₀ with the expansion Φ(t, ω) = f₀(ω)·1_{t=0}(t) + Σ_{i=0}^{+∞} f_i(ω)·1_{(t_i, t_{i+1}]}(t), we define

I_M(Φ)(t) = Σ_{i=0}^{+∞} f_i (M(t_{i+1} ∧ t) − M(t_i ∧ t)).

It can be shown ([4], Section II-2) that I_M(Φ) ∈ M₂ᶜ and

(ISOMETRY) |I_M(Φ)| = ‖Φ‖_M.

Therefore we obtain an isometry from ℒ₀ to M₂ᶜ. Now, using Lemma 2.1, we can extend this definition to the whole space ℒ₂(M). From now on we will write I_M(Φ)(t) ≡ ∫₀ᵗ Φ(s) dM(s). We have thus defined the stochastic integral of a stochastic process belonging to the class ℒ₂ with respect to the martingale M. In the next subsection we extend this definition to a more general class of integrators, called local martingales.

2.2 STOCHASTIC INTEGRALS WITH RESPECT TO LOCAL MARTINGALES

As before, we follow [4] in the presentation of this section.

Definition 2.5 ([4], Definition II-1.7) A stochastic process M(t) is called a local martingale if there exists an increasing sequence of F_t-stopping times τ_n such that τ_n → +∞ a.s. and M(t ∧ τ_n) is an F_t-martingale for every n.
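For step integrands, the elementary integral I_M(Φ) of Subsection 2.1 and its isometry can be checked by simulation. The sketch below is an illustration, not part of the thesis: it takes M to be a Brownian motion, chooses f_i = sign(M(t_i)) as a bounded F_{t_i}-measurable integrand with |f_i| = 1, and verifies by Monte Carlo that E[I_M(Φ)(T)²] ≈ E∫₀ᵀ Φ²(s) ds = T.

```python
import numpy as np

def elementary_integral(T=1.0, n_steps=16, n_paths=20000, seed=1):
    """Monte Carlo check of the L2 isometry for a step integrand against Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    M = np.hstack([np.zeros((n_paths, 1)), W])      # M(t_i), i = 0, ..., n_steps
    f = np.sign(M[:, :-1])                          # f_i is F_{t_i}-measurable
    f[f == 0.0] = 1.0                               # keep |f_i| = 1, so int Phi^2 ds = T exactly
    # I_M(Phi)(T) = sum_i f_i (M(t_{i+1}) - M(t_i))
    I = np.sum(f * (M[:, 1:] - M[:, :-1]), axis=1)
    return I

I = elementary_integral()
print(np.mean(I), np.mean(I**2))   # mean near 0, second moment near T = 1
```

Because each f_i depends only on the path up to t_i, the cross terms E[f_i ΔM_i f_j ΔM_j] vanish for i ≠ j, which is exactly why the sample second moment of I matches T.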
The collection of continuous, square-integrable local martingales will be denoted by M₂^{c,loc}, and M₂^{loc} will denote the collection of square-integrable local martingales. We can define the quadratic variation process of a local martingale M in the same way as in the case of a martingale. Such a process is denoted by ⟨M⟩(t), and M²(t) − ⟨M⟩(t) is a local martingale.

Definition 2.6 Let M ∈ M₂^{loc}. Let ℒ₂^{loc}(M) be the collection of predictable processes Φ(t) for which there exists a sequence of F_t-stopping times σ_n → +∞ such that

E{∫₀^{T∧σ_n} Φ²(t, ω) d⟨M⟩(t)} < +∞ for every T and n = 1, 2, 3, ...

Let M ∈ M₂^{loc} and Φ ∈ ℒ₂^{loc}(M). Then there exists a sequence of F_t-stopping times σ_n → +∞ such that M(t ∧ σ_n) is a square-integrable martingale and Φ(t) with σ_n satisfies the condition from Definition 2.6. Let Φ_n(t) = Φ(t)·1_{σ_n ≥ t}(t) and M_n(t) = M(σ_n ∧ t). Since M_n is a square-integrable martingale, we can define the stochastic integral I_{M_n}(Φ_n)(t). It is known that I_{M_{n+1}}(Φ_{n+1})(t ∧ σ_n) = I_{M_n}(Φ_n)(t), so the definition I_M(Φ)(t ∧ σ_n) = I_{M_n}(Φ_n)(t) is consistent. The process I_M(Φ) obtained in this way is called the stochastic integral of Φ with respect to M and is denoted by ∫₀ᵗ Φ(s) dM(s). Such a stochastic integral is a continuous local martingale.

In the next subsection we discuss a very interesting and important property of local martingales, which will be needed in Section 5.

2.3 LOCAL MARTINGALES AS TIME-CHANGED BROWNIAN MOTIONS

Theorem 2.2 ([4], Theorem II-7.2) Let M ∈ M₂^{c,loc}. Assume that lim_{t→+∞} ⟨M⟩(t) = +∞. Let τ_t = inf{u : ⟨M⟩(u) > t}. Then the stochastic process B(t) = M(τ_t) is an F_{τ_t}-Brownian Motion.

The conclusion of Theorem 2.2 is also true without the assumption that lim_{t→+∞} ⟨M⟩(t) = +∞, but in this case one has to extend the underlying probability space so that the existence of the appropriate Brownian Motion can be guaranteed.

Definition 2.7 Let (Ω, F, F_t, P) be a probability space with filtration F_t. Let (Ω′, F′, F′_t, P′) be another probability space. Let Ω̄ = Ω × Ω′, F̄ = F × F′, P̄ = P × P′. Let π(ω̄) = ω for ω̄ = (ω, ω′).
If F̄_t is a filtration on (Ω̄, F̄, P̄) such that F_t × F′ ⊇ F̄_t ⊇ F_t × {∅, Ω′}, then we call (Ω̄, F̄, F̄_t, P̄) a natural extension of the probability space (Ω, F, F_t, P).

Theorem 2.3 ([4], Theorem II-7.2′) Let M ∈ M₂^{c,loc}. Let

τ_t = inf{u : ⟨M⟩(u) > t} if t < ⟨M⟩(+∞),  τ_t = +∞ if t ≥ ⟨M⟩(+∞).

Let F̃_t = F_{τ_t}. Then there exists a natural extension (Ω̄, F̄, F̄_t, P̄) of the probability space (Ω, F, F̃_t, P) and an F̄_t-Brownian Motion B(t) such that B(t) = M(τ_t) for t ∈ [0, ⟨M⟩(+∞)). Moreover M(t) = B(⟨M⟩(t)).

We also have the following useful lemma:

Lemma 2.3 ([12], Lemma 34.8) Let M ∈ M₂^{c,loc}. Then on the set {⟨M⟩(+∞) < +∞}, lim_{t→+∞} M(t) exists and is finite.

Now we are ready to review some basic facts about stochastic differential equations. These will be used in Sections 3 and 4.

2.4 ONE DIMENSIONAL, TIME-HOMOGENEOUS STOCHASTIC DIFFERENTIAL EQUATIONS

Definition 2.8 Let b and σ be real measurable functions on the real line. Let (Ω, F, F_t, P) be a probability space with filtration F_t. Let W(t) be an F_t-Brownian Motion. We say that a stochastic process X(t) is a solution of the stochastic differential equation

(3) dX(t) = b(X(t))dt + σ(X(t))dW(t)

with the initial condition X(0) = x₀ if and only if

X(t) = x₀ + ∫₀ᵗ b(X(s)) ds + ∫₀ᵗ σ(X(s)) dW(s) a.s. for every t.

The second integral in the equation above is the stochastic integral with respect to the Brownian Motion W, which was defined in Section 2.1. In the work that follows we assume that (3) has a unique solution starting at x (for all x), and that the functions b and σ are defined on the whole real line. The most general conditions which guarantee existence and uniqueness of solutions for such equations are as follows:

Theorem 2.4 ([4], Theorem IV-3.2) Assume the following:
1. There exists a strictly increasing function ρ(u) on [0, +∞) such that ρ(0) = 0, ∫₀₊ ρ⁻²(u) du = +∞, and |σ(x) − σ(y)| ≤ ρ(|x − y|) for all x, y ∈ R.
2.
There exists a strictly increasing, concave function κ(u) on [0, +∞) with κ(0) = 0 and ∫₀₊ du/κ(u) = +∞, such that |b(x) − b(y)| ≤ κ(|x − y|) for all x, y ∈ R.

Then there exists a unique solution of (3) with the initial condition X(0) = x.

We see that the conditions of Theorem 2.4 are satisfied when the functions b and σ are Lipschitz continuous. In this case ρ(x) = Lx and κ(x) = Kx, where L and K are the Lipschitz constants of σ and b, respectively.

In Section 4 we will also need the following comparison theorem due to Skorokhod. It was first proved by Skorokhod (see [13], Section V-3) and extended by other authors.

Theorem 2.5 ([4], Theorem VI-1.1) Let σ be a function satisfying condition 1 of Theorem 2.4. Let b₁ and b₂ be two functions satisfying condition 2 of Theorem 2.4. Consider the two stochastic differential equations:

dX(t) = b₁(X(t))dt + σ(X(t))dW(t),
dY(t) = b₂(Y(t))dt + σ(Y(t))dW(t).

Assume that b₂ ≥ b₁ and Y(0) = y₀ ≥ x₀ = X(0). Then for all t, Y(t) ≥ X(t) almost surely.

We will now discuss the asymptotic behavior of the solutions of equation (3). Assume that the state space of the solutions of (3) is the interval (l, r), where l and r are finite or infinite. Assume also that σ(x) ≠ 0 for all x ∈ (l, r). Let X(t) be the solution of (3) with X(0) = x. Let η = inf{t : X(t) = l or X(t) = r} be the explosion time. Define the scale function:

Q(x) = ∫_{x₀}^{x} e^{−2∫_{x₀}^{y} b(u)/σ²(u) du} dy, where x₀ ∈ (l, r).

The following theorem describes the behavior of solutions of (3) as t approaches the explosion time η:

Theorem 2.6 ([4], Theorem VI-3.1) We have the following:
1) If Q(l+) = −∞ and Q(r−) = +∞, then P_x(η = +∞) = P_x(liminf_{t→+∞} X(t) = l) = P_x(limsup_{t→+∞} X(t) = r) = 1.
2) If Q(l+) > −∞ and Q(r−) = +∞, then P_x(lim_{t→η} X(t) = l) = 1.
3) If Q(l+) = −∞ and Q(r−) < +∞, then P_x(lim_{t→η} X(t) = r) = 1.
4) If Q(l+) > −∞ and Q(r−) < +∞, then

P_x(lim_{t→η} X(t) = l) = 1 − P_x(lim_{t→η} X(t) = r) = (Q(r−) − Q(x)) / (Q(r−) − Q(l+)).

Definition 2.9 ([9], Sections III-7 and III-8) We say that the solution of (3) with initial condition X(0) = x₀ is recurrent in the interval (l, r) if for every y ∈ (l, r), P(X(t) = y for infinitely many t) = 1.

From Theorem 2.6 we see that when P(η = +∞) = 1, every solution of (3) is recurrent if and only if |Q(l+)| = |Q(r−)| = +∞.

Let us now define:

κ(x) = ∫_{x₀}^{x} Q′(y) ∫_{x₀}^{y} (2 / (σ²(z) Q′(z))) dz dy, where x₀ ∈ (l, r).

We have:

Lemma 2.4 ([4], Corollary VI-3.1) If κ(r−) < +∞, then Q(r−) < +∞. If κ(l+) < +∞, then Q(l+) > −∞.

The theorem below gives the conditions under which explosion occurs (Feller's test for explosion).

Theorem 2.7 ([4], Theorem VI-3.2)
a) If κ(r−) = κ(l+) = +∞, then P_x(η = +∞) = 1 for all x ∈ (l, r).
b) If κ(r−) < +∞ or κ(l+) < +∞, then P_x(η < +∞) > 0 for all x ∈ (l, r).
c) P_x(η < +∞) = 1 for all x ∈ (l, r) if and only if one of the following holds:
(i) κ(r−) < +∞ and κ(l+) < +∞;
(ii) κ(r−) < +∞ and Q(l+) = −∞;
(iii) κ(l+) < +∞ and Q(r−) = +∞.

3 DIFFUSIONS ON THE FINITE INTERVAL

As in the introduction, let us consider the stochastic differential equation:

(4) dX(t) = b(X(t))dt + σ(X(t))dW(t), X(0) = x.

The following results are preliminary to later work and are mostly known. However, we include derivations for completeness of the presentation, and some proofs may be of independent interest. Throughout this Section we assume that the coefficients b and σ are Lipschitz continuous. Let us consider the case where for some numbers c and d (c < d) we have:
(i) σ(c) = σ(d) = 0;
(ii) σ(p) ≠ 0 for all p ∈ (c, d);
(iii) b(c) ≥ 0 and b(d) ≤ 0.

We consider the solutions of (4) starting from the interior of (c, d), that is, x ∈ (c, d). Let Z(t) and Y(t) satisfy equation (4) with initial conditions Z(0) = c and Y(0) = d, respectively.
Let b̲ and b̄ be Lipschitz continuous functions for which b̲ ≤ b ≤ b̄ and b̲(c) = b̄(d) = 0. We see that Z̲(t) ≡ c satisfies the stochastic differential equation

dZ̲(t) = b̲(Z̲(t))dt + σ(Z̲(t))dW(t), Z̲(0) = c.

Similarly, Ȳ(t) ≡ d satisfies the stochastic differential equation

dȲ(t) = b̄(Ȳ(t))dt + σ(Ȳ(t))dW(t), Ȳ(0) = d.

From the Comparison Theorem (Theorem 2.5) we conclude that if X(t) is the solution of (4) with the initial condition X(0) = x ∈ (c, d), then

c = Z̲(t) ≤ Z(t) ≤ X(t) ≤ Y(t) ≤ Ȳ(t) = d.

Therefore, under assumptions (i)-(iii), a solution starting from the interior of (c, d) never leaves [c, d].

Let X₁(t) and X₂(t) be two solutions of (4) starting from x₁ and x₂ respectively, with x₁ < x₂. Let Q(x) be the scale function, that is,

Q(x) = ∫_{x₀}^{x} e^{−2∫_{x₀}^{y} b(u)/σ²(u) du} dy for some x₀ ∈ (c, d).

We will use the following lemmas:

Lemma 3.1 a) If b(c) > 0, then Q(c) = −∞. b) If b(d) < 0, then Q(d) = +∞.

Proof. a) Since σ is Lipschitz continuous and σ(c) = 0, we have σ²(x) ≤ L²(x − c)² for all x ∈ (c, d) and some constant L. Since b(c) > 0, there exist δ > 0 and ε > 0 such that b(x) > ε for all x ∈ (c, c + δ). Without loss of generality we can assume that x₀ < c + δ. We have:

Q(c) = ∫_{x₀}^{c} e^{−2∫_{x₀}^{y} b(u)/σ²(u) du} dy = −∫_{c}^{x₀} e^{2∫_{y}^{x₀} b(u)/σ²(u) du} dy ≤ −∫_{c}^{x₀} e^{(2ε/L²)(1/(y−c) − 1/(x₀−c))} dy = −∞,

since the integrand tends to +∞ faster than any power of 1/(y − c) as y → c+. The proof of b) is entirely similar to the proof of a).

Lemma 3.2 (see [9], Lemma V-2.1) Let X(t) be the solution of (4) with initial condition X(0) = x₀, where x₀ ∈ (c, d). If the functions b and σ are Lipschitz continuous, then P(X(t) = c for some t) = 0 and P(X(t) = d for some t) = 0.

From Lemma 3.2 we conclude that the explosion time η = inf{t : X(t) = c or d} = +∞ a.s. From the Comparison Theorem (Theorem 2.5), X₁(t) ≤ X₂(t) for all t a.s. We have the following three cases:

CASE 1. Q(c) > −∞, Q(d) < +∞.

In this case, by Lemma 3.1, we have b(c) = b(d) = 0.
Since X₁(t) ≤ X₂(t), we have {ω : X₁(t) → d} ⊆ {ω : X₂(t) → d} and {ω : X₂(t) → c} ⊆ {ω : X₁(t) → c}. From Theorem 2.6 and Lemma 3.2 we conclude that

P(lim_{t→+∞} (X₂(t) − X₁(t)) = 0) = P(lim_{t→+∞} X₁(t) = d) + P(lim_{t→+∞} X₂(t) = c) = 1 − (Q(x₂) − Q(x₁)) / (Q(d) − Q(c))

and

P(lim_{t→+∞} (X₂(t) − X₁(t)) = d − c) = P(lim_{t→+∞} X₁(t) = c, lim_{t→+∞} X₂(t) = d) = (Q(x₂) − Q(x₁)) / (Q(d) − Q(c)).

CASE 2. Q(c) = −∞, Q(d) < +∞, or Q(c) > −∞, Q(d) = +∞.

If Q(c) = −∞ and Q(d) < +∞, then by Theorem 2.6 and Lemma 3.2, X₁(t) → d and X₂(t) → d as t → +∞. If Q(c) > −∞ and Q(d) = +∞, then by the same argument X₂(t) → c and X₁(t) → c as t → +∞. Therefore in these cases X₂(t) − X₁(t) → 0 a.s.

CASE 3. Q(c) = −∞, Q(d) = +∞.

Let Y₂(t) = Q(X₂(t)) and Y₁(t) = Q(X₁(t)). Then, as explained in Section 1, Y_i(t) satisfies the stochastic differential equation

dY_i(t) = σ₁(Y_i(t))dW(t)

with initial condition Y_i(0) = Q(x_i), i = 1, 2. Therefore we conclude from Section 1 that Y₂(t) − Y₁(t) → ξ, where ξ = 0 when σ₁(y) is not periodic, and ξ is concentrated on two points if σ₁(y) is periodic. We have the following:

Lemma 3.3 If b(c) > 0 or b(d) < 0, then the function σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)) cannot be periodic.

Proof. Without loss of generality we can assume that c = 0. Assume that b(0) > 0. Since b is continuous, there exist δ > 0 and ε > 0 such that b(x) > ε for all x ∈ [0, δ]. Let z₀ ∈ (0, δ] be such that 2b(z)/σ(z) > L for all z ≤ z₀, where L is the Lipschitz constant of σ. We will show that

f(z) = σ(z) e^{2∫_{z}^{x₀} b(u)/σ²(u) du}

is decreasing on the interval [0, z₀]. We have:

(f(z + h) − f(z))/h
= (σ(z + h) e^{2∫_{z+h}^{x₀} b(u)/σ²(u) du} − σ(z) e^{2∫_{z}^{x₀} b(u)/σ²(u) du})/h
= e^{2∫_{z+h}^{x₀} b(u)/σ²(u) du} (σ(z + h) − σ(z))/h + σ(z) (e^{2∫_{z+h}^{x₀} b(u)/σ²(u) du} − e^{2∫_{z}^{x₀} b(u)/σ²(u) du})/h
≤ L e^{2∫_{z+h}^{x₀} b(u)/σ²(u) du} + σ(z) (e^{2∫_{z+h}^{x₀} b(u)/σ²(u) du} − e^{2∫_{z}^{x₀} b(u)/σ²(u) du})/h.

As h → 0, the right-hand side of the inequality above tends to

(L − 2b(z)/σ(z)) e^{2∫_{z}^{x₀} b(u)/σ²(u) du} < 0.

This means that for every z ≤ z₀ there exists h̄ > 0 such that f(z) is decreasing on [z, z + h̄]. Therefore we can conclude that f(z) is decreasing on [0, z₀].
Since f(z) is positive and decreasing for z small enough, lim_{z→0+} f(z) exists (finite or infinite). It is easy to see that lim_{y→−∞} Q⁻¹(y) = 0, and therefore lim_{y→−∞} σ₁(y) exists. Therefore σ₁(y) cannot be periodic. The same proof holds for the case b(d) < 0. This completes the proof.

Therefore, in CASE 3, if b(c) > 0 or b(d) < 0, then σ₁ cannot be periodic, so Y₂(t) − Y₁(t) → 0. Since Q⁻¹ is uniformly continuous, X₂(t) − X₁(t) → 0 as well. Therefore, if b(c) > 0 or b(d) < 0, then X₂(t) − X₁(t) → 0.

Unfortunately, σ₁ can be periodic (or constant) when b(c) = σ(c) = 0 and b(d) = σ(d) = 0 in case d is finite. In such cases the convergence takes place in the scale metric and not in the Euclidean metric, as the following examples show.

Example 3.1 Consider the following stochastic differential equation on (0, +∞):

dX(t) = ½X(t)dt + X(t)dW(t).

In this case Q(x) = ln(x) and σ₁(y) = 1. The solution of this equation is X(t) = xe^{W(t)}, which is called geometric Brownian Motion.

Example 3.2 Consider the following stochastic differential equation on (−π/2, +π/2):

dX(t) = −½ sin(2X(t)) cos²(X(t))dt + cos²(X(t))dW(t).

In this case Q(x) = tan(x) and σ₁(y) = 1. The solution of this equation is X(t) = arctan(tan(x) + W(t)).

We can easily see that in these two examples X₂(t) − X₁(t) does not have a limit as t → +∞, but the limit exists in the sense of convergence in the "scale metric" r(x, y) = |Q(y) − Q(x)|, since ln(x₂e^{W(t)}) − ln(x₁e^{W(t)}) = ln(x₂/x₁) and (tan(x₂) + W(t)) − (tan(x₁) + W(t)) = tan(x₂) − tan(x₁), respectively.

4 SINGULAR DIFFUSIONS

In this Section we consider again the stochastic differential equation:

(5) dX(t) = b(X(t))dt + σ(X(t))dW(t), X(0) = x.

We investigate the convergence of solutions starting from two different points, but we now allow σ to vanish. Let us start with an example showing that in some cases it seems impossible to find any sense of convergence for X₂(t) − X₁(t). Therefore some classification of points on the real line seems to be necessary.
Example 4.1 Let the coefficients b and σ of equation (5) be such that:
(i) for some numbers c, p, d with c < p < d, we have σ(c) = σ(p) = σ(d) = 0;
(ii) σ(v) ≠ 0 for all v ∈ (c, p);
(iii) σ(v) ≠ 0 for all v ∈ (p, d).
Let us also assume that b(c) = 0, b(p) > 0, b(d) < 0, and that for some x₀ ∈ (c, p)

∫_{c}^{x₀} e^{2∫_{y}^{x₀} b(u)/σ²(u) du} dy < +∞,

that is, the scale function Q on (c, p) satisfies Q(c) > −∞. Consider two solutions X₁(t) and X₂(t) of our equation with initial conditions x₁ and x₂ (x₁ < x₂), both in (c, p). Then

P(liminf_{t→+∞} X₂(t) = limsup_{t→+∞} X₂(t) = c) = P(lim_{t→+∞} X₂(t) = c) = (Q(p) − Q(x₂)) / (Q(p) − Q(c)),

P(for some t₁, t₂ : X₁(t₁) = p and X₂(t₂) = p) = P(X₁(t₁) = p for some t₁) = (Q(x₁) − Q(c)) / (Q(p) − Q(c)),

P(lim_{t→+∞} X₁(t) = c, X₂(t₂) = p for some t₂)
= P(lim_{t→+∞} X₁(t) = c, liminf_{t→+∞} X₂(t) = p, limsup_{t→+∞} X₂(t) = d)
= 1 − (Q(p) − Q(x₂))/(Q(p) − Q(c)) − (Q(x₁) − Q(c))/(Q(p) − Q(c)) = (Q(x₂) − Q(x₁))/(Q(p) − Q(c)).

These calculations follow from the work in Section 3, the Comparison Theorem (Theorem 2.5), and Theorem 2.7. Therefore, there is a positive probability (namely (Q(x₂) − Q(x₁))/(Q(p) − Q(c))) that X₁(t) → c, liminf_{t→+∞} X₂(t) = p and limsup_{t→+∞} X₂(t) = d, so with positive probability the limit of X₂(t) − X₁(t) does not exist.

Let A = {x : σ(x) = 0}. From now on, X^x(t) will denote the solution of (5) with the initial condition X^x(0) = x. We make the following assumptions:

I. For every real number x one of the following holds:
(i) There exists z > x such that 0 = σ(z) = b(z).
(ii) There exist y > x and z > x such that σ(y) = σ(z) = 0 and b(y)b(z) ≤ 0.
(iii) sup{x : x ∈ A} < +∞, that is, A is bounded from above.

II. For every real number x one of the following holds:
(i) There exists z < x such that 0 = σ(z) = b(z).
(ii) There exist y < x and z < x such that σ(y) = σ(z) = 0 and b(y)b(z) ≤ 0.
(iii) inf{x : x ∈ A} > −∞, that is, A is bounded from below.
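The hitting probabilities P_y(τ_x < +∞) introduced next can be explored by simulation. The sketch below is an illustration only; the coefficients σ(x) = 1 − x² and b ≡ 0 are hypothetical choices, not taken from the thesis. With these choices A = {−1, 1}, points inside (−1, 1) reach one another with positive probability, and the point 1, where σ = b = 0, is a trap from which nothing is ever reached.

```python
import numpy as np

def hit_prob(y, x, sigma, b, T=5.0, dt=0.01, n_paths=500, seed=2):
    """Monte Carlo estimate of P_y(tau_x < T) for dX = b(X)dt + sigma(X)dW (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    X = np.full(n_paths, float(y))
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + b(X) * dt + sigma(X) * dW
        # a path counts as hitting x once it crosses the level x
        hit |= (X >= x) if x >= y else (X <= x)
    return hit.mean()

sigma = lambda x: 1.0 - x**2   # vanishes at -1 and 1, so A = {-1, 1}
drift = lambda x: 0.0 * x      # b = 0: both points of A are traps

p_inside = hit_prob(0.0, 0.5, sigma, drift)   # start inside (-1, 1): positive probability
p_trap   = hit_prob(1.0, 0.5, sigma, drift)   # start at the trap x = 1: stays there forever
print(p_inside, p_trap)
```

This only estimates hitting by a finite horizon T, which bounds P_y(τ_x < +∞) from below; it is enough to distinguish "communicates" from "cannot communicate" in this toy case.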
Note, that r, is the whole collection of random variables, since it depends on X (0) P,(r, < +00) is the probability, that a process starting at y reaches x in finite time. Similarly, for the subset B of the real line, we introduce 713 = inf{t : X(t) E B} 9 Py(7'B < +00) = P(X”(t) 6 B for some t). Definition 4.1 We say that x and y communicate (x ~ y ) if and only if P,(r,, < +00) > 0 and P,(r, < +00) > 0. Relation ”~” defines the equivalence classes in R. Definition 4.2 We say that x is strictly inessential if and only if for some y < x and 2 > x P(lim sup X’(t) S y) > 0 and P(lim+inf X ”(t) Z 2) > 0. t—o+00 "’ °° CASE 1 from Section 3 illustrates the set of the strictly inessential points. Proposition 4.1 Let x be strictly inessential,and let I, be its equivalence class under ”~ ”. Then 26 a) I, consists of at least two points. b) 0(2) at 0 for all z in 1,. c) I, is an open set. d) I, is a connected set. e) If y ~ x, then y is strictly inessential. f) IfI, is bounded, i.e., I, = (c, d), then 0(c) = 0(d) = 0 and b(c) S 0 S b(d). Proof. a) We have 0(x) 76 0 because otherwise x would not be strictly inessential. So there exists a neighborhood U of x such that 0(y) at O for all y E U. So x ~ y for all y E U. h) Suppose that there exists 2 in I, (2 > x), such that 0(2) = 0. Then if b(z) Z 0, then P,(r, < +00) = 0 so x and 2 do not communicate. If b(z) < 0, then P,(r, < +00) = 0 so x and 2 do not communicate. c) follows from a) and b). 27 d)InyI, andzEI,,thenforallusuchthatySuSzwehave P,('r, < +00) = P,(r,, < +00)P,,(r, < +00) > 0 and P,(‘ry < +00) = P,(‘r,, < -l-00)P,,('ry < +00) > 0, so u E 1,. Therefore I, is connected and it must be an open interval. 6) We know that I, is an interval, say (c,d) and 0(2) 76 0 for all 2 in 1,. 
Therefore we can define the scale function:

Q(z) = ∫_{x₀}^{z} e^{−2∫_{x₀}^{u} b(v)/σ²(v) dv} du.

Since x is strictly inessential, Q(z) must be bounded, and then for all v in I_x (see Theorem 2.6)

P(limsup_{t→+∞} X^v(t) ≤ c) = (Q(d) − Q(v)) / (Q(d) − Q(c)) > 0,
P(liminf_{t→+∞} X^v(t) ≥ d) = 1 − (Q(d) − Q(v)) / (Q(d) − Q(c)) = (Q(v) − Q(c)) / (Q(d) − Q(c)) > 0.

So v is strictly inessential for all v in I_x.
f) Suppose that σ(c) ≠ 0. Then, by the argument in a), there exists y < c such that y ∼ x, so y ∈ I_x, which is a contradiction. So σ(c) = 0. In the same way we can show that σ(d) = 0. Now suppose that b(c) > 0. Then we conclude from Section 3 that P(limsup_{t→+∞} X^x(t) ≥ d) = 1, so x cannot be strictly inessential. Therefore we must have b(c) ≤ 0. In the same way we show that b(d) ≥ 0. □

Definition 4.3 We say that x is right inessential if
i) x is not strictly inessential, and
ii) there exists z > x such that P_x(τ_z < +∞) > 0 and P_z(τ_x < +∞) = 0.

Definition 4.4 We say that x is left inessential if
i) x is not strictly inessential, and
ii) there exists z < x such that P_x(τ_z < +∞) > 0 and P_z(τ_x < +∞) = 0.

Proposition 4.2 Let x be right inessential and let I_x be its equivalence class under "∼". Then
a) I_x = {x} if and only if σ(x) = 0 and b(x) > 0.
b) If I_x consists of at least two points, then I_x is an open and connected set, and σ(z) ≠ 0 for all z in I_x.
c) If I_x is bounded, i.e., I_x = (c, d), then σ(c) = σ(d) = 0, b(c) ≥ 0 and b(d) > 0.
d) y is right inessential for all y in I_x.

Proof.
a) Suppose that I_x = {x}. Then σ(x) = 0, because otherwise x would communicate with every y in some neighborhood U of x. If b(x) ≤ 0, then P_x(τ_y < +∞) = 0 for every y > x, and then x is not right inessential. So b(x) > 0. Conversely, if σ(x) = 0 and b(x) > 0, then P_y(τ_x < +∞) = 0 for every y > x and P_x(τ_z < +∞) = 0 for every z < x, so x cannot communicate with any other point. So I_x = {x}.
b) If there exists z ∈ I_x such that σ(z) = 0, then x and z cannot communicate, by the same argument as in Proposition 4.1(b). Therefore σ(z) ≠ 0 for all z in I_x.
From that we conclude that for every z ∈ I_x there exists a neighborhood U of z such that σ(u) ≠ 0 for all u ∈ U. Then u ∼ z for all u ∈ U, so since z ∈ I_x and U ⊆ I_x, the set I_x is open. The fact that I_x is connected follows by the same argument as in Proposition 4.1(d).
c) σ(c) = σ(d) = 0 for the same reason as in Proposition 4.1(f). If b(d) ≤ 0, then P_x(τ_z < +∞) = 0 for every z > d, while x ∼ z for every z such that x ≤ z ≤ d; so x could not be right inessential. Therefore b(d) > 0. Suppose now that b(c) < 0. Since σ(z) ≠ 0 for all z ∈ (c,d), we can define Q(z) = ∫_x^z exp(−2∫_x^u b(s)/σ²(s) ds) du. Since b(d) > 0 and b(c) < 0, Q is bounded on (c,d), P(limsup_{t→+∞} X^x(t) ≤ c) > 0 and P(liminf_{t→+∞} X^x(t) ≥ d) > 0, so x is strictly inessential, which is a contradiction.
d) Follows from (c). □

Similar facts are true when x is left inessential.

Corollary 4.1
a) If x is right inessential, then I_x must be bounded from above.
b) x is right inessential if and only if there exists z > x such that P_x(τ_z < +∞) = 1 and P_z(τ_x < +∞) = 0.
Similar facts are true when x is left inessential.

Definition 4.5 We say that x is essential if the following two conditions hold:
(i) For all y such that P_x(τ_y < +∞) > 0 we have P_y(τ_x < +∞) > 0.
(ii) For all y < x and for all z > x, P(limsup_{t→+∞} X^x(t) ≤ y) = 0 or P(liminf_{t→+∞} X^x(t) ≥ z) = 0.

CASES 2 and 3 from Section 3 are examples of essential states.

Proposition 4.3 Let x be essential, and let I_x be its equivalence class under "∼". Then
a) I_x = {x} if and only if σ(x) = b(x) = 0.
b) If I_x consists of at least two points, then I_x is an open and connected set, and σ(z) ≠ 0 for all z in I_x.
c) If I_x is bounded, i.e., I_x = (c,d), then σ(c) = σ(d) = 0, b(c) ≥ 0, and b(d) ≤ 0.
d) If y ∼ x, then y is essential.

Proof.
a) Assume that I_x = {x}. If σ(x) ≠ 0, then x communicates with the points of some neighborhood of x, so I_x ≠ {x}. If b(x) > 0, then x is right inessential; if b(x) < 0, then x is left inessential. Therefore b(x) = 0. Conversely, if σ(x) = b(x) = 0, then obviously I_x = {x}.
Such a point is called a trap (see [5], Section 3.4).
b) The proof of this statement is entirely similar to the proof of Proposition 4.2(b).
c) As before, let Q(z) = ∫_x^z exp(−2∫_x^u b(s)/σ²(s) ds) du. Because x is essential, Q cannot be bounded on (c,d) (if it were bounded, then x would be strictly inessential). If b(c) < 0, then Q is bounded from below, so Q(z) → +∞ as z → d, and then x is left inessential. If b(d) > 0, then Q is bounded from above, so Q(z) → −∞ as z → c, and then x is right inessential. So b(c) ≥ 0 and b(d) ≤ 0.
d) If I_x = {x}, then there is nothing to prove. If I_x ≠ {x}, then I_x is an interval, say (c,d), and Q from (c) is not bounded. If Q(y) → +∞ as y → d, then for every z ∈ (c,d), P(liminf_{t→+∞} X^z(t) = c) = 1, so P(liminf_{t→+∞} X^z(t) ≥ y) = 0 for all y > c. If Q(y) → −∞ as y → c, then for all z ∈ (c,d), P(limsup_{t→+∞} X^z(t) = d) = 1, so P(limsup_{t→+∞} X^z(t) ≤ y) = 0 for every y < d. Therefore condition (ii) from Definition 4.5 is satisfied. Condition (i) from Definition 4.5 is satisfied because, by (c), P_x(τ_z < +∞) = 0 for all z ∉ I_x. □

Note that the strictly inessential, right inessential, left inessential and essential points cover all points of R¹. Distinguishing between them requires knowledge of the scale function Q in each equivalence class with respect to "∼".

Proposition 4.4 Suppose that x is not strictly inessential. Then one of the following holds:
a) There exists an essential point z such that P_x(τ_{I_z} < +∞) = 1; and if z₁ and z₂ are two such points, then z₁ ∼ z₂.
b) There exists z such that P(lim_{t→+∞} X^x(t) = z) = 1.

Proof. We will use Assumptions I and II from the beginning of this Section. If x is essential, then there is nothing to prove, because (a) holds. Suppose that x is right inessential. We cannot have x > sup{z : z ∈ A}, because then x would be essential or strictly inessential. Let us assume that either condition (i) or condition (ii) from Assumption I holds. Then there exists z > x such that σ(z) = 0 and b(z) ≤ 0. Let d = inf{z ∈ A : b(z) ≤ 0 and z > x}. Then σ(d) = 0 and b(d) ≤ 0 by continuity of b and σ.
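As the classification above shows, everything is read off from boundedness and limits of the scale function Q. As a brief numerical aside, Q can be evaluated for concrete coefficients; the two drift fields below (with σ ≡ 1) are assumed toy examples, not taken from the text:

```python
import math

def integrate(f, a, b, n):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def scale_function(drift, sigma, x0, z, n_outer=2000, n_inner=200):
    """Q(z) = int_{x0}^{z} exp(-2 int_{x0}^{u} drift(s)/sigma(s)^2 ds) du."""
    def density(u):
        inner = integrate(lambda s: drift(s) / sigma(s) ** 2, x0, u, n_inner)
        return math.exp(-2.0 * inner)
    return integrate(density, x0, z, n_outer)

# Inward drift b(x) = -x: the scale density is exp(u^2), so Q blows up
# in both directions (the points are essential / recurrent behavior).
q_inward = scale_function(lambda x: -x, lambda x: 1.0, 0.0, 3.0)

# Outward drift b(x) = +x: the scale density is exp(-u^2), so Q stays
# bounded (escape in either direction has positive probability).
q_outward = scale_function(lambda x: x, lambda x: 1.0, 0.0, 3.0)
```

Already at z = 3 the value q_inward is above 1400 and still growing roughly like e^{z²}/(2z), while q_outward has essentially saturated at √π/2 ≈ 0.886; bounded versus unbounded Q is exactly the dichotomy the classification rests on.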
Since x is right inessential, it follows from Corollary 4.1 that there exists y > x such that P_x(τ_y < +∞) = 1 and P_y(τ_x < +∞) = 0. Let B = {z ≥ x : P_x(τ_z < +∞) = 1 and P_z(τ_x < +∞) = 0}. We will show that B consists of at least two points. Clearly one of them is the point y which is the upper bound of I_x. Since b(y) > 0 and b is continuous, there exist b̄ > 0 and δ > 0 such that b(u) > b̄ for all u ∈ (y − δ, y + δ). Let b₁(u) = b(u) ∧ b̄, and let Y(t) be the solution of the equation

dY(t) = b₁(Y(t))dt + σ(Y(t))dW(t), Y(0) = u ∈ (y − δ, y + δ).

From the comparison theorem (Theorem 2.5) we have Y(t) ≤ X^u(t) a.s. for all t. Let τ = inf{t : Y(t) ∉ (y − δ, y + δ)}. We will show that E_u τ < +∞. Let f(u) = (y + δ − u)/b̄. Then f(u) > 0 in (y − δ, y + δ) and, since b₁ ≡ b̄ on this interval, b₁(u)f′(u) + ½σ²(u)f″(u) = −1; so E_u τ < +∞ for all u ∈ (y − δ, y + δ). It is clear that τ_{y+δ} ≤ τ, P_u-a.s., for all u ∈ [y, y + δ]. Therefore P_y(τ_u < +∞) = 1 for all u ∈ [y, y + δ), and we have P_x(τ_u < +∞) = P_x(τ_y < +∞)P_y(τ_u < +∞) = 1 and P_u(τ_x < +∞) = P_u(τ_y < +∞)P_y(τ_x < +∞) = 0. So u ∈ B for all u ∈ [y, y + δ); therefore B consists of at least two points.

It is obvious that B is connected: if P_x(τ_y < +∞) = 1, P_y(τ_x < +∞) = 0, P_x(τ_z < +∞) = 1 and P_z(τ_x < +∞) = 0 for some y < z, then P_x(τ_u < +∞) ≥ P_x(τ_z < +∞) = 1 and P_u(τ_x < +∞) ≤ P_y(τ_x < +∞) = 0 for every u such that y ≤ u ≤ z. So B must be an interval.

Let c = sup{z : z ∈ B}. We will show that σ(c) = 0 and b(c) ≤ 0. If σ(c) ≠ 0, then σ(u) ≠ 0 for all u ∈ (c − δ, c + δ) for some δ > 0. Let c₁ = sup{z ≤ c : σ(z) = 0}. We have c₁ < c and b(c₁) > 0, so Lemma 3.1 and Theorem 2.6 imply that limsup_{t→+∞} X^x(t) > c. Therefore we conclude that for u ∈ [c, c + δ) we have P_x(τ_u < +∞) = 1 and P_u(τ_x < +∞) = 0, which contradicts the definition of c. So σ(c) = 0. Assume now that b(c) > 0.
If σ(u) ≠ 0 for u ∈ (c − δ, c) for some δ > 0, then c ∈ B, and by the same argument as above there is δ′ > 0 such that u ∈ B for all u ∈ (c, c + δ′), which is again a contradiction with the definition of c. So assume that there exist z_n such that σ(z_n) = 0 for all n, z_n < z_{n+1}, and lim_{n→∞} z_n = c. Since b(c) > 0, there exists n₀ such that b(z) is bounded away from zero for every z ∈ [z_{n₀}, c]. Let b̄ > 0 be such that b(z) ≥ b̄ for every z ∈ [z_{n₀}, c], and let Y(t) be a solution of the equation

dY(t) = b̄ dt + σ(Y(t))dW(t), Y(0) = z_{n₀+1}.

From the comparison theorem we have Y(t) ≤ X^{z_{n₀+1}}(t) a.s. for all t. Let η = inf{t : Y(t) ∉ (z_{n₀}, c)}. We will show that E_{z_{n₀+1}} η < +∞. Let f(z) = (c − z)/b̄. Then f(z) > 0 in (z_{n₀}, c) and b̄f′(z) + ½σ²(z)f″(z) = −1, so E_{z_{n₀+1}} η < +∞. Since Y(t) ≤ X^{z_{n₀+1}}(t) for all t and E_{z_{n₀+1}} η < +∞, we get P_{z_{n₀+1}}(τ_c < +∞) = 1. Therefore P_x(τ_c < +∞) = 1, and since b(c) > 0, P_x(τ_u < +∞) = 1 for all u ∈ (c, c + δ) for some δ > 0. This is again a contradiction with the definition of c.

So we must have σ(c) = 0 and b(c) ≤ 0, and therefore c = d. Now, if there exists c < d such that σ(c) = 0 and σ(z) ≠ 0 for all z ∈ (c,d), then (c,d) is an interval of essential points and P_x(τ_{(c,d)} < +∞) = 1. If there exists an increasing sequence z_n such that σ(z_n) = 0 for all n and z_n → d, then since P_x(τ_{z_n} < +∞) = 1 for every n, we must have P(lim_{t→+∞} X^x(t) = d) = 1 and b(d) = 0. This completes the proof of the Proposition in the case when (i) or (ii) from Assumption I holds. The proof in the case when (iii) from Assumption I holds is entirely similar and will not be reproduced. The proof for the case when x is left inessential can be carried out similarly. □

Definition 4.6 Let x and y be two points which are not strictly inessential. Let us define the following relation: x ≈ y if and only if one of the following holds:
i) there exists an essential point z such that P_x(τ_{I_z} < +∞) = 1 and P_y(τ_{I_z} < +∞) = 1,
or
ii) there exists z such that P(lim_{t→+∞} X^x(t) = z) = P(lim_{t→+∞} X^y(t) = z) = 1.

Let us show that "≈" defines equivalence classes among all points which are not strictly inessential. It follows from Proposition 4.4 that x ≈ x. It is obvious that if x ≈ y, then y ≈ x. So assume that x ≈ y and y ≈ z. In this case, either
a) there exists an essential point r such that P_x(τ_{I_r} < +∞) = 1 and P_y(τ_{I_r} < +∞) = 1, or
b) there exists a point u such that P(lim_{t→+∞} X^x(t) = u) = P(lim_{t→+∞} X^y(t) = u) = 1.
And similarly for y and z: either
c) there exists an essential point v such that P_y(τ_{I_v} < +∞) = P_z(τ_{I_v} < +∞) = 1, or
d) there exists a point p such that P(lim_{t→+∞} X^y(t) = p) = P(lim_{t→+∞} X^z(t) = p) = 1.

If (a) and (c) occur, then I_r = I_v and x ≈ z. If (a) and (d) occur, then we must show that P(lim_{t→+∞} X^x(t) = p) = 1. Since P_y(τ_{I_r} < +∞) = 1 and P(lim_{t→+∞} X^y(t) = p) = 1, Theorem 2.6 applied to the interval I_r implies that X^u(t) → p a.s. for all u ∈ I_r. Since P_x(τ_{I_r} < +∞) = 1, the Markov property then gives P(lim_{t→+∞} X^x(t) = p) = 1, and x ≈ z. Similarly, if (b) and (c) occur, then x ≈ z. If (b) and (d) occur, then u = p and x ≈ z. Thus "≈" defines equivalence classes among all points which are not strictly inessential.

Now we are ready to state and prove the main result of this section. It should be noted that Theorem 4.1 below deals with the Euclidean distance between two solutions of (5). The results of Remark 4.1 following Theorem 4.1 concern the convergence of the distance between the two solutions of (5) with respect to the scale metric, which is weaker than the Euclidean metric.

Theorem 4.1 Let x and y be two points which are not strictly inessential.
1. If x ≈ y, then one of the following holds:
a) There exists an essential point z such that I_z is a right ray and P_x(τ_{I_z} < +∞) = P_y(τ_{I_z} < +∞) = 1.
b) There exists an essential point z such that I_z is a left ray and P_x(τ_{I_z} < +∞) = P_y(τ_{I_z} < +∞) = 1.
c) I_x = I_y = (c,d) for some finite numbers c and d, and b(c) = b(d) = σ(c) = σ(d) = 0.
d) P(X^x(t) − X^y(t) → 0) = 1.
2. If P(X^y(t) − X^x(t) → 0) = 1, then x ≈ y.

Proof.
1. Assume that x ≈ y. If there exists z such that P(lim_{t→+∞} X^x(t) = z) = P(lim_{t→+∞} X^y(t) = z) = 1, then there is nothing to prove. So suppose that there exists an essential point z such that P_x(τ_{I_z} < +∞) = P_y(τ_{I_z} < +∞) = 1. We need to show that d) occurs when a), b) and c) do not occur. Let τ = inf{t : X^x(t) ∈ I_z}, η = inf{t : X^y(t) ∈ I_z}, and let ν = τ ∨ η (a ∨ b denotes the maximum of a and b). Consider the two dimensional Markov process (X^x(t), X^y(t)), and assume x < y. From CASE 3 of Section 3 and from Lemma 3.3 we conclude that for u, v ∈ I_z and every ε > 0

P(inf_{l>0} sup_{t≥l} (X^v(t) − X^u(t)) ≤ ε) = 1.

From the Markov property for (X^x(t), X^y(t)) we then have, for every ε > 0,

P(inf_{l>ν} sup_{t≥l} (X^y(t) − X^x(t)) ≤ ε) = E[ P(inf_{l>0} sup_{t≥l} (X^v(t) − X^u(t)) ≤ ε) |_{u=X^x(ν), v=X^y(ν)} ] = 1.

Therefore P(inf_{l>0} sup_{t≥l} (X^y(t) − X^x(t)) ≤ ε) = 1 for every ε > 0, so P(lim_{t→+∞}(X^y(t) − X^x(t)) = 0) = 1.

2. Let us now assume that P(lim_{t→+∞}(X^y(t) − X^x(t)) = 0) = 1, where x and y are not strictly inessential. For x we have: either
a) there exists an essential point z such that P_x(τ_{I_z} < +∞) = 1, or
b) there exists a point u such that P(lim_{t→+∞} X^x(t) = u) = 1.
Similarly for y: either
c) there exists an essential point v such that P_y(τ_{I_v} < +∞) = 1, or
d) there exists p such that P(lim_{t→+∞} X^y(t) = p) = 1.

If (b) and (d) occur, then u = p and x ≈ y. Suppose that (a) and (c) occur. If I_z = I_v, then x ≈ y. Suppose that I_z ≠ I_v; then I_z ∩ I_v = ∅. Let Ī_z denote the closure of I_z. Since P(lim_{t→+∞}(X^y(t) − X^x(t)) = 0) = 1, then Ī_z ∩ Ī_v = {r} and

P(lim_{t→+∞} X^x(t) = r) = P(lim_{t→+∞} X^y(t) = r) = 1,

so x ≈ y. If (a) and (d) occur, then p ∈ Ī_z and we must have P(lim_{t→+∞} X^x(t) = p) = 1, so x ≈ y. Similarly for the case when (b) and (c) occur. This completes the proof. □

Remark 4.1 The cases when a), b) or c) from part 1 of Theorem 4.1 occur must be treated separately.
Let us first consider the case when a) occurs. Assume I_z = (c, +∞). If x ∉ I_z or y ∉ I_z, then we conclude from the proof of Proposition 4.4 that b(c) > 0. Assume that Q(+∞) = +∞. By Lemma 3.3, σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)) cannot be periodic, and Theorem 1.3 (together with the Markov property for the two dimensional process (X^x(t), X^y(t))) implies that r(X^x(t), X^y(t)) → 0 a.s. If Q(+∞) < +∞, then X^x(t) → +∞ and X^y(t) → +∞ a.s., and then r(X^x(t), X^y(t)) → 0 a.s. If both x ∈ I_z and y ∈ I_z, then b(c) can be zero. It follows from the discussion of Section 3 that in this case r(X^x(t), X^y(t)) → ξ a.s. for some nonnegative random variable ξ; here ξ = 0 when σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)) is not periodic, and ξ is concentrated on two points when σ₁(y) is periodic. The case when b) occurs can be treated similarly. The case when c) occurs was discussed in Section 3.

5 DIFFUSIONS WITH CONSTANT DRIFT

In this Section we again consider the equation

(6) dX(t) = b(X(t))dt + σ(X(t))dW(t), X(0) = x.

We investigate the situation when we drop Assumption III from Section 1 on the recurrence of the solutions of (6). Let X₁(t) and X₂(t) be two solutions of this equation starting from x₁ and x₂ respectively. Deterministic examples (σ = 0) show that in general X₂(t) − X₁(t) need not have a limit as t → +∞. But if the drift b is nonincreasing, then X₂(t) − X₁(t) → ξ ≥ 0. We will consider the special case b(u) = c, where c is a positive constant; that is, we consider the stochastic differential equation

(7) dX(t) = c dt + σ(X(t))dW(t),

where c > 0 is a constant and σ satisfies conditions which guarantee existence and uniqueness of the solutions of (7). We would like to investigate the limit of the difference of two solutions starting at two different points. Let X₁(t) and X₂(t) be two solutions of (7) starting at x₁ and x₂ (x₂ > x₁) respectively:

X₂(t) = x₂ + ct + ∫₀^t σ(X₂(s))dW(s),
X₁(t) = x₁ + ct + ∫₀^t σ(X₁(s))dW(s).
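The pair (X₁, X₂), driven by the same Brownian motion, is easy to simulate by the Euler-Maruyama scheme. A minimal sketch; the diffusion coefficient σ(x) = √(1 + |x|) below is an assumed concave example, not one fixed by the text:

```python
import math
import random

def final_gap(x1, x2, c, sigma, rng, T=50.0, dt=0.01):
    """Euler-Maruyama for dX = c dt + sigma(X) dW, with BOTH solutions
    driven by the same Brownian increments; returns X2(T) - X1(T)."""
    n = int(T / dt)
    step = math.sqrt(dt)
    a, b = x1, x2
    for _ in range(n):
        dw = rng.gauss(0.0, step)   # shared noise increment
        a += c * dt + sigma(a) * dw
        b += c * dt + sigma(b) * dw
    return b - a

sigma = lambda x: math.sqrt(1.0 + abs(x))   # assumed concave, unbounded example
rng = random.Random(42)
finals = [final_gap(1.0, 2.0, 1.0, sigma, rng) for _ in range(200)]
median_gap = sorted(finals)[len(finals) // 2]
```

With shared increments the simulated difference stays strictly positive on every path (the comparison theorem), and its empirical median falls below the initial gap of 1, consistent with the supermartingale behaviour of the difference.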
Then Y(t) = X₂(t) − X₁(t) = x₂ − x₁ + ∫₀^t [σ(X₂(s)) − σ(X₁(s))]dW(s) is a positive supermartingale, and therefore there is a random variable ξ ≥ 0 such that X₂(t) − X₁(t) → ξ a.s. Our problem is to determine for which σ the limit ξ = 0.

We consider first the case when the solutions of (7) are recurrent in some interval (d, +∞), where d = sup{x : σ(x) = 0}; we therefore assume that σ(x) does not vanish for x large enough. Recall that

Q(x) = ∫_{x₀}^x exp(−2c∫_{x₀}^u σ⁻²(s)ds) du, x₀ ∈ (d, +∞).

By Theorem 2.6 we have Q(+∞) = +∞, and therefore σ must be unbounded. We may now reproduce the proof given in [8] to conclude that if X₂(t) − X₁(t) tends to a nonzero limit, then σ must be periodic. But if σ is periodic, then it is bounded. Therefore, if all solutions of (7) are recurrent in the interval (d, +∞), then lim_{t→+∞}(X₂(t) − X₁(t)) = 0.

We now proceed to the case when all solutions of (7) are transient. We will make the following assumptions:
I. σ is differentiable, and there exists a₁ such that σ′ is monotone for x > a₁ (that is, σ is either convex or concave there).
II. All solutions of (7) tend to +∞ as t → +∞.

Assumption II is equivalent to the assumption that all solutions of (7) are transient. From Assumption I it follows that there exists b such that |σ(x)| > 0 for x > b. Without loss of generality we may assume that b = −1 and σ(x) > 0 for x > −1, so we can define a scale function Q(x) = ∫_{−1}^x exp(−2c∫_{−1}^u σ⁻²(s)ds) du. From Assumption II and Theorem 2.6 it follows that Q(x) has a finite limit as x → +∞. Let us denote this limit by d; that is, Q(x) → d as x → +∞.

We will need the following lemma:

Lemma 5.1 If σ is locally Lipschitz continuous, then X₁(t) ≠ X₂(t) for all t a.s.

Proof. Let τ_n = inf{t : X₂(t) − X₁(t) = 1/n} and τ = inf{t : X₂(t) − X₁(t) = 0}. Assume P(τ < +∞) > 0.
Following [3] we see that

X₂(t∧τ_n) − X₁(t∧τ_n) = (x₂ − x₁) exp{∫₀^{t∧τ_n} ρ(s)dW(s) − ½∫₀^{t∧τ_n} ρ²(s)ds},

where ρ(s) = (σ(X₂(s)) − σ(X₁(s)))/(X₂(s) − X₁(s)). Letting n → +∞ we have, for t < τ (see also [6]),

X₂(t) − X₁(t) = (x₂ − x₁) exp{∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds}.

On the set {τ < +∞}, lim_{t→τ} ∫₀^t ρ²(s)ds exists and is finite, because ρ²(s) ≤ L², where L is the Lipschitz constant of σ. Therefore (Lemma 2.1) lim_{t→τ} ∫₀^t ρ(s)dW(s) exists and is finite. This is of course a contradiction, because ∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds should tend to −∞, since X₂(t) − X₁(t) → 0 as t → τ on the set {τ < +∞}. □

The main result of this Section is the following theorem:

Theorem 5.1 Under Assumptions I and II, X₂(t) − X₁(t) → 0 if and only if ∫^{+∞}[σ′(u)]²du = +∞.

Proof of Theorem 5.1. Because of Lemma 5.1, and since X₂(t) − X₁(t) is a positive local martingale, we have ([6], see also [3])

X₂(t) − X₁(t) = (x₂ − x₁) exp{∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds},

where ρ(s) = (σ(X₂(s)) − σ(X₁(s)))/(X₂(s) − X₁(s)). From Assumption I and Lemma 5.1 we conclude that X₂(t) − X₁(t) > 0 a.s. for all t. Therefore X₂(t) − X₁(t) → 0 a.s. if and only if

∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds

tends to −∞ a.s., which happens if and only if ∫₀^{+∞} ρ²(s)ds = +∞ a.s. Indeed, on the set {ω : ∫₀^{+∞} ρ²(s)ds < +∞}, lim_{t→+∞} ∫₀^t ρ(s)dW(s) exists and is finite (Lemma 2.1); therefore on this set X₂(t) − X₁(t) does not tend to 0. If ∫₀^{+∞} ρ²(s)ds = +∞ a.s., then let us define τ_t = inf{u : ∫₀^u ρ²(s)ds > t}.
Then from Theorem 2.2 we conclude that M(t) = ∫₀^{τ_t} ρ(s)dW(s), where ρ(s) = (σ(X₂(s)) − σ(X₁(s)))/(X₂(s) − X₁(s)), is a new Brownian motion, and therefore

M(t) − ½∫₀^{τ_t} ρ²(s)ds = M(t) − ½t → −∞ a.s.

Since lim_{t→+∞}(X₂(t) − X₁(t)) exists, the limit of ∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds as t → +∞ exists too. Since ∫₀^{τ_t} ρ(s)dW(s) − ½t → −∞, it follows that almost surely ∫₀^t ρ(s)dW(s) − ½∫₀^t ρ²(s)ds → −∞. Therefore X₂(t) − X₁(t) → 0 a.s. if and only if ∫₀^t ρ²(s)ds → +∞.

Now we need to show that, under Assumptions I and II, ∫^{+∞} ρ²(s)ds = +∞ if ∫^{+∞}[σ′(u)]²du = +∞, and ∫^{+∞} ρ²(s)ds < +∞ if ∫^{+∞}[σ′(u)]²du < +∞.

Let us first assume that σ is bounded. Then from Assumptions I and II it follows that |σ′(x)| is decreasing for x large enough, because otherwise σ would grow faster than a linear function, and then Q(x) would tend to +∞ as x → +∞. It is known ([2]) that X₁(t)/t → c a.s., so for t > T_ε(ω) we have (c − ε)t ≤ X₁(t) ≤ X₂(t) ≤ (c + ε)t. From Assumption II it also follows that there is T_{a₁}(ω) such that X₂(t) > a₁ and X₁(t) > a₁ for all t > T_{a₁}(ω). Therefore, from Assumption I and the mean value theorem,

∫_{T_ε∨T_{a₁}}^{+∞} [σ′((c + ε)s)]²ds ≤ ∫_{T_ε∨T_{a₁}}^{+∞} ρ²(s)ds ≤ ∫_{T_ε∨T_{a₁}}^{+∞} [σ′((c − ε)s)]²ds,

and after the substitution u = (c ± ε)s both bounds converge or diverge together with ∫^{+∞}[σ′(u)]²du. This settles the case of bounded σ.

Now consider the case when σ is unbounded. Then there exists x₀ such that σ′(x) > 0 and σ′(x) is decreasing for x > x₀. Indeed, since σ(x) is convex or concave for large x, σ(x) is monotone for large x, and since it is unbounded it must be increasing. Next, if σ were convex, it would grow at least as fast as a linear function, and then Q(x) → +∞, which contradicts Assumption II. Therefore σ′(x) ≥ 0 and σ′ is decreasing for x large enough, so σ is concave there. Let T_{x₀} = inf{t : X₂(t) ≥ x₀ and X₁(t) ≥ x₀}.
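The dichotomy of Theorem 5.1 can be checked numerically on concrete coefficients. Both diffusion coefficients below are assumed examples (each satisfies the transience setup for c = 1): for σ(u) = √(1+u) the integral ∫₀^r [σ′(u)]²du = ¼ln(1+r) diverges, while for the bounded σ(u) = 2 + arctan(u) it converges to π/4:

```python
import math

def integrate(f, a, b, n=200000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def criterion(sigma_prime, r):
    """Truncation of Theorem 5.1's integral: int_0^r [sigma'(u)]^2 du."""
    return integrate(lambda u: sigma_prime(u) ** 2, 0.0, r)

# sigma(u) = sqrt(1 + u):  sigma'(u) = 1/(2 sqrt(1+u)); integral = (1/4) ln(1+r)
slow_decay = lambda u: 0.5 / math.sqrt(1.0 + u)

# sigma(u) = 2 + atan(u):  sigma'(u) = 1/(1 + u^2); integral tends to pi/4
fast_decay = lambda u: 1.0 / (1.0 + u * u)
```

Here criterion(slow_decay, r) keeps growing with r, so Theorem 5.1 gives X₂(t) − X₁(t) → 0, while criterion(fast_decay, r) stalls near π/4 ≈ 0.785, in which case the difference does not tend to zero.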
Since, by the mean value theorem and the concavity of σ,

∫_{T_{x₀}}^{+∞} [σ′(X₂(s))]² ds ≤ ∫_{T_{x₀}}^{+∞} ((σ(X₂(s)) − σ(X₁(s)))/(X₂(s) − X₁(s)))² ds ≤ ∫_{T_{x₀}}^{+∞} [σ′(X₁(s))]² ds,

we need to investigate the convergence of ∫^{+∞} [σ′(X(s))]² ds, where X(s) is a solution of (7) starting at x > −1.

Let Y(t) = Q(X(t)), so that Y(t) is a process in natural scale and (see Section 1)

dY(t) = σ₁(Y(t))dW(t), where σ₁(y) = σ(Q⁻¹(y))Q′(Q⁻¹(y)) and Y(0) = y₀ = Q(x).

From Theorem 2.3 we conclude that there exists a Brownian motion B(t) starting at y₀, on some natural extension (see Definition 2.7) of our basic probability space, such that Y(t) = B(⟨Y⟩(t)). Let M(t) = Y(t) − y₀ and γ_t = ⟨M⟩(t). It is known that γ_t = ⟨Y⟩(t) = ∫₀^t σ₁²(Y(s))ds. Let A_t be the inverse function of γ_t. Then

A_t = ∫₀^t du/σ₁²(Y(A_u)),

and from Theorem 2.3 we see that Y(A_u) = B(u); therefore

A_t = ∫₀^t du/σ₁²(B(u)).

So we have shown that Y(t) can be represented as B(γ_t), where γ_t is the inverse function of A_t = ∫₀^t du/σ₁²(B(u)) and B(s) is a Brownian motion starting at y₀ = Q(x). Let τ_d = inf{s : B(s) = d}. We have:

∫₀^{+∞} [σ′(X(s))]² ds = ∫₀^{+∞} [σ′(Q⁻¹(B(γ_s)))]² ds = ∫₀^{τ_d} [σ′(Q⁻¹(B(u)))]²/σ₁²(B(u)) du = ∫₀^{τ_d} [σ′(Q⁻¹(B(u)))]² / (σ²(Q⁻¹(B(u)))[Q′(Q⁻¹(B(u)))]²) du.

Let

F(x) = [σ′(Q⁻¹(x))]² / (σ²(Q⁻¹(x))[Q′(Q⁻¹(x))]²).

We will need the following lemma, which follows from ([1]):

Lemma 5.2 Let d > 0 and let F : (−∞, d) → R be nonnegative and continuous. Let τ_d = inf{t : B(t) = d}, where B(s) is a Brownian motion starting at x < d. Then ∫₀^{τ_d} F(B(u))du < +∞ a.s. if and only if ∫₀^d F(y)(d − y)dy < +∞; and if ∫₀^d F(y)(d − y)dy = +∞, then ∫₀^{τ_d} F(B(u))du = +∞ a.s.

Let us give the proof of this lemma.

Proof. Let B(u) be a Brownian motion starting at 0. It is known that

∫₀^{τ_{d−x}} F(x + B(u))du = ∫_{−∞}^{d−x} F(x + y) l^y_{τ_{d−x}} dy,

where l^y_t is the local time of the Brownian motion at level y. We have:

∫_{−∞}^{d−x} F(x + y) l^y_{τ_{d−x}} dy = ∫_{−∞}^{−x} F(x + y) l^y_{τ_{d−x}} dy + ∫_{−x}^{d−x} F(x + y) l^y_{τ_{d−x}} dy = ∫_{−∞}^{0} F(z) l^{z−x}_{τ_{d−x}} dz + ∫₀^{d} F(z) l^{z−x}_{τ_{d−x}} dz.

Since l^{z−x}_{τ_{d−x}} is continuous in z and almost surely inf_{u≤τ_{d−x}} B(u) > −∞, we have ∫_{−∞}^{0} F(z) l^{z−x}_{τ_{d−x}} dz < +∞ a.s., so we have to investigate the convergence of ∫₀^{d} F(z) l^{z−x}_{τ_{d−x}} dz. It is known ([11]) that

l^{z−x}_{τ_{d−x}} = ½(W₁²(d − x − (z − x)) + W₂²(d − x − (z − x))) = ½(W₁²(d − z) + W₂²(d − z)),

where W₁ and W₂ are two independent Brownian motions starting at 0. Therefore our problem is to determine for which functions F

∫₀^d F(z)W²(d − z)dz = ∫₀^d F(d − t)W²(t)dt

is finite a.s., where W is a Brownian motion. First, it follows from Blumenthal's 0-1 law that

P(∫₀^d F(d − t)W²(t)dt < +∞) = 0 or 1.

Now assume that ∫₀^d F(d − t)t dt < +∞. Then

E ∫₀^d F(d − t)W²(t)dt = ∫₀^d F(d − t)t dt < +∞,

which implies P(∫₀^d F(d − t)W²(t)dt < +∞) = 1. Assume now that P(∫₀^d F(d − t)W²(t)dt < +∞) = 1. Let H = {g : [0, d] → R : ∫₀^d F(d − t)g²(t)dt < +∞}. Then H is a Hilbert space with the inner product (g, h) = ∫₀^d F(d − t)g(t)h(t)dt. If ∫₀^d F(d − t)W²(t)dt < +∞, then W(t) defines a Gaussian random element W with values in H, and therefore

E‖W‖² = E ∫₀^d F(d − t)W²(t)dt = ∫₀^d F(d − t)t dt < +∞,

which completes the proof of the lemma. □

From this lemma it follows that we need to check whether the integral

∫₀^d ([σ′(Q⁻¹(x))]² / (σ²(Q⁻¹(x))[Q′(Q⁻¹(x))]²))(d − x)dx

is finite. Substituting x = Q(y), we have:

∫₀^d ([σ′(Q⁻¹(x))]² / (σ²(Q⁻¹(x))[Q′(Q⁻¹(x))]²))(d − x)dx = ∫₀^{+∞} ([σ′(y)]²/(σ²(y)Q′(y)))(d − Q(y))dy = ∫₀^{+∞} ([σ′(y)]²/σ²(y)) e^{2c∫₀^y du/σ²(u)} ∫_y^{+∞} e^{−2c∫₀^z du/σ²(u)} dz dy.

Assume first that ∫₀^{+∞} (σ′(y))²dy < +∞. Then σ(y)σ′(y) → 0 as y → +∞. Indeed, first we show that liminf_{y→+∞} σ(y)σ′(y) = 0. Suppose this is not the case. Then there are ε > 0 and y₁ such that σ(y)σ′(y) ≥ ε for all y > y₁, and hence (σ′(y))² ≥ ε²/σ²(y) for y ≥ y₁. But ∫_{y₁}^{+∞} dy/σ²(y) = +∞ because of Assumption II, so ∫_{y₁}^{+∞} (σ′(y))²dy = +∞, which is a contradiction with ∫₀^{+∞} (σ′(y))²dy < +∞. Hence liminf_{y→+∞} σ(y)σ′(y) = 0.
Next we have: for every ε > 0 there is y₀ such that ∫_y^z (σ′(u))²du ≤ ε for all z ≥ y ≥ y₀. Since σ′ is decreasing, integration by parts gives

∫_y^z (σ′(u))²du = σ(z)σ′(z) − σ(y)σ′(y) − ∫_y^z σ(u)dσ′(u) ≥ σ(z)σ′(z) − σ(y)σ′(y).

Choose y ≥ y₀ with σ(y)σ′(y) ≤ ε. Then σ(z)σ′(z) ≤ 2ε for all z ≥ y, so σ(z)σ′(z) → 0 as z → +∞. It follows now from l'Hôpital's rule that

lim_{y→+∞} (1/σ²(y)) e^{2c∫₀^y du/σ²(u)} ∫_y^{+∞} e^{−2c∫₀^z du/σ²(u)} dz = lim_{y→+∞} (−e^{−2c∫₀^y du/σ²(u)}) / ((2σ(y)σ′(y) − 2c) e^{−2c∫₀^y du/σ²(u)}) = lim_{y→+∞} 1/(2c − 2σ(y)σ′(y)) = 1/(2c).

Therefore, if ∫₀^{+∞}(σ′(y))²dy < +∞, then

∫₀^{+∞} ([σ′(y)]²/σ²(y)) e^{2c∫₀^y du/σ²(u)} ∫_y^{+∞} e^{−2c∫₀^z du/σ²(u)} dz dy < +∞.

Assume now that ∫₀^{+∞}(σ′(y))²dy = +∞, and set G(y) = e^{2c∫₀^y du/σ²(u)} ∫_y^{+∞} e^{−2c∫₀^z du/σ²(u)} dz (replacing 0 by a larger base point if necessary, we may assume that σ′ ≥ 0 and σ″ ≤ 0 on [0, +∞)). It is easy to see that

G′(y) = (2c/σ²(y))G(y) − 1, so that G(y)/σ²(y) = (G′(y) + 1)/(2c).

Therefore we have, by integration by parts,

∫₀^a (σ′(y))² (G(y)/σ²(y)) dy = (1/(2c))(∫₀^a (σ′(y))² G′(y)dy + ∫₀^a (σ′(y))²dy)

and

∫₀^a (σ′(y))² G′(y)dy = (σ′(a))²G(a) − (σ′(0))²G(0) − 2∫₀^a σ′(y)σ″(y)G(y)dy ≥ −(σ′(0))²G(0),

because σ′ ≥ 0, σ″ ≤ 0, G ≥ 0 and G(0) < +∞. Hence

∫₀^a (σ′(y))² (G(y)/σ²(y)) dy ≥ (1/(2c))(∫₀^a (σ′(y))²dy − (σ′(0))²G(0)) → +∞ as a → +∞.

Therefore, if ∫₀^{+∞}(σ′(y))²dy = +∞, then

∫₀^{+∞} ([σ′(y)]²/σ²(y)) e^{2c∫₀^y du/σ²(u)} ∫_y^{+∞} e^{−2c∫₀^z du/σ²(u)} dz dy = +∞,

which completes the proof of the Theorem. □

The following example shows that two solutions of (7) can hit each other with positive probability when σ is not Lipschitz continuous.

Example 5.1 Let W(s) denote a Brownian motion starting at 0, and let B(t) = ∫₀^t sgn(W(s))dW(s). Consider the stochastic differential equation

(8) dX(t) = dt + 2√(X(t)) dB(t), X(0) = x.

Let η = inf{t : X^x(t) = 0}. Theorem 2.7 implies that P_x(η < +∞) = 1 for all x ≥ 0. Consider two solutions X₁(t) and X₂(t) starting at x₁ and x₂ respectively (x₂ > x₁). The comparison theorem (Theorem 2.5) implies that X₂(t) ≥ X₁(t) for all t a.s.
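The reachability of 0 just asserted, P_x(η < +∞) = 1, is easy to see in simulation through the explicit form of the solution derived below: before reaching 0, X(t) = (√x + B(t))², so X hits 0 exactly when B first hits −√x. A sketch (the horizon T and step size are arbitrary choices):

```python
import math
import random

def hit_time_of_zero(x0, rng, T=100.0, dt=0.01):
    """Simulate X(t) = (sqrt(x0) + B(t))**2, which solves (8) before X
    reaches 0, and return the first time B hits -sqrt(x0) (i.e. X hits 0),
    or None if that does not happen before the horizon T."""
    level = -math.sqrt(x0)
    b, t, step = 0.0, 0.0, math.sqrt(dt)
    while t < T:
        b += rng.gauss(0.0, step)
        t += dt
        if b <= level:
            return t
    return None

rng = random.Random(7)
hits = [hit_time_of_zero(1.0, rng) for _ in range(300)]
frac_hit = sum(h is not None for h in hits) / len(hits)
```

For x = 1 the exact probability of hitting 0 before T = 100 is 2Φ(−1/√100) ≈ 0.92, so the empirical fraction comes out well above 0.8, and it tends to 1 as T grows, matching P_x(η < +∞) = 1.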
Since P_{x₂}(η < +∞) = 1 and X₁(t) ≥ 0 a.s., it follows that τ = inf{t : X₂(t) = X₁(t)} < +∞ a.s. In fact, we can solve equation (8) (see [7], Exercise 5.35). The solution of (8) with the initial condition X(0) = x is given by

X(t) = (√x + B(t))² if t < τ_{√x}, and X(t) = W²(t − τ_{√x}) if t ≥ τ_{√x},

where τ_{√x} = inf{t : B(t) = −√x}.

6 SUMMARY

In this dissertation we present some results concerning the stability and asymptotic stability of the solutions of stochastic differential equations. These are in most cases extensions of the work of Khasminskii and Nevelson ([8]). Unlike the case of equilibrium points, the stability properties of arbitrary solutions of stochastic differential equations are not thoroughly studied; for equilibrium points there is a well developed theory created by Khasminskii and others (see [9]).

In Section 3 we analyzed the behavior of solutions on a finite interval. Most facts presented there are well known, and we treated them as an introduction to Sections 4 and 5.

In Section 4 we investigated the stability properties of the solutions of stochastic differential equations in which σ is allowed to vanish, which was not allowed by Khasminskii and Nevelson (see [8]). We showed that a classification of the points of the real line is necessary, and we established stability results for the various cases. It should be noted that most of these results are formulated in terms of convergence in the Euclidean metric, which is stronger than the scale metric considered by Khasminskii and Nevelson.

In Section 5 we treated the case when the drift coefficient b is constant. In this case the recurrence property of solutions from [8] may be violated, but the limit of the difference of two solutions still exists. Under some additional assumptions we showed that the differences converge to zero almost surely. We believe that these additional assumptions may be relaxed in the future, and that a similar analysis may be carried out without them.
There is a variety of additional problems connected with the discussion of Section 5. One interesting problem is to give conditions on the drift and diffusion coefficients b and σ under which two solutions starting from two different points never hit each other (with probability one). More specifically, consider the stochastic differential equation

dX(t) = b(X(t))dt + σ(X(t))dW(t),

and let X₁(t) and X₂(t) denote two solutions starting from two different points. The problem is to give conditions on b and σ under which τ = inf{t : X₁(t) = X₂(t)} is almost surely infinite. Another problem is to determine when X₂(t) − X₁(t) → 0 (a.s.) when the drift coefficient is decreasing and non-constant, without further restrictions on σ.

The discussion of this dissertation does not cover the case when the coefficients of the stochastic differential equation are time-dependent. We point out that in this case questions similar to those considered in this dissertation may be posed. The main difficulty in analyzing such problems is to find an analogue of the scale function Q. When the coefficients of the stochastic differential equation do not depend on time, the scale function "removes" the drift, so that martingale theory can be applied. Therefore there is hope that once an analogue of the scale function for the time dependent case is introduced, results similar to those obtained in this dissertation can be established.

The methods developed and used in this dissertation do not apply directly to higher dimensional systems, since they rely on the linear ordering of R¹. Other methods are needed to study higher dimensional problems.

References

[1] H. Ezawa, J.R. Klauder, L.A. Shepp: "On the divergence of certain integrals of the Wiener process", Ann. Inst. Fourier, Grenoble 24, 2 (1974), 189-193.

[2] I.I. Gihman, A.V. Skorokhod: "Stochastic differential equations", Springer-Verlag, Berlin, 1972.
[3] I.I. Gihman, A.V. Skorokhod: "Stochastic differential equations and their applications", Naukova Dumka, Kiev, 1982 (in Russian).

[4] N. Ikeda, S. Watanabe: "Stochastic differential equations and diffusion processes", North-Holland, 1981.

[5] K. Itô, H.P. McKean: "Diffusion processes and their sample paths", Springer-Verlag, 1965.

[6] G. Kallianpur: "Stochastic filtering theory", Springer-Verlag New York Inc., 1980.

[7] I. Karatzas, S.E. Shreve: "Brownian Motion and Stochastic Calculus", Springer-Verlag New York Inc., 1991.

[8] R.Z. Khasminskii, M.B. Nevelson: "On stability of solutions of one dimensional stochastic equations", Soviet Math. Dokl., vol. 12 (1971), no. 5, pp. 1492-1496.

[9] R.Z. Khasminskii: "Stochastic stability of differential equations", Sijthoff and Noordhoff, 1980.

[10] M. Loève: "Probability Theory", part II, Springer-Verlag, 1978.

[11] D. Ray: "Sojourn times of diffusion processes", Illinois J. Math. 7 (1963), 615-630.

[12] L.C.G. Rogers, D. Williams: "Diffusions, Markov processes and martingales", John Wiley & Sons Ltd., 1987.

[13] A.V. Skorokhod: "Studies in the theory of random processes", Addison-Wesley, 1965.