This is to certify that the dissertation entitled "Stochastic Differential Equations and their Numerical Approximations" presented by Liying Huang has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mathematics.

LIBRARY, Michigan State University

Stochastic Differential Equations and their Numerical Approximations

By Liying Huang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Mathematics

1995

ABSTRACT

Stochastic Differential Equations and their Numerical Approximations

By Liying Huang

This thesis is on numerical methods for Fokker-Planck equations, especially those equations with degenerate diffusion coefficients. Emphasis is placed on the two-dimensional case, which corresponds to second order stochastic differential equations.
First, Gauss-Galerkin/finite difference methods are proposed. To solve two-dimensional Fokker-Planck equations, which involve two spatial variables and one time variable, the idea of variable splitting is adopted. More specifically, finite difference methods are utilized in one of the spatial variables and Gauss-Galerkin methods are employed in the other. As a consequence, the Gauss-Galerkin approach is used only in one dimension at each time step.

After two-dimensional Fokker-Planck equations are discretized to semi-discrete equations by finite difference methods in one direction, the convergence of the difference approximation is established based on energy type estimates. Using the theory of measures and moments, the Gauss-Galerkin approximation for the semi-discrete equations is shown to converge when the one-dimensional Gauss-Galerkin approximation is used in the second direction. Combining the above results, the convergence of the Gauss-Galerkin/finite difference approximation is established.

Computer implementation and intensive numerical tests of Gauss-Galerkin/finite difference methods are then carried out. The methods appear very efficient and very accurate.

To the loving memory of my mother

ACKNOWLEDGMENTS

I would like to thank Professor David H. Y. Yen, my dissertation advisor, for his constant encouragement and support during my graduate study at Michigan State University. I would also like to thank him for suggesting the problem and for his helpful directions.

I would like to thank my other dissertation committee members, Professor Qiang Du, Professor T. Y. Li, Professor Habib Salehi, and Professor ZhengFang Zhou, for their valuable suggestions and time.

Contents

1 Introduction
2 Basic definitions
  2.1 Random variables and stochastic processes
  2.2 Transition probability density
  2.3 Markov process and Chapman-Kolmogorov equations
  2.4 Wiener process, Gaussian distribution
3 Stochastic differential equations
  3.1 First order stochastic differential equations
  3.2 Diffusion process
  3.3 Transformation of second order stochastic differential equations
4 Approximations of Fokker-Planck equations
  4.1 Brief introduction
  4.2 Iterative methods
5 Gauss-Galerkin methods
  5.1 Fokker-Planck equations for nonlinear oscillations
  5.2 Model equations
  5.3 Difference approximation in y
  5.4 Gauss-Galerkin methods
  5.5 Some useful results
  5.6 Existence theorems for Gauss-Galerkin approximations
6 Convergence of Gauss-Galerkin approximation
  6.1 Outline
  6.2 Convergence of the semi-discrete difference approximation
  6.3 Convergence of the Gauss-Galerkin approximations
7 Application to nonlinear oscillation of second order
  7.1 Formulation
  7.2 Convergence of the semi-discrete difference approximation
  7.3 Convergence of the Gauss-Galerkin approximations
8 Computational algorithms
  8.1 Runge-Kutta and multi-step methods
  8.2 Predictor-corrector scheme via moments
9 Implementation
  9.1 Description of the algorithm
  9.2 Model test problem 1
  9.3 Model test problem 2
  9.4 Model test problem 3
10 Discussions and Concluding Remarks
  10.1 Possible generalization of Gauss-Galerkin methods
  10.2 Simulation of nonlinear oscillation model
  10.3 Other applications

List of Tables

1  Gauss-Galerkin/finite difference solution: Three nodes in y direction. 0, A, and t are symbols for nodes. (see Figure 2)
2  Gauss-Galerkin/finite difference solution: Three nodes in y direction.
3  Gauss-Galerkin/finite difference solution: Three nodes in y direction.
4  Difference solution: 20 grid points in y direction.
5  Difference solution: 40 grid points in y direction.
6  Gauss-Galerkin/finite difference solution: The values of numerical moments of order zero to three along x = 0.0.
7  Gauss-Galerkin/finite difference solution: The values of numerical moments of order four to seven along x = 0.0.
8  Gauss-Galerkin/finite difference solution: At x = 0.0, difference between numerical moments and exact moments of order zero to three.
9  Gauss-Galerkin/finite difference solution: At x = 0.0, difference between numerical moments and exact moments of order four to seven.
10 Gauss-Galerkin/finite difference: Maximum of the time-difference of the numerical moments.
11 Gauss-Galerkin/finite difference solution: At x = 0.0, there are four nodes in the y-direction.
12 At t = 12.0, the zero-th order exact and numerical moments and the errors. (x = -4.0 to -0.2)
13 At t = 12.0, the zero-th order exact and numerical moments and the errors. (x = 0.0 to 4.0)

List of Figures

1  Algorithm for the construction of the tridiagonal matrix.
2  Gauss-Galerkin/finite-difference solution: Movement of nodes to the boundary 0.0 and 1.0 as t → ∞.
3  Difference solution at t = 0.0 and 30.0: 20 grid points in y.
4  Difference solution at t = 0.0 and 30.0: 40 grid points in y.
5  Exact solution with the Dirac-delta initial condition.
6  Exact moments with the Dirac-delta initial condition.
7  Exact solution for t = 0.1.
8  Exact solution for t = 5.21.
9  Exact solution for t = 12.0.
10 Comparison of the moments when t = 5.21, x ∈ [-2.0, 2.0].
11 Comparison of the moments when t = 5.21, x ∈ [-4.0, 4.0].
12 Comparison of the moments when t = 12.0, x ∈ [-4.0, 4.0].
13 Comparison of the moments when t = 12.0, x ∈ [-6.0, 6.0].
14 Flow chart for Gauss-Galerkin/finite difference method.

1 Introduction

This thesis is on numerical methods for Fokker-Planck equations, especially those equations with degenerate diffusion coefficients. It has three major parts:

1) Study of various settings in which Fokker-Planck equations are formulated;

2) Derivation and convergence analysis of Gauss-Galerkin/finite difference methods for two-dimensional Fokker-Planck equations;

3) Numerical implementation of Gauss-Galerkin/finite difference methods and the application of these algorithms to problems in nonlinear random oscillations governed by second order stochastic differential equations.

The first part of this thesis provides a study of Fokker-Planck equations corresponding to second order differential equations excited by white noise, together with various solution techniques such as the iterative methods. Fokker-Planck equations, which are defined in unbounded domains with unbounded coefficients, are reduced to equations defined in bounded domains with bounded coefficients at finite time. This reduction lays the groundwork for efficient numerical approximations.

In the second part, Gauss-Galerkin/finite difference methods for solving two-dimensional Fokker-Planck equations numerically are proposed. These generalize the results of one-dimensional Gauss-Galerkin methods originally given by Dawson [5].
The basic idea of one-dimensional Gauss-Galerkin methods is that discrete measures are constructed for the approximation of the expected values, and the Gauss quadrature formula is used to find the nodes and the weights in the numerical computation. To deal with a two-dimensional problem, the idea of variable splitting is adopted. More specifically, finite difference methods are first utilized in one variable and Gauss-Galerkin methods are employed in the other. As a consequence, the Gauss-Galerkin approach is used only in one dimension at each time step.

After two-dimensional Fokker-Planck equations are discretized to semi-discrete equations by finite difference methods in one direction, the convergence of the difference approximation is established based on energy type estimates. Using theories of measures and moments, the Gauss-Galerkin approximation for semi-discrete equations is shown to converge when the one-dimensional Gauss-Galerkin approximation is used in the second direction. Combining the above results, the convergence of the Gauss-Galerkin/finite difference approximation is derived. The analysis is much more rigorous and general than the corresponding results for one-dimensional Gauss-Galerkin methods previously obtained by Abrouk, Dawson, HajJafar, Salehi and Yen [1, 5, 6].

In the third part, computer implementation and intensive numerical tests of Gauss-Galerkin/finite difference methods are carried out. In order to find nodes and weights from moments in one-dimensional Gauss-Galerkin methods, an implementation different from those by Abrouk and HajJafar [1, 6] is used, which appears to be very efficient and very accurate. The coupling of the difference approximation with the Gauss-Galerkin approach is used successfully for the two-dimensional case. Numerical results on several test problems are also presented.
For the model test problem 1, we compare the numerical solution based on the Gauss-Galerkin/finite difference method with that based on the conventional finite difference method. It is demonstrated that the Gauss-Galerkin/finite difference methods appear superior to the two-dimensional finite difference method in achieving high accuracy. For the test problem 2, the exact solution is known analytically. This, therefore, allows us to compare the numerical solution with the exact solution. For the test problem 3, a simple second order linear random oscillation problem is formulated in terms of its Fokker-Planck equation, and the partial differential equation is solved by the Gauss-Galerkin/finite difference methods. Graphics plots are provided to show various behaviors of the exact solution and the numerical solution.

We now outline the remainder of the thesis. In Chapter 2, some basic definitions related to stochastic processes are given. In Chapter 3, we discuss the first and the second order stochastic differential equations and their corresponding Fokker-Planck equations. In Chapter 4, some general approximation methods for Fokker-Planck equations are discussed. In Chapter 5, some model problems and their Gauss-Galerkin/finite difference approximations are presented. Chapter 6 is devoted to the convergence analysis of the two-dimensional Gauss-Galerkin/finite difference method. In Chapter 7, the convergence analysis of the Gauss-Galerkin/finite difference method is extended to a second order nonlinear random oscillation problem. In Chapters 8 and 9, computational algorithms and their implementations are presented. Discussions on future work and some concluding remarks are given in Chapter 10.

2 Basic definitions

In this chapter, we introduce some definitions and notations needed for our work. Some results from stochastic analysis are also presented.
2.1 Random variables and stochastic processes

Probability theory pertains to the various possible outcomes that might be obtained and the possible events that might occur when an experiment is performed. The collection of all possible outcomes of an experiment is called a sample space of the experiment, denoted by $\Omega$. An event $E$ can be regarded as a certain subset of possible outcomes in the space.

In defining a random variable we proceed in accordance with a probability measure: with each elementary event $e$ we associate a certain number $X = f(e)$. We say that $X$ is a random variable if the function $f(e)$ is measurable relative to the probability measure. In other words, we demand that for each value of $x$ ($-\infty < x < +\infty$) the set $A_x$ of those $e$ for which $f(e) < x$ should belong to the set of random events, and hence that for it there should be defined the probability
$$P(X < x) = P(A_x) = F(x),$$
which we call the distribution function of $X$. Thus a random variable is a variable quantity whose values depend on chance and for which a distribution function of probabilities has been defined.

Having introduced random variables, we now introduce the definition of a stochastic process. Let $U$ be a set of elementary events and $t$ a continuous parameter. A stochastic process is defined as a function of two arguments: $X(t) = y(e, t)$, $e \in U$. For every value of the parameter $t$, the function $y(e, t)$ is a function of $e$ only and, consequently, is a random variable. For every fixed value of the argument $e$ (i.e., for every elementary event), $y(e, t)$ depends only on $t$ and is thus simply a function of one real variable.
Every such function is called a realization of the stochastic process $X(t)$. We may regard a stochastic process either as a collection of random variables $X(t)$ that depend on the parameter $t$, or as a collection of the realizations of the process $X(t)$. Thus a process is determined if the probability measure in the function space of its realizations is specified.

Two stochastic processes $X(t)$ and $Y(t)$ are said to be stochastically equivalent if, for every $t$, we have $X(t) = Y(t)$ with probability 1. Then $X(t)$ is called a version of $Y(t)$ and vice versa.

2.2 Transition probability density

Assume that the conditional probability density functions exist. A stochastic process is Markovian if, for $t_1 < t_2 < \cdots < t_n < \cdots < t_{n+m}$,
$$p(x_{n+1}, t_{n+1}; \ldots; x_{n+m}, t_{n+m} \mid x_1, t_1; \ldots; x_n, t_n) = p(x_{n+1}, t_{n+1}; \ldots; x_{n+m}, t_{n+m} \mid x_n, t_n). \tag{2.1}$$
It means that "the past and future are statistically independent when the present is known."

The following is generally true for probability density functions:
$$\begin{aligned}
p(x_1, t_1; \ldots; x_n, t_n) &= p(x_2, t_2; \ldots; x_n, t_n \mid x_1, t_1)\, p(x_1, t_1) \\
&= p(x_3, t_3; \ldots; x_n, t_n \mid x_1, t_1; x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, p(x_1, t_1) \\
&= p(x_n, t_n \mid x_1, t_1; x_2, t_2; \ldots; x_{n-1}, t_{n-1}) \cdots p(x_2, t_2 \mid x_1, t_1)\, p(x_1, t_1).
\end{aligned}$$
If $t_1 < t_2 < \cdots < t_n$, the Markovian property of $x(t)$ simplifies the above equation:
$$p(x_1, t_1; \ldots; x_n, t_n) = p(x_n, t_n \mid x_{n-1}, t_{n-1}) \cdots p(x_2, t_2 \mid x_1, t_1)\, p(x_1, t_1).$$
The conditional probability density function $p(x_i, t_i \mid x_{i-1}, t_{i-1})$ is, for the Markov process, called the transition probability density function. The transition probability density function gives the density of probability of a transition from one point in phase space, $x_{i-1}$, at time $t_{i-1}$, to another point in phase space, $x_i$, at time $t_i$, where $t_i > t_{i-1}$.

2.3 Markov process and Chapman-Kolmogorov equations

Using the Markov property, if $t_1 < t_2 < t_3$,
$$p(x_2, t_2; x_3, t_3 \mid x_1, t_1) = p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1).$$
Integrating over $x_2$, this becomes
$$p(x_3, t_3 \mid x_1, t_1) = \int_{\mathbb{R}^m} p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, dx_2. \tag{2.2}$$
This equation is known as the Chapman-Kolmogorov or Smoluchowski equation [2].

2.4 Wiener process, Gaussian distribution

A Wiener process, or Brownian motion, is a special stochastic process satisfying

(i) $w(0) = 0$;

(ii) if $t_1 < t_2 < \cdots < t_n$, the differences $[w(t_2) - w(t_1)], [w(t_3) - w(t_2)], \ldots, [w(t_n) - w(t_{n-1})]$ are independent;

(iii) if $0 \le s < t$, $w(t) - w(s)$ is normally distributed with
$$E[w(t) - w(s)] = (t - s)\,\mu, \qquad E\big[(w(t) - w(s))^2\big] = \sigma^2 (t - s),$$
where $\mu$ is called the drift and $\sigma^2$ is called the variance.

We can express these properties in terms of differentials:

(i) $dw(t_i)$, $i = 1, \ldots, n$, are independent;

(ii) $dw(t)$ has a normal distribution and $E(dw(t)) = \mu\, dt$, $E(dw^2(t)) = \sigma^2\, dt$.

Combining (i) and (ii), we have
$$E[dw(t)\, dw(s)] = \sigma^2\, \delta(t - s)\, dt\, ds,$$
where $\delta(t - s)$ is the Dirac-delta function. If $\mu = 0$ and $\sigma^2 = 1$, it is called a normalized Wiener process. For any Brownian motion with mean $\mu$ and variance $\sigma^2$, $(w(t) - \mu t)/\sigma$ is a normalized Wiener process.

A random variable $x$ has a normal distribution with mean $\mu$ and variance $\sigma^2$ ($-\infty < \mu < +\infty$, $\sigma > 0$) if $x$ has a continuous distribution for which the probability density function $f(x \mid \mu, \sigma^2)$ is given as follows:
$$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} \quad \text{for } -\infty < x < \infty. \tag{2.3}$$
If $\bar{x} \in \mathbb{R}^n$, $\bar{x}$ has a normal distribution with mean vector $\bar{\mu}$ and covariance matrix $K$ if $\bar{x}$ has a continuous distribution for which the probability density function $f(\bar{x} \mid \bar{\mu}, K)$ is as follows:
$$f(\bar{x} \mid \bar{\mu}, K) = (2\pi)^{-n/2} |K|^{-1/2}\, e^{-\frac{1}{2}(\bar{x} - \bar{\mu})' K^{-1} (\bar{x} - \bar{\mu})}. \tag{2.4}$$
A $d$-dimensional process $w(t) = (w_1(t), \ldots, w_d(t))$ is called a $d$-dimensional Brownian motion if each process $w_i(t)$ is a Brownian motion and if the $\sigma$-fields $\mathcal{F}(w_i(t),\, t \ge 0)$, $1 \le i \le d$, are independent.
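The defining properties of the normalized Wiener process can be illustrated with a short Monte Carlo check; the sketch below is an illustration added for this section, not part of the dissertation's algorithms, and the step size and path count are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of the normalized Wiener process (mu = 0, sigma^2 = 1):
# increments over disjoint intervals are independent and have variance
# equal to the interval length.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20000, 100, 0.01

# Build w(t) on [0, 1] by summing independent N(0, dt) increments.
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.cumsum(dw, axis=1)

# Increment over [0.25, 0.75]: sample mean ~ 0, sample variance ~ 0.5.
inc = w[:, 74] - w[:, 24]
print(inc.mean(), inc.var())

# Disjoint increments are independent, hence uncorrelated.
inc2 = w[:, 99] - w[:, 74]
print(np.corrcoef(inc, inc2)[0, 1])
```

With 20000 paths the sample statistics match $E[w(t) - w(s)] = 0$ and $E[(w(t) - w(s))^2] = t - s$ to within a few percent.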
3 Stochastic differential equations

3.1 First order stochastic differential equations

Many problems in applications can be modeled as systems of first order differential equations of the form
$$\frac{dx}{dt} = a(t, x) + \sum_{k=1}^{m} b_k(t, x)\, \frac{dw_k(t)}{dt}, \tag{3.1}$$
$$x(t_0) = y, \tag{3.2}$$
where $x, a, b_k$, $k = 1, \ldots, m$, are $m$-vectors and $w_k(t)$, $k = 1, 2, \ldots, m$, are independent processes of Brownian motion. The vectors $a(t, x)$, $b_k(t, x)$ are defined for $t \in [t_0, T]$, $x \in \mathbb{R}^m$, and their ranges are in $\mathbb{R}^m$. Since $\frac{dx}{dt}$ may exist almost nowhere, we write (3.1) in differential form:
$$dx = a(t, x)\, dt + \sum_{k=1}^{m} b_k(t, x)\, dw_k(t), \qquad x(t_0) = y. \tag{3.3}$$
Equation (3.3) is equivalent to the integral equation
$$x(t) = y + \int_{t_0}^{t} a[s, x(s)]\, ds + \sum_{k=1}^{m} \int_{t_0}^{t} b_k[s, x(s)]\, dw_k(s). \tag{3.4}$$

Theorem 1 (Ito 1951) Let $a(t, x)$ and $\{b_j(t, x),\ j = 1, 2, \ldots, m\}$ denote Borel functions defined for $t \in [t_0, T]$ and $x \in \mathbb{R}^m$ with ranges in $\mathbb{R}^m$. If there exists a constant $\kappa$ such that
$$|a(t, x)|^2 + \sum_{k=1}^{m} |b_k(t, x)|^2 \le \kappa^2 (1 + |x|)^2,$$
$$|a(t, x) - a(t, y)| + \sum_{k=1}^{m} |b_k(t, x) - b_k(t, y)| \le \kappa\, |x - y|,$$
for every $x, y \in \mathbb{R}^m$, then (3.4) has a solution $x(t)$ which is unique up to a stochastic equivalence and continuous with probability one. This solution $x(t)$ is a Markov process whose transition probability $P(A, t \mid y, s)$ for $s < t$ is given by the relation $P(A, t \mid y, s) = P(x_{s,y}(t) \in A)$, where $x_{s,y}(t)$ is the solution of the integral equation
$$x_{s,y}(t) = y + \int_{s}^{t} a[\zeta, x_{s,y}(\zeta)]\, d\zeta + \sum_{k=1}^{m} \int_{s}^{t} b_k[\zeta, x_{s,y}(\zeta)]\, dw_k(\zeta).$$

If the functions $a(t, x)$ and $b_k(t, x)$, $k = 1, \ldots, m$, are continuous with respect to $t$, then the process $x(t)$ is a diffusion process with transfer vector $a(t, x)$ and diffusion operator $B(t, x)$ satisfying the equation
$$[B(t, x)\, z, z] = \sum_{k=1}^{m} [b_k(t, x), z]^2.$$
If $a(t, x)$ and $b_k(t, x)$, $k = 1, \ldots, m$, do not depend on $t$ and satisfy the conditions of the above theorem, then the equation $dx = a(x)\, dt + \sum_{k=1}^{m} b_k(x)\, dw_k(t)$ is translation invariant with respect to $t$ and the solution $x(t)$ is a homogeneous Markov process; that is, the transition probability $P(A, t + \tau \mid y, t)$ is independent of $t$.
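The differential form (3.3) suggests the simplest time-stepping discretization, the Euler-Maruyama scheme. The sketch below applies it to the scalar linear example $a(t, x) = -x$, $b(t, x) = 1$ (an Ornstein-Uhlenbeck process); the scheme itself is standard, but the drift and diffusion choices here are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Euler-Maruyama discretization of dx = a(t, x) dt + b(t, x) dw(t):
# advance x by a*dt plus a N(0, dt) noise increment scaled by b.
def euler_maruyama(a, b, x0, t_end, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.full(n_paths, float(x0))
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + a(t, x) * dt + b(t, x) * dw
        t += dt
    return x

# Ornstein-Uhlenbeck example: E[x(1)] = e^{-1} x0 and
# Var[x(1)] = (1 - e^{-2}) / 2 for x(0) = x0.
x = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                   x0=1.0, t_end=1.0, n_steps=200, n_paths=50000)
print(x.mean(), x.var())
```

The sample mean and variance at $t = 1$ agree with the exact Ornstein-Uhlenbeck values to within the combined Monte Carlo and discretization error.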
3.2 Diffusion process

A stochastic process $x(t)$ is a diffusion process if the following conditions are satisfied:

(a) For every $y$ and every $\varepsilon > 0$, the transition probability density function $p(x, t \mid y, s)$ satisfies the condition
$$\int_{|x - y| > \varepsilon} p(x, t \mid y, s)\, dx = o(t - s) \tag{3.5}$$
uniformly over $t > s$ and $x, y \in \mathbb{R}^m$.

(b) There exist functions $a_i(t, x)$, $B_{ij}(s, y)$ such that for every $y \in \mathbb{R}^m$ and every $\varepsilon > 0$,

(i) for $1 \le i \le m$,
$$\int_{|x - y| \le \varepsilon} (x_i - y_i)\, p(x, t \mid y, s)\, dx = a_i(s, y)(t - s) + o(t - s), \tag{3.6}$$

(ii) for $1 \le i, j \le m$,
$$\int_{|x - y| \le \varepsilon} (x_i - y_i)(x_j - y_j)\, p(x, t \mid y, s)\, dx = B_{ij}(s, y)(t - s) + o(t - s). \tag{3.7}$$

Theorem. Suppose that the derivatives $\frac{\partial}{\partial x_i}\big[a_i(t, x)\, p(x, t \mid y, s)\big]$ and $\frac{\partial^2}{\partial x_i \partial x_j}\big[B_{ij}(t, x)\, p(x, t \mid y, s)\big]$, $i, j = 1, \ldots, m$, exist and are continuous. Then $p(x, t \mid y, s)$, for $x, y \in \mathbb{R}^m$ and $t \in [s, T]$, satisfies the equation
$$\frac{\partial p}{\partial t} = -\sum_{i=1}^{m} \frac{\partial}{\partial x_i}\big[a_i(t, x)\, p\big] + \frac{1}{2} \sum_{i,j=1}^{m} \frac{\partial^2}{\partial x_i \partial x_j}\big[B_{ij}(t, x)\, p\big] \tag{3.9}$$
with initial condition
$$\lim_{t \to s} p(x, t \mid y, s) = \delta(x - y). \tag{3.10}$$

Proof. Without loss of generality, let us consider $m = 1$; i.e., we try to prove
$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\big[a(t, x)\, p\big] + \frac{1}{2} \frac{\partial^2}{\partial x^2}\big[b(t, x)\, p\big].$$
Suppose that $Q(x)$ is any function with continuous first and second derivatives and compact support in $[a, b]$. Then by continuity $Q(a) = Q'(a) = Q''(a) = Q(b) = Q'(b) = Q''(b) = 0$. Consider
$$\begin{aligned}
I &= \int_a^b Q(u)\, \frac{\partial}{\partial t} p(u, t \mid y, s)\, du \\
&= \lim_{\Delta t \to 0} \int_a^b Q(u)\, \frac{p(u, t + \Delta t \mid y, s) - p(u, t \mid y, s)}{\Delta t}\, du \\
&= \lim_{\Delta t \to 0} \frac{1}{\Delta t} \Big[ \int_a^b Q(u) \int_{-\infty}^{+\infty} p(x, t \mid y, s)\, p(u, t + \Delta t \mid x, t)\, dx\, du - \int_a^b Q(u)\, p(u, t \mid y, s)\, du \Big]
\end{aligned}$$
(by (2.2), $p(u, t + \Delta t \mid y, s) = \int_{-\infty}^{+\infty} p(x, t \mid y, s)\, p(u, t + \Delta t \mid x, t)\, dx$)
$$= \lim_{\Delta t \to 0} \frac{1}{\Delta t} \int_{-\infty}^{+\infty} p(x, t \mid y, s) \Big[ \int_a^b p(u, t + \Delta t \mid x, t)\, Q(u)\, du - Q(x) \Big]\, dx.$$
By (3.5) and the fact that $Q(x)$ is bounded, since it is continuous on $[a, b]$, we have
$$\int_{|u - x| > \varepsilon} p(u, t + \Delta t \mid x, t)\, Q(u)\, du = o(\Delta t).$$
So
$$\frac{1}{\Delta t} \Big[ \int_a^b p(u, t + \Delta t \mid x, t)\, Q(u)\, du - Q(x) \Big] = \frac{1}{\Delta t} \Big[ \int_{|u - x| \le \varepsilon} p(u, t + \Delta t \mid x, t)\, Q(u)\, du - Q(x) \Big] + o(1).$$
We expand $Q(u)$ about $x$:
$$Q(u) = Q(x) + Q'(x)(u - x) + \frac{Q''(x)}{2}(u - x)^2 + o(u - x)^2.$$
Then
$$\begin{aligned}
&\frac{1}{\Delta t} \Big[ \int_{|u - x| \le \varepsilon} p(u, t + \Delta t \mid x, t)\, Q(u)\, du - Q(x) \Big] \\
&\quad = Q(x)\, \frac{1}{\Delta t} \Big[ \int_{|u - x| \le \varepsilon} p(u, t + \Delta t \mid x, t)\, du - 1 \Big] + Q'(x)\, \frac{1}{\Delta t} \int_{|u - x| \le \varepsilon} (u - x)\, p(u, t + \Delta t \mid x, t)\, du \\
&\qquad + \frac{Q''(x)}{2}\, \frac{1}{\Delta t} \int_{|u - x| \le \varepsilon} (u - x)^2\, p(u, t + \Delta t \mid x, t)\, du + \frac{1}{\Delta t} \int_{|u - x| \le \varepsilon} o(u - x)^2\, p(u, t + \Delta t \mid x, t)\, du \\
&\quad \to Q'(x)\, a(t, x) + \frac{Q''(x)}{2}\, b(t, x), \quad \text{as } \Delta t \to 0,
\end{aligned}$$
by (3.5), (3.6) and (3.7).
) p(u,t + At I :r, t)du Therefore Q”;z __)_b p(xtly,3) (330,)“ I)+-—— “(find-T ff: =/_°°a WtIysIQ'mdx + 00 fl? b(t xIp (lastly, )Q”(x)dr. 9 Since /+°°apQ'(xIdx = (apQI|::-/_:°Q§;(apIdx -00 = —/_:;°°Qf() ap),dz /_:°bpo"(x)dx = Mira-(551mm: = I-—(bp:QII_+.:° +[: Qa:(Idx,bp = _ :50 WW) wehave 1 = _ :Qa—pdx = —[: Q— ‘:(apIdx+-2-[: Q5—6:,(dxpr = [ °°QI——(apI+;aa—:—,0,|e|0, E(dw(t)2) = 21m, (4.6) Jr(0)=y ,i(0)=i/- 13 We rewrite (4.4) in the form of an integral equationa P(5,t|9')=P0(5,t|9')+€/(;A2P0(5t ”[635:— ®p(£,rls)1d§ds (4.7) Integrating by parts w.r.t {2, we obtain pans): m(stIm-e/fgus”) marm— -%[po(s,t—s|€>1dé'dr. (4.8) We introduce the following iterative scheme: “sum: m(stm—e/fgmé‘ marlmg—wat- finder. (4.9) In order to start this scheme, we need to solve po(:t,t I 37) first. we transform (4.6) to a system of first order differential equations (Section 3.3) dg‘ _—. A g‘dt + deb'(t) _,_ a: _ 0 1 an : 0 0 y-m < .I w <0 m.)- Assuming that po(:i:',t I 37) is the solution of (4.5), then m(fJ I 37) is the transition with probability density function for (4.6). Since (4.6) is a linear system and w(t) is a Wiener process, p0(:i:',t I 37) is Gaussian with mean value vector m(t I j) = e“ 3] and covariance matrix K (t I 37) = f; eA‘BB’eAI’ds . We now establish Lemma 1 (2') m(t I 37) is bounded. (ii) K(t I if) is positive-definite , Vt > 0. Proofi (i) is clear since the eigenvalues of A have negative real parts. (ii) Clearly K(t I 37) is nonnegative—definite. Suppose for some t > 0, 3 nonzero 37, 3 37K}? = 0, i.e. g'Kg = 37(1) eA’BB’eA"d.s)g' t = ffeA’BB 8’“de C II o 14 Thus B’e A"y =0 Vs E [0, t] (since y eA‘BB’e A" 'y- = IIB’e A ’gTII2__ > 0). Let 37: ( £1 ) , 8"“ = ( 011(8) 012(8) ) . 52 021(3) 022(8) We have 0 = BIC A'sy = @(3 WI?“ “"‘3)I(“I 021( 8) 022(8) 52 = “2'51"" I (“ I 021(3) 022 0(3) 52 = .55( )6?) . 021(SIO~51 + 022(3 Thus 021(S)£1+ 022(S)Ez = 0,VS E [0,”. 
Moreover,
$$\frac{d}{ds}\big(e^{A's}\big) = A'\, e^{A's} \implies \frac{d}{ds}\big(e^{A's} \bar{y}\big) = A'\, e^{A's} \bar{y},$$
and, using (4.10),
$$\frac{d}{ds}\big(e^{A's} \bar{y}\big) = \frac{d}{ds} \begin{pmatrix} a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \\ a_{21}(s)\xi_1 + a_{22}(s)\xi_2 \end{pmatrix} = \frac{d}{ds} \begin{pmatrix} a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \\ 0 \end{pmatrix},$$
while
$$A'\, e^{A's} \bar{y} = \begin{pmatrix} 0 & -1 \\ 1 & -\beta \end{pmatrix} \begin{pmatrix} a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \end{pmatrix}.$$
Thus
$$\frac{d}{ds} \begin{pmatrix} a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ a_{11}(s)\xi_1 + a_{12}(s)\xi_2 \end{pmatrix}.$$
Therefore
$$a_{11}(s)\xi_1 + a_{12}(s)\xi_2 = 0, \qquad \forall s > 0. \tag{4.11}$$
Combining (4.10) and (4.11), we obtain $e^{A's} \bar{y} = 0$, which implies $\bar{y} = 0$. This contradicts the assumption $\bar{y} \ne 0$. Thus $K(t \mid \bar{y})$ is positive-definite.

The above implies that $p_0(\bar{x}, t \mid \bar{y})$ can be written explicitly:
$$p_0(\bar{x}, t \mid \bar{y}) = \frac{1}{2\pi\, |K|^{1/2}}\, e^{-\frac{1}{2}(\bar{x} - e^{At}\bar{y})' K^{-1} (\bar{x} - e^{At}\bar{y})}.$$

Theorem 4 There exists some constant $0 < \gamma < 1$ such that
$$|p_1(\bar{x}, t \mid \bar{y}) - p_0(\bar{x}, t \mid \bar{y})| \le \gamma\, p_0(\alpha \bar{x}, t \mid \alpha \bar{y}).$$

Proof. Assume that $c$ denotes a generic positive constant. For $0 < \alpha < 1$,
$$\frac{\big|\frac{\partial}{\partial y_2}\, p_0(\bar{x}, t \mid \bar{y})\big|}{p_0(\alpha \bar{x}, t \mid \alpha \bar{y})} = \big|(0, 1)\, e^{A't} K^{-1} (\bar{x} - e^{At}\bar{y})\big|\; e^{-\frac{1 - \alpha^2}{2} (\bar{x} - e^{At}\bar{y})' K^{-1} (\bar{x} - e^{At}\bar{y})}.$$
Define
$$u = K^{-\frac{1}{2}}(\bar{x} - e^{At}\bar{y}), \qquad v = K^{-\frac{1}{2}}\, e^{At} \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
Then
$$\big|(0, 1)\, e^{A't} K^{-1} (\bar{x} - e^{At}\bar{y})\big| = |v' u| \le \|v\|\, \|u\|,$$
so that
$$\frac{\big|\frac{\partial}{\partial y_2}\, p_0(\bar{x}, t \mid \bar{y})\big|}{p_0(\alpha \bar{x}, t \mid \alpha \bar{y})} \le \underbrace{\big[(0, 1)\, e^{A't} K^{-1} e^{At} \begin{pmatrix} 0 \\ 1 \end{pmatrix}\big]^{\frac{1}{2}}}_{(I)} \cdot \underbrace{\big|(\bar{x} - e^{At}\bar{y})' K^{-1} (\bar{x} - e^{At}\bar{y})\big|^{\frac{1}{2}}\, e^{-\frac{1 - \alpha^2}{2} (\bar{x} - e^{At}\bar{y})' K^{-1} (\bar{x} - e^{At}\bar{y})}}_{(II)}.$$
(II) is bounded, since $f(r) = |r|^{\frac{1}{2}}\, e^{-\frac{1 - \alpha^2}{2} r}$ is a bounded function for $r \ge 0$ if $0 < \alpha < 1$. Now we only need to consider (I).

Case (1): $t > \delta$. Then $K(t) > K(\delta)$, so $K^{-1}(t) < K^{-1}(\delta)$ and $K^{-1}$ is uniformly bounded. Hence
$$\|v\| = \Big\| K^{-\frac{1}{2}}\, e^{At} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \Big\| \le \big\| K^{-\frac{1}{2}} \big\|\, \Big\| e^{At} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \Big\| \le c\, \big\| e^{At} \big\|.$$
Since the eigenvalues of $A$ have negative real parts, it follows that
$$\|v\| \le c\, e^{-ct}. \tag{4.12}$$
Case (2): $0 < t \le \delta$ is treated by examining the small-time behavior of $K(t)$.

Consequently the iterates satisfy a contraction with factor $\gamma$, and $\gamma^n \to 0$ as $n \to \infty$. In addition, for every $n$ we have
$$\lim_{|\varepsilon| \to 0} |p_n(\bar{x}, t \mid \bar{y}) - p(\bar{x}, t \mid \bar{y})| = 0.$$
Therefore, the iterative scheme (4.9) is convergent.
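Lemma 1 can be checked numerically. The covariance $K(t) = \int_0^t e^{As} B B' e^{A's}\, ds$ satisfies the matrix differential equation $K' = A K + K A' + B B'$ with $K(0) = 0$, which the sketch below integrates with classical RK4; the damping value $\beta = 0.5$ is an arbitrary illustrative choice, not one used in the dissertation.

```python
import numpy as np

# Numerical illustration of Lemma 1 for A = [[0, 1], [-1, -beta]],
# B = [[0, 0], [0, 1]]: integrate K' = A K + K A' + B B', K(0) = 0.
beta = 0.5
A = np.array([[0.0, 1.0], [-1.0, -beta]])
BBt = np.array([[0.0, 0.0], [0.0, 1.0]])  # B B'

def lyap_rhs(K):
    return A @ K + K @ A.T + BBt

K = np.zeros((2, 2))
dt, t_end = 1e-3, 2.0
for _ in range(int(t_end / dt)):  # classical RK4 in the matrix variable
    k1 = lyap_rhs(K)
    k2 = lyap_rhs(K + 0.5 * dt * k1)
    k3 = lyap_rhs(K + 0.5 * dt * k2)
    k4 = lyap_rhs(K + dt * k3)
    K = K + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Lemma 1(ii): K(t) is symmetric positive definite for t > 0.
print(np.linalg.eigvalsh(K))
# Lemma 1(i): the eigenvalues of A have negative real parts,
# so m(t | y) = e^{At} y stays bounded.
print(np.linalg.eigvals(A).real)
```

Both eigenvalues of $K(2)$ come out strictly positive, and both eigenvalues of $A$ have real part $-\beta/2 < 0$, consistent with the lemma.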
5 Gauss-Galerkin methods

5.1 Fokker-Planck equations for nonlinear oscillations

Let $x_1 = x$ and $x_2 = \dot{x}$, and let us recall the Fokker-Planck equation obtained previously from the second order stochastic differential equation that models the nonlinear oscillation with random forcing term,
$$\frac{\partial p}{\partial t} = -x_2\, \frac{\partial p}{\partial x_1} + \frac{\partial}{\partial x_2}\Big\{ \big[f(x_1)\, x_2 + g(x_1)\big]\, p \Big\} + \varepsilon\, \frac{\partial^2 p}{\partial x_2^2}, \tag{5.1}$$
with initial condition
$$p(x_1, x_2, 0) = q(x_1, x_2). \tag{5.2}$$
Our interest is to solve the above equation numerically by extending the idea of the Gauss-Galerkin approximation developed for one-dimensional Fokker-Planck equations. The most straightforward extension of the one-dimensional Gauss-Galerkin method involves applying a finite difference type approximation in one of the spatial variables and the Gauss-Galerkin method in the other spatial variable. There are numerous issues to be considered, such as how the difference approximation in an infinite domain should be applied.

5.2 Model equations

In order to construct and analyze the Gauss-Galerkin type approximation for equations of the form (5.1), we first study the following model equation:
$$\frac{\partial p}{\partial t} = -f(y)\, \frac{\partial p}{\partial y} + \varepsilon\, \frac{\partial^2 p}{\partial y^2} - \frac{\partial}{\partial x}(\alpha p) + \frac{1}{2}\, \frac{\partial^2}{\partial x^2}(\beta^2 p),$$
for $x, y \in (-\infty, \infty)$, $t > 0$. Here $\varepsilon > 0$ and $f(y) > 0$ are assumed to be positive. Further assumptions on the coefficients $\alpha = \alpha(x, y)$ and $\beta = \beta(x, y)$ will be made later. Let
$$\bar{x} = \frac{1}{2}\big(1 + \tanh x\big) = \frac{e^x}{e^x + e^{-x}}, \qquad \bar{y} = \frac{1}{2}\big(1 + \tanh y\big) = \frac{e^y}{e^y + e^{-y}}.$$
Then $\bar{x}, \bar{y} \in (0, 1)$ if $x, y \in (-\infty, \infty)$. Moreover, we have
$$v_x = v_{\bar{x}}\, \bar{x}_x, \qquad \bar{x}_x = \frac{e^x(e^x + e^{-x}) - e^x(e^x - e^{-x})}{(e^x + e^{-x})^2} = \frac{2}{(e^x + e^{-x})^2} = 2\bar{x}(1 - \bar{x}),$$
so that $v_x = 2\bar{x}(1 - \bar{x})\, v_{\bar{x}}$, and, since $\bar{x}_{xx} = \frac{d}{dx}\big[2\bar{x}(1 - \bar{x})\big] = 2(1 - 2\bar{x})\, \bar{x}_x = -4\bar{x}(1 - \bar{x})(2\bar{x} - 1)$,
$$v_{xx} = v_{\bar{x}\bar{x}}\, \big[2\bar{x}(1 - \bar{x})\big]^2 + v_{\bar{x}}\, \bar{x}_{xx} = 4\bar{x}^2(1 - \bar{x})^2\, v_{\bar{x}\bar{x}} - 4\bar{x}(1 - \bar{x})(2\bar{x} - 1)\, v_{\bar{x}}.$$
Similarly,
$$v_y = 2\bar{y}(1 - \bar{y})\, v_{\bar{y}}, \qquad v_{yy} = 4\bar{y}^2(1 - \bar{y})^2\, v_{\bar{y}\bar{y}} - 4\bar{y}(1 - \bar{y})(2\bar{y} - 1)\, v_{\bar{y}}.$$
Therefore the model equation becomes
$$\frac{\partial p}{\partial t} = -2\big[\bar{y}(1 - \bar{y})\bar{f}(\bar{y}) + 2\varepsilon \bar{y}(1 - \bar{y})(2\bar{y} - 1)\big]\, \frac{\partial p}{\partial \bar{y}} + 4\varepsilon \bar{y}^2 (1 - \bar{y})^2\, \frac{\partial^2 p}{\partial \bar{y}^2} - 2\bar{x}(1 - \bar{x})\, \frac{\partial}{\partial \bar{x}}(\alpha p) - 2\bar{x}(1 - \bar{x})(2\bar{x} - 1)\, \frac{\partial}{\partial \bar{x}}(\beta^2 p) + 2\bar{x}^2(1 - \bar{x})^2\, \frac{\partial^2}{\partial \bar{x}^2}(\beta^2 p).$$
Without loss of generality, we replace $\bar{x}$ by $x$, $\bar{y}$ by $y$, and $\bar{f}$ by $f$. This leads to
$$\frac{\partial p}{\partial t} = -2\big[y(1 - y) f(y) + 2\varepsilon y(1 - y)(2y - 1)\big]\, \frac{\partial p}{\partial y} + 4\varepsilon y^2 (1 - y)^2\, \frac{\partial^2 p}{\partial y^2} - 2x(1 - x)\, \frac{\partial}{\partial x}(\alpha p) - 2x(1 - x)(2x - 1)\, \frac{\partial}{\partial x}(\beta^2 p) + 2x^2(1 - x)^2\, \frac{\partial^2}{\partial x^2}(\beta^2 p). \tag{5.3}$$
For simplicity, we pose the boundary conditions
$$p(x, 0, t) = p(x, 1, t) = 0, \qquad \forall t > 0, \tag{5.4}$$
and the initial condition $p(x, y, 0) = q(x, y)$.

Let us use $H = L^2(0, 1)$ to denote the space of $L^2$ integrable functions. We denote the inner product by
$$(f, g) = \int_0^1 f(x)\, g(x)\, dx, \qquad \forall f, g \in H.$$
We also define the Sobolev spaces
$$W^{m,2}(0, 1) = \{ f \in H \mid f', f'', \ldots, f^{(m)} \in H \}, \qquad m = 1, 2, \ldots.$$
The additional boundary conditions at $x = 0$ and $x = 1$ are specified in a way such that the following always holds:
$$(Lp, v) = (p, L^{*}v), \qquad \forall v \in W^{2,2}(0, 1) \cap C^2[0, 1], \tag{5.5}$$
where
$$Lp = -2x(1 - x)\, \frac{\partial}{\partial x}(\alpha p) - 2x(1 - x)(2x - 1)\, \frac{\partial}{\partial x}(\beta^2 p) + 2x^2(1 - x)^2\, \frac{\partial^2}{\partial x^2}(\beta^2 p),$$
and
$$L^{*}v = \big[-2(2x - 1)\alpha + 3(2x - 1)^2 \beta^2 - \beta^2\big]\, v + 2\big[x(1 - x)\alpha - 3x(1 - x)(2x - 1)\beta^2\big]\, \frac{\partial v}{\partial x} + 2x^2(1 - x)^2 \beta^2\, \frac{\partial^2 v}{\partial x^2}.$$
We note that sometimes the boundary terms may not vanish; such boundary terms often lead to singular measures corresponding to densities accumulated at the boundary. For convenience, we also assume that the coefficients $\alpha$ and $\beta$ satisfy conditions that imply the inequality
$$(-Lu, u) \ge -\lambda\, (u, u) \tag{5.6}$$
for a given constant $\lambda$ and any function $u$ which satisfies the boundary conditions required for equation (5.3). In addition, $\alpha(x, y, t)$ and $\beta(x, y, t)$ are assumed to be smooth functions with
$$|\alpha(x, y, t)| \le b_1, \qquad |\beta(x, y, t)| \le b_2, \qquad \forall y \in (0, 1),\ t > 0,\ x \in (0, 1),$$
for some positive constants $b_1$ and $b_2$.
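The chain-rule identities behind this change of variables, $\bar{x}_x = 2\bar{x}(1 - \bar{x})$ and $v_{xx} = 4\bar{x}^2(1 - \bar{x})^2 v_{\bar{x}\bar{x}} - 4\bar{x}(1 - \bar{x})(2\bar{x} - 1) v_{\bar{x}}$, can be spot-checked by finite differences. The sketch below does this for the illustrative test function $v(\bar{x}) = \bar{x}^3$, an assumption made only for this check.

```python
import numpy as np

# Spot-check of the tanh change of variables in Section 5.2:
# xbar = (1/2)(1 + tanh x) maps (-inf, inf) onto (0, 1).
xbar = lambda x: 0.5 * (1.0 + np.tanh(x))
h = 1e-4
err1, err2 = [], []
for x0 in [-1.0, 0.3, 1.7]:
    xb = xbar(x0)
    # first identity: d(xbar)/dx = 2 xbar (1 - xbar)
    d_num = (xbar(x0 + h) - xbar(x0 - h)) / (2 * h)
    err1.append(abs(d_num - 2 * xb * (1 - xb)))
    # second identity, applied to v(xbar) = xbar^3
    # (so v_xbar = 3 xbar^2 and v_{xbar xbar} = 6 xbar):
    v = lambda x: xbar(x) ** 3
    d2_num = (v(x0 + h) - 2 * v(x0) + v(x0 - h)) / h**2
    d2_formula = (4 * xb**2 * (1 - xb)**2 * (6 * xb)
                  - 4 * xb * (1 - xb) * (2 * xb - 1) * (3 * xb**2))
    err2.append(abs(d2_num - d2_formula))
print(max(err1), max(err2))
```

Both discrepancies are at the level of the finite difference truncation and roundoff error, confirming the transformed coefficients used in (5.3).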
5.3 Difference approximation in y

Our motivation is that if we first approximate the derivatives in the y direction by differences defined on some given grid, it might then be feasible to apply the one-dimensional Gauss-Galerkin methods at each of the grid points. In this regard, (5.3) is very convenient for variable splitting.

Finite difference approximation is one of the most widely used methods for solving partial differential equations. The difference approximation may be set up easily using standard techniques [13]. For example, the unit interval [0,1] in the y direction can be partitioned by a uniform grid. Let N_y be the total number of subintervals, h_y = 1/N_y the length of each subinterval, and {y_j = j h_y, j = 0,1,2,…,N_y} the grid points. Then the second order derivative in y may be approximated by

  p″(y_j) ≈ (p(y_{j+1}) − 2p(y_j) + p(y_{j−1})) / h_y²,

with truncation error of order O(h_y²). There are several choices for the first order derivative in y. For instance, one can either use the center difference

  (p(y_{j+1}) − p(y_{j−1})) / (2h_y),

which has truncation error of order O(h_y²), or one of the one-sided differences

  (p(y_{j+1}) − p(y_j)) / h_y  or  (p(y_j) − p(y_{j−1})) / h_y,

which have truncation error of order O(h_y). Even though the center difference is the most accurate approximation among the ones described above, the upwind difference may have superior stability properties, which become very important when the diffusion coefficient in the y direction is small or negligible and convection becomes the dominant phenomenon [7]. For this reason, we choose the backward difference approximation for the first order derivative in y.

Using the above difference approximations, we obtain the following semi-discrete approximation of (5.3). (By semi-discrete, we mean that the equation is discretized in only one variable while it remains continuous in the other variable.)

  ∂p_j/∂t = −2[y_j(1−y_j)f(y_j) + 2εy_j(1−y_j)(2y_j−1)] (p_j − p_{j−1})/h_y
            + 4εy_j²(1−y_j)² (p_{j+1} − 2p_j + p_{j−1})/h_y² + L_j p_j,   (5.7)

where

  L_j p = −2x(1−x) ∂/∂x(α_j p) − 2x(1−x)(2x−1) ∂/∂x(β_j² p) + 2x²(1−x)² ∂²/∂x²(β_j² p),   (5.8)

for 1 ≤ j ≤ N_y − 1, x ∈ (0,1) and t > 0, with p₀ = p_{N_y} = 0, α_j = α(x, y_j, t), β_j = β(x, y_j, t). Each p_j, 1 ≤ j ≤ N_y − 1, must satisfy the additional boundary conditions in the x direction that are required for the original solution p = p(x,y,t) of equation (5.3); i.e., p_j satisfies equation (5.5) for any j, where L*_j v is

  L*_j v = [−2(2x−1)α_j + 3(2x−1)²β_j² − β_j²] v + 2[x(1−x)α_j − 3x(1−x)(2x−1)β_j²] ∂v/∂x + 2x²(1−x)²β_j² ∂²v/∂x².   (5.9)

Then for any smooth test function v ∈ W^{2,2}(0,1) ∩ C²[0,1], we have

  (L_j p_j, v) = (p_j, L*_j v), ∀j = 1,2,…,N_y − 1.

The initial condition is given by

  p_j(x,0) = q(x, y_j), 1 ≤ j ≤ N_y − 1, x ∈ [0,1].   (5.10)

5.4 Gauss-Galerkin methods

The Galerkin methods usually refer to finite dimensional approximations to the weak formulations of partial differential equations. The Gauss-Galerkin methods, in our current context, have special forms. The test functions in the weak form are chosen to be polynomials, while the approximate solutions are taken to be discrete, or atomic, measures. A unique feature for the Fokker-Planck equations is that the inner products of atomic measures with polynomials are closely related to the moments of the probability distribution.

The first analysis of the Gauss-Galerkin methods for one dimensional Fokker-Planck equations was given by Dawson [5]. A more complete analysis was given in [6], together with a computational algorithm and numerical examples. The model equation considered in [6] is of the form

  ∂p/∂t = −∂/∂x(a p) + (1/2) ∂²/∂x²(b² p).   (5.11)

Thus, equation (5.3) can be viewed as a generalization of the above equation to two dimensional space. On the other hand, one may also view its semi-discretization (5.7) as a system of equations, each of which is similar to (5.11). In (5.11) the spatial interval is finite; thus the standard L² spaces and inner products were used in [5, 6].
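The y-direction part of the semi-discretization (5.7) is a backward (upwind) difference for the convection term together with a centered second difference for the diffusion term. The following sketch applies exactly those two stencils along one x-line; the function name and interface are our own, and the generic coefficient arrays c and eps stand in for the coefficients 2[y_j(1−y_j)f(y_j) + 2εy_j(1−y_j)(2y_j−1)] and 4εy_j²(1−y_j)² of (5.7).

```python
import numpy as np

def semi_discrete_rhs(p, c, eps, hy):
    """y-direction terms of the semi-discrete system (5.7):
    upwind (backward) difference for convection with speed c_j and
    centered second difference for diffusion with coefficient eps_j.
    p, c, eps are arrays over the interior grid points j = 1..Ny-1;
    the homogeneous Dirichlet values p_0 = p_Ny = 0 of (5.4) are assumed."""
    pe = np.concatenate(([0.0], p, [0.0]))            # pad with boundary zeros
    conv = -c * (pe[1:-1] - pe[:-2]) / hy             # -c_j (p_j - p_{j-1}) / h_y
    diff = eps * (pe[2:] - 2.0 * pe[1:-1] + pe[:-2]) / hy**2
    return conv + diff
```

For a smooth profile vanishing at the boundary, the result approaches −c ∂p/∂y + ε ∂²p/∂y² at the O(h_y) rate set by the upwind term.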
When dealing with semi-infinite intervals, these intervals become finite after a change of variables.

To solve (5.7) by Gauss-Galerkin methods, let us first formulate its weak form. Taking the inner product of a test function v ∈ W^{2,2}(0,1) ∩ C²[0,1] with equation (5.7) and integrating by parts, we have, for 1 ≤ j ≤ N_y − 1,

  d/dt (p_j, v) = −(c_j/h_y)(p_j − p_{j−1}, v) + (ε_j/h_y²)(p_{j+1} − 2p_j + p_{j−1}, v)
                  + ([−2(2x−1)α_j + 3(2x−1)²β_j² − β_j²] p_j, v)
                  + (2[x(1−x)α_j − 3x(1−x)(2x−1)β_j²] p_j, ∂v/∂x)
                  + (2x²(1−x)²β_j² p_j, ∂²v/∂x²), t > 0,   (5.12)

where

  c_j = 2[y_j(1−y_j)f(y_j) + 2εy_j(1−y_j)(2y_j−1)], ε_j = 4εy_j²(1−y_j)².

Let us now approximate the probability measure p_j dx by

  dμ_j^n(x,t) = Σ_{k=1}^n a_j^k(t) δ(x − x_j^k(t)) dx, 1 ≤ j ≤ N_y − 1,   (5.13)

i.e., the probability measure is approximated by an n-point discrete Dirac-delta (atomic) measure. Here one may choose a different n for each index j, i.e., there could be a different number of atoms at different values of y; but for simplicity we only consider the case where n is taken to be independent of j. {x_j^k = x_j^k(t), 1 ≤ k ≤ n} are the n nodes, with a_j^k(t) their corresponding weights. As far as the test functions are concerned, we choose a set of linearly independent functions {v_i(x)}_{i=0}^{2n−1}. In particular, v_i(x) = x^i for i = 0,1,2,…,2n−1 is the standard choice. Then (5.12) implies, for 0 ≤ i ≤ 2n−1, 1 ≤ j ≤ N_y − 1 and t > 0,

  d/dt Σ_{k=1}^n a_j^k(t) v_i(x_j^k(t)) = (c_j/h_y + ε_j/h_y²) Σ_{k=1}^n a_{j−1}^k(t) v_i(x_{j−1}^k(t))
      − (c_j/h_y + 2ε_j/h_y²) Σ_{k=1}^n a_j^k(t) v_i(x_j^k(t))
      + (ε_j/h_y²) Σ_{k=1}^n a_{j+1}^k(t) v_i(x_{j+1}^k(t))
      + Σ_{k=1}^n a_j^k(t) (L*_j v_i)(x_j^k(t)),   (5.14)

where L*_j is defined as in (5.9). Note that v_i(x) = x^i, 0 ≤ i ≤ 2n − 1. The system (5.14) is a system of 2n × (N_y − 1) ordinary differential equations for the n(N_y − 1) nodes and n(N_y − 1) weights. Initial conditions for the ordinary differential equation system are given by

  Σ_{k=1}^n a_j^k(0) v_i(x_j^k(0)) = (q(x, y_j), v_i(x)),   (5.15)

for 1 ≤ j ≤ N_y − 1, 0 ≤ i ≤ 2n − 1.
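Since the test functions are the monomials v_i(x) = x^i, every inner product of the atomic measure (5.13) with a test function in (5.14)-(5.15) is just the i-th moment Σ_k a_k (x_k)^i. A minimal illustration (names our own): an n-point Gauss rule, viewed as an atomic measure, reproduces the first 2n moments of its weight function exactly, here checked for the uniform density q(x) = 1 on (0,1).

```python
import numpy as np

def atomic_moments(a, x, n_moments):
    """Moments <mu, x^i>, i = 0..n_moments-1, of the atomic measure
    sum_k a_k * delta(x - x_k), i.e. the quantities driven by (5.14)."""
    i = np.arange(n_moments)
    return (a[:, None] * x[:, None] ** i).sum(axis=0)

# n-point Gauss-Legendre rule mapped to (0,1) as the atomic approximation
# of the uniform density q(x) = 1, whose moments are 1/(i+1).
n = 5
t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on (-1, 1)
x = 0.5 * (t + 1.0)                         # map nodes to (0, 1)
a = 0.5 * w                                 # rescale weights accordingly
m = atomic_moments(a, x, 2 * n)
```

An n-atom measure thus carries exactly the 2n degrees of freedom matched by the 2n test functions in (5.14).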
Using the properties of the Gauss quadrature [14], if q_j(x) = q(x, y_j) is a nonnegative function whose support has positive measure, we may take q_j(x) as the weight function in the Gauss quadrature for each j. Then for each j, the nodes {x_j^k(0)}_{k=1}^n and weights {a_j^k(0)}_{k=1}^n are uniquely determined by (5.15), and the x_j^k(0)'s are distinct and lie in (0,1) (see chapter 9 for details).

We shall make a crucial but also somewhat restrictive assumption: that there exists a time interval [0,T] such that the system of ordinary differential equations (5.14) with initial conditions (5.15) has a unique solution in [0,T], with distinct nodes {x_j^k(t)}_{k=1}^n in (0,1). Furthermore, the weights {a_j^k(t)} stay positive for t ∈ [0,T] and for all k and j. These assumptions imply that the approximate atomic measure remains valid in the given time interval and that the measure is positive in the sense of distributions. However, these assumptions can be proved rigorously only in some special cases, which we will discuss briefly later.

5.5 Some useful results

In this section we present and prove some useful differential inequalities. Let us use the convention that a vector (or matrix) is said to be non-positive (⪯ 0) iff all its elements are non-positive, and X ⪯ Y iff X − Y ⪯ 0.

Lemma 3 Let A(t) be an N by N matrix with smooth and bounded elements whose off-diagonal elements are all non-negative, i.e., a_ij(t) ≥ 0, 1 ≤ i ≠ j ≤ N. Let M(t) ∈ R^N satisfy M(0) ⪯ 0 and, for t > 0,

  dM(t)/dt ⪯ A(t) M(t).

Then M(t) ⪯ 0, ∀t > 0.

Proof. Let λ ≥ −min{a_ii(t), 1 ≤ i ≤ N} be some given constant; then A₁ = A + λI_N is a matrix with all non-negative elements. Define M₁(t) = e^{λt} M(t). We have M₁(0) ⪯ 0 and, for t > 0,

  dM₁(t)/dt ⪯ A₁(t) M₁(t).

So

  dM₁(t)/dt = A₁(t) M₁(t) + ē(t),

with ē(t) ⪯ 0 for t > 0. Thus

  M₁(t) = exp{∫₀ᵗ A₁(u) du} M₁(0) + ∫₀ᵗ exp{∫ₛᵗ A₁(u) du} ē(s) ds, ∀t > 0.

Since A₁ is non-negative, exp{∫ₛᵗ A₁(u) du} is non-negative for s ≤ t. Therefore M₁(0) ⪯ 0 and ē(s) ⪯ 0 for s > 0 imply

  M₁(t) ⪯ 0, ∀t > 0.
Thus M(t) ⪯ 0, ∀t > 0. □

Consequently, we get the following relation between the solution of a system of differential inequalities and the solution of the corresponding system of differential equations.

Corollary 1 (Comparison Principle) Let A(t) be an N by N matrix with smooth and bounded elements whose off-diagonal elements are all non-negative, i.e., a_ij(t) ≥ 0, 1 ≤ i ≠ j ≤ N. Let M(t) ∈ R^N satisfy

  dM(t)/dt ⪯ A(t) M(t) + G(t), ∀t > 0,

and let M₁(t) ∈ R^N satisfy

  dM₁(t)/dt = A(t) M₁(t) + G(t), ∀t > 0,

with M₁(0) = M(0). Then M(t) ⪯ M₁(t), ∀t > 0.

In particular, when N = 1, we have the following well-known inequality.

Corollary 2 (Gronwall's Inequality) If, for t > 0, we have

  df(t)/dt ≤ a(t) f(t) + g(t),

then

  f(t) ≤ exp{∫₀ᵗ a(s) ds} f(0) + ∫₀ᵗ exp{∫ₛᵗ a(u) du} g(s) ds, ∀t > 0.

Proof. Let

  h(t) = exp{∫₀ᵗ a(s) ds} f(0) + ∫₀ᵗ exp{∫ₛᵗ a(u) du} g(s) ds.

Then h(t) satisfies

  dh(t)/dt = a(t) h(t) + g(t),

with h(0) = f(0). So f(t) ≤ h(t), ∀t > 0. □

The following corollary can be viewed as a weak discrete maximum principle.

Corollary 3 Let c₁ > 0, c₂ > 0 be given constants and let {f_j(t)}_{j=0}^{N+1} satisfy

  df_j(t)/dt = c₁ f_{j−1}(t) − (c₁ + c₂) f_j(t) + c₂ f_{j+1}(t), 1 ≤ j ≤ N, ∀t > 0,

with f₀(t) = f_{N+1}(t) = 0 and f_j(0) = g_j, 1 ≤ j ≤ N. Then

  f_j(t) ≤ max_{1≤j≤N} {g_j, 0}, ∀t > 0.

Proof. Define a = max_{1≤j≤N}{g_j, 0} and

  M(t) = (f₁(t) − a, f₂(t) − a, …, f_N(t) − a)ᵀ.

Then

  dM(t)/dt ⪯ A M(t), ∀t > 0,

for the tridiagonal matrix A with elements a_{j,j−1} = c₁ and a_{j−1,j} = c₂ for 2 ≤ j ≤ N, and a_{jj} = −(c₁ + c₂) for 1 ≤ j ≤ N. Since c₁ > 0, c₂ > 0 and M(0) ⪯ 0, the previous lemma gives

  M(t) ⪯ 0, ∀t > 0.

This implies f_j(t) ≤ max_{1≤j≤N}{g_j, 0}, ∀1 ≤ j ≤ N and ∀t > 0. □

Remark: We see from the proof that the differential equations in the corollary can be replaced by inequalities and the conclusion will still be valid. Furthermore, one could prove a stronger version which implies that if f_k(t) = max_{1≤j≤N}{g_j, 0} for some 1 ≤ k ≤ N and t > 0, then f_j(t) = 0 for all j and all t.

Now, we state some results concerning the classical moment problem [15].
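Before turning to the moment problem, here is a quick numerical illustration of Gronwall's inequality (Corollary 2), specialized to constant a and g for simplicity; the names, sample constants and the forward-Euler integration are our own. A function satisfying the strict differential inequality (enforced by a positive slack) stays below the Gronwall bound.

```python
import numpy as np

def gronwall_bound(a, g, f0, ts):
    """Right-hand side of Gronwall's inequality for constant a != 0 and g:
    h(t) = e^{a t} f(0) + g (e^{a t} - 1) / a."""
    return np.exp(a * ts) * f0 + g * (np.exp(a * ts) - 1.0) / a

# f' = a f + g - slack satisfies f' <= a f + g, hence f(t) <= h(t).
a, g, f0, slack = 0.5, 1.0, 2.0, 0.3
ts = np.linspace(0.0, 3.0, 301)
dt = ts[1] - ts[0]
f = np.empty_like(ts)
f[0] = f0
for k in range(len(ts) - 1):            # forward Euler on the perturbed ODE
    f[k + 1] = f[k] + dt * (a * f[k] + g - slack)
```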
Given a sequence {m_n}_{n=0}^∞, define the following differences:

  Δ⁰ m_n = m_n,   (5.16)
  Δᵏ m_n = Δᵏ⁻¹ m_n − Δᵏ⁻¹ m_{n+1}, k = 1,2,….   (5.17)

Then we have:

Theorem 5 A necessary and sufficient condition for the existence of a solution of the Hausdorff moment problem, i.e., the existence of a unique nonnegative measure μ satisfying

  m_l = ∫₀¹ x^l dμ, l = 0,1,2,…,

is that

  Δᵏ m_l ≥ 0, k, l = 0,1,2,….

5.6 Existence theorems for Gauss-Galerkin approximations

First, let us show that if the coefficients of equation (5.3) are constant, then the system (5.14) with initial condition (5.15) has a unique global solution with positive weights and distinct nodes.

Theorem 6 Let the coefficients of equation (5.3) be constant. If q is a probability distribution and the support of q(x, y_j) has positive measure for all 1 ≤ j ≤ N_y − 1, and (5.7) has a set of nonnegative solutions {p_j, j = 1,2,…,N_y − 1}, each having support of positive measure, then the solution of (5.14) with initial condition (5.15) exists for all time, the weights {a_j^k(t)} remain positive and the nodes {x_j^k(t)} remain distinct.

Proof. We can use each p_j as the weight function to define a Gauss quadrature; i.e., given n, there exists a unique set of distinct nodes {x_j^k(t)}_{k=1}^n in (0,1) and positive weights {a_j^k(t)}_{k=1}^n such that

  ∫₀¹ p_j(x,t) v(x) dx = Σ_{k=1}^n a_j^k(t) v(x_j^k(t)),

for any polynomial v with degree not exceeding 2n − 1. Recall that with constant coefficients equation (5.3) becomes

  ∂p/∂t = −c ∂p/∂y + ε ∂²p/∂y² − ∂/∂x(αp) + (1/2) ∂²/∂x²(β²p),

whose semi-discrete weak form is

  d/dt (p_j, v_i) = (c/h_y)(p_{j−1} − p_j, v_i) + (ε/h_y²)(p_{j+1} − 2p_j + p_{j−1}, v_i)
                    + α(p_j, ∂v_i/∂x) + (1/2)β²(p_j, ∂²v_i/∂x²).

Then

  d/dt Σ_{k=1}^n a_j^k(t) v_i(x_j^k(t)) = (c/h_y + ε/h_y²) Σ_{k=1}^n a_{j−1}^k(t) v_i(x_{j−1}^k(t))
      − (c/h_y + 2ε/h_y²) Σ_{k=1}^n a_j^k(t) v_i(x_j^k(t))
      + (ε/h_y²) Σ_{k=1}^n a_{j+1}^k(t) v_i(x_{j+1}^k(t))
      + Σ_{k=1}^n a_j^k(t) (L*v_i)(x_j^k(t)), ∀t > 0.

Thus (5.14) has a set of solutions for all time. □
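The completely monotone condition of Theorem 5 is easy to test mechanically: one keeps forming the forward differences (5.17) and checks that every entry stays nonnegative. The sketch below (function name and tolerance our own) does exactly that.

```python
import numpy as np

def is_hausdorff_moment_sequence(m, tol=1e-12):
    """Check the condition of Theorem 5 on a finite moment sequence:
    Delta^k m_l >= 0 for all k, l with k + l < len(m), where
    Delta^k m_l = Delta^{k-1} m_l - Delta^{k-1} m_{l+1} as in (5.16)-(5.17)."""
    d = np.asarray(m, dtype=float)
    while len(d) > 0:
        if np.any(d < -tol):
            return False
        d = d[:-1] - d[1:]          # apply one more forward difference
    return True
```

For instance, m_l = 1/(l+1) (the uniform density on [0,1]) passes, while a sequence whose implied variance m₂ − m₁² is negative fails.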
6 Convergence of the Gauss-Galerkin approximation

6.1 Outline

To prove the convergence of the Gauss-Galerkin approximation, we first show the convergence of the semi-discrete difference approximation and derive an error estimate at the same time, under certain regularity assumptions on the exact solution of the Fokker-Planck equation. Then we show that the Gauss-Galerkin approximation converges in some weak sense.

6.2 Convergence of the semi-discrete difference approximation

Here, let us demonstrate that the semi-discrete approximation (5.7) is convergent when the number of grid points in the y direction, N_y, goes to infinity, i.e., when h_y → 0. Using standard techniques for difference approximations, we first examine the truncation error of the approximation by substituting the exact solution p = p(x,y,t) of (5.3) into equation (5.7). For simplicity, we use p(y_j) to denote p(x, y_j, t). Then

  ∂p(y_j)/∂t = −c_j (p(y_j) − p(y_{j−1}))/h_y + ε_j (p(y_{j+1}) − 2p(y_j) + p(y_{j−1}))/h_y² + L_j p(y_j) + τ_j,   (6.1)

for 1 ≤ j ≤ N_y − 1, x ∈ (0,1) and t > 0, with p(y₀) = p(y_{N_y}) = 0, α_j = α(x, y_j, t), β_j = β(x, y_j, t), and

  τ_j = ε_j [∂²p/∂y²(y_j) − (p(y_{j+1}) − 2p(y_j) + p(y_{j−1}))/h_y²]
        − c_j [∂p/∂y(y_j) − (p(y_j) − p(y_{j−1}))/h_y].

So

  τ_j = −(c_j/2) ∂²p/∂y²(y_j) h_y + O(h_y²),

and

  (τ_j, τ_j) ≤ M h_y²,   (6.2)

for some constant M > 0, if we assume that the solution p = p(x,y,t) of (5.3) is sufficiently smooth and its derivatives in y are uniformly bounded. Now define e_j = p(y_j) − p_j, j = 1,2,…,N_y − 1. Then

  ∂e_j/∂t = −c_j (e_j − e_{j−1})/h_y + ε_j (e_{j+1} − 2e_j + e_{j−1})/h_y² + L_j e_j + τ_j,   (6.3)

and

  e₀ = e_{N_y} = 0.   (6.4)

Let us now state a convergence theorem.

Theorem 7 Assume that in a given time interval [0,T] the solution p of (5.3) is smooth and the coefficients satisfy the previously specified conditions. Then, in the given time interval, the solution {p_j, 1 ≤ j ≤ N_y − 1} of the semi-discrete approximation converges to p as N_y → ∞, i.e., h_y → 0.

Proof. We apply a standard energy type estimate to obtain error bounds on the e_j's.
Using (6.3) and the inequality (5.6), we get, for any j,

  (1/2) d/dt (e_j, e_j) = −c_j ((e_j − e_{j−1})/h_y, e_j) + ε_j ((e_{j+1} − 2e_j + e_{j−1})/h_y², e_j)
                          + (L_j e_j, e_j) + (τ_j, e_j)
      ≤ −(c_j/(2h_y)) (e_j − e_{j−1}, e_j − e_{j−1}) + (c_j/(2h_y)) [(e_{j−1}, e_{j−1}) − (e_j, e_j)]
        + (ε_j/h_y²)(e_{j+1} − e_j, e_j) − (ε_j/h_y²)(e_j − e_{j−1}, e_j)
        + λ(e_j, e_j) + (1/2)(e_j, e_j) + (1/2)(τ_j, τ_j).

Here we have also used Hölder's inequality,

  (τ_j, e_j) ≤ (1/2)(e_j, e_j) + (1/2)(τ_j, τ_j).

Now, multiplying both sides by h_y and summing over all j, the first-difference terms telescope, the remaining difference terms are non-positive, and the variable coefficients are bounded, so that

  (1/2) d/dt Σ_{j=1}^{N_y−1} (e_j, e_j) h_y ≤ A₁ Σ_{j=1}^{N_y−1} (e_j, e_j) h_y + M₁ h_y²,

for some positive constants A₁ and M₁. Using the Gronwall inequality, we get, for any t ∈ [0,T],

  Σ_{j=1}^{N_y−1} (e_j(t), e_j(t)) h_y ≤ M₂ Σ_{j=1}^{N_y−1} (e_j(0), e_j(0)) h_y + M₃ h_y²,

for some positive constants M₂ and M₃. Since e_j(0) = 0, convergence follows. □

6.3 Convergence of the Gauss-Galerkin approximations

We now show that the Gauss-Galerkin approximations, as defined in (5.14) and (5.15), converge to the solution of (5.7), (5.10). Combined with the convergence of the semi-discrete approximation, this implies the convergence of the Gauss-Galerkin approximation to the solution of the original model equation (5.3).

Given a continuous function f(x) on [0,1], we would like to verify the convergence of the quadrature formula

  μ_j^n(f) = Σ_{k=1}^n a_{j,n}^k(t) f(x_{j,n}^k(t))

to

  ∫₀¹ f(x) p_j(x,t) dx, as n → ∞,

for any t ∈ [0,T] and each 1 ≤ j ≤ N_y − 1. The additional subscript n is used to emphasize the dependence of the weights and nodes on n, the number of atoms. We start with the following lemma.

Lemma 4 Let {μ_j^n(t)} be the Gauss-Galerkin measures defined as in (5.13)-(5.15). For any j, 1 ≤ j ≤ N_y − 1, let l be any fixed positive integer and let

  m_{j,n}^l(t) = ∫₀¹ x^l dμ_j^n(x,t), t ∈ [0,T].
(6.5) We have the set {mg-m(t) : n 2 %(l + 1)} is uniformly bounded and equicontinuous in t E [0, T] for each I and j. Proof: By the definition of {u;’(t)}, we have (‘3‘ 28j arm-.50) = (5— + Kiri—m(t) “ (t: + 75min“) + %m3+l»n(“ y y Here again L; is as defined in (5.9). Using the boundedness of the coefficients and the assumption that all weights {af’n(t)} are positive in [0, T], we get d c- e c 25 e' Em3,n(t) _ (ll—i +h—;)mi_1,n-(t) (h) +'h—2j)m mi (t)+h—;m]+1,n(t) y +g’(m$.'n2(t),m$;,l(t),mg‘n(t)), n 2 —(t + 1), 2 where g’ is some linear function with nonnegative coefficients. Now, define d ' j J 21 J -d—,m§-ItI = I;—’y+-,‘:—,Im§_.ItI- I;— +-,§,—'Im $ItI+{—. m3..It) +g’Im3‘2ItI,m3“ItI,mflItII (6.6) and m$(0) = (q(.r,yJ-),a:l) , lSj S N, -1, l2 0. Using the comparison principle given in Section 6.1, we get m§,n(t)Smg(t), Vt€[0,T].1SjSNy—1,n (1+1). [\iIH Now, we estimate {m$(t)}. Define Notice that g’ is some linear function with nonnegative coefficients, we may write (6.6) in matrix form as d _. . C' S -o C' 25 5 _. -. _J + 71%)N1j_1(t)-(h—:+h21)th( )+ h—é "1+ 10 )+ AMtltlv y for some nonnegative matrix A. Let Nil-Wt) = C_Ath(t), 171.1(t)=(th°(t),rh1.(t), n'z’ItIIT, J then 2 c 5' .. C 5 E11111“) = (if; + (7:2,) 31-10) " (h—Z 'h—5)1M31(t)+‘wf+1(h2) ’ and 173m = mm) = 0. Now, applying the discrete maximum principle, we get ~ I: ~ I: k m1(t)“15j12fi(§-1m1(0) lspggx_l(q(x,y,),r )_ C , Vt E [0,T], 35 where 1 S j S Ny — 1 ,n _>_ (1+ 1)/2 and C is some constant, depending on the initial condition q = q(.1:,y) only. Meanwhile, since Mj(t) = eA‘MJ-‘(fi and e"‘ is a matrix with bounded and nonnegative elements for t E [0, T], we get mf(t)gC,, Vte[0.T],1gjgN,—1,kgt, where C) are some constant depending on the initial condition q = q(r, y), the time interval [0, T] and the value of 1. Thus, for each given I, we obtain that the set {mg'n(t) : n 2 %(l+1)} is uniformly bounded and the bound, in fact, is independent of j and Ny under proper assumptions on the initial condition. 
We now consider the equicontinuity of {mg-m(t) : n 2 (1+ 1)} in [0,T]. By the l 2 mean-value theorem 1 d l lm3,n(t2) - mj,n(t1)l : ld—tmjm((:)l . It? _ til 7 Ce (tlat2) ' From earlier discussion, we have I. _. El 311' )mJ—l,n(C) (hy + h: )m§,n(C) m§,..II’I $9— +%mi+l.n(C)l + g’(mj.j,3((), m§;1(g),m§m(g)) :2. it I a. .22 z |/\ +§5m3..I co and the equiconti— nuity, we now show the following. Lemma 5 Given Ny, there exists a sequence {kn}, kn —+ 00 and a sequence offunc- tions {m§'.(t)} such that for every fixed integer l, we have lim ml-kn(t) =m3’,(t), V1 3) g N, -1,t€[0,T]. kn-fm J, Proof: Using the Ascoli-Arzela theorem [16] and results of the previous lemma, for each I and j, we get a subsequence of {mg-m(t) : n 2 %(l+ 1) ,t E [0, T]} that converges uniformly to a limit, denoted by mg'.(t). Taking intersections of these subsequences successively and applying a diagonal selection principle, we obtain a sequence {kn}, kn -> 00 such that lim m'.,,n(t)=m§.,(t), V1 SjS N,—1.120. te [0,T]. D ten—too J Lemma 6 Given N,,, for each 1 S j S Ny - 1 and for any t E [0,T], the elements of the sequence {m$,.(t)}}:0 are the generalized moments of a nonnegative measure. i.e., there exists a nonnegative measure Pj,.(x,t), such that for each I Z 0 and each lSjSNy-l, wehave J“ 1 jx‘dP,,.(.t,t)=m’. (t), Vte[0.T]. 0 37 Proof: The proof is based on the necessary and sufficient condition for the moment problem given in an earlier section. Since {mg-m(t) ,l 2 0} are moments of the measure dug, we have for the related difference Akm'. (t)20, szO,1,2,---, andlS2n—1. Jv" Since l l kll-{rnoomj .kn(tl : mj,-(t)1 we get Akmg‘.(t) Z 0. Thus, there exists a measure de,.(x, t) such that l mfl‘,(t) =/0 I’dP,,.(.t,t). (:1 Corollary 5 Given N,,, for each 1 Sj S Ny — 1 and any f 6 C[0,1], we have lim /1f(x)dpj"(x,t) = j: f(x)dP,-,.(x,t) , Vt E [0,T]. (6.7) kn—roo 0 Proof: Notice that for every fixed integer l 2 0, we have 1 1 lim xldpJ-"(x,t) =/ xlde,.(x,t), V1 3) g N, — 1 , te [0,T]. 
0 lea-too o By the well-known Weierstrass approximation theorem, {23’}in is dense in C[0, 1], so for any continuous function f , lim /1f(x)duJ-"(x,t)=/Olf(x)dP,-,.(x,t), Vte[0,T] 1:1 Ian->00 0 Let de,.(x, t) = pj,.(x, t)dx, we now verify that pj'.(x, t) corresponds to the weak solution of the equation (5.7). Lemma 7 Given Ny, for each 1 S j S Ny - 1 and for each l, we have m’., (t) — ml. (0) : /0‘(CJ_Pj-1,-(5]z— pj"(s),x‘)ds t . __ . . +/ (Ejpt+1..(8) 2122.2(5) +PJ-1.-(5),$l)d3 y +/(p,-.() ),L;x)d' )ds, Vte[0,T]. Here, p0,. = pr. = 0. 38 Proof: It is clear that ‘ m'- (8) - m'- (8) 1 1 —1.kn .kn mj,k,,(t) - mj,k,.(0) = C1“ J J 0 hy) ds .1. m1“ "n Qmi.kn(3) + "fit—1.15"“) h?) +/:))t[A1L;(xl)dflj"($,S) ds, 1 LL;($l)dfljfl(x,S) S gl(m§;:(3),mj;:(s),mg'kn(3)), ds for some linear function g1 with nonnegative coefficients which is defined before. So, by the Lebesgue dominated convergence theorem, we get from ( 6.7) and the previous lemmas on the convergence of the moments that ml (t) __ 171.1(0) : tCJ' m§~l.t(s) — min-(S) 0 h,, ds t ml. — 1. ml. 3 + V/ékj 1+1‘.(S) 2m;;(3)+ 1-1,.( )+(pj’.(x,8),L;(:L‘l))]d-S Thus, for any 1, (pj,.($,t),$l) - (q($,yj,0),:tl) = Atli—j'lpj—L-(xvsl -pJ.-(‘r)3)1'rl)ld3 +f [fl—’2 (Pm-(I S)—2Pj.- (I Sl'l'Pj “(I 3) I‘IIds + [0 Ip...I.r.sI. L;(I’))ds- :1 Since {x‘} is dense in C[0,1], we see that {p,,} are the weak solutions of (5.7). By the assumption that (5.7) has a unique set of solutions, we see that the limit of {#595} is, in fact, independent of the subsequence. It follows that the whole sequence is convergent. Thus, we have proved the following theorem. Theorem 8 Under all the previous assumptions, given N,,, for each 1 Sj S Ny — 1 and for any continuous function f in [0,1], we have lim /01f(x)d;tJ-"(x,t) = [Dinah-(1,0455, Vt e [0, T] , (6.8) n-too where {Pi} is the set of weak solutions of {5. 7). 
Finally, we combine the convergence of the semi-discrete approximation and the above result to get the convergence of the Gauss-Galerkin/finite difference approximation to the solution of the model problem (5.3).

Theorem 9 Let p be the solution of (5.3). Then, under all the previous assumptions, for a given continuous function f on [0,1] we have, for any t ∈ [0,T],

  lim_{N_y→∞} lim_{n→∞} Σ_{j=1}^{N_y−1} h_y |∫₀¹ f(x) dμ_j^n(x,t) − ∫₀¹ f(x) p(x, y_j, t) dx|² = 0.

Proof. For any ε > 0, there exists an N* such that for any N_y > N* we have

  Σ_{j=1}^{N_y−1} h_y ||p_j(x,t) − p(x, y_j, t)||_H² < ε / (2||f||²),

where H = L²(0,1). So

  Σ_{j=1}^{N_y−1} h_y |∫₀¹ f(x) p_j(x,t) dx − ∫₀¹ f(x) p(x, y_j, t) dx|² < ε/2, ∀t ∈ [0,T].

For each N_y, there exists an n* such that if n > n*, then for each 1 ≤ j ≤ N_y − 1 we have

  |∫₀¹ f(x) dμ_j^n(x,t) − ∫₀¹ f(x) p_j(x,t) dx|² < ε/2.

Hence

  Σ_{j=1}^{N_y−1} h_y |∫₀¹ f(x) dμ_j^n(x,t) − ∫₀¹ f(x) p_j(x,t) dx|² < ε/2.

Therefore

  Σ_{j=1}^{N_y−1} h_y |∫₀¹ f(x) dμ_j^n(x,t) − ∫₀¹ f(x) p(x, y_j, t) dx|² < ε.

Letting ε → 0, we get the conclusion of the theorem. □

7 Application to nonlinear oscillation of second order

7.1 Formulation

Consider equation (3.13), which models the problem of general second order nonlinear oscillation with a random force. We rewrite it in the following form:

  ẍ + g(x, ẋ) = w(t).

The associated Fokker-Planck equation is

  ∂p/∂t = −y ∂p/∂x + ∂/∂y [g(x,y) p] + D ∂²p/∂y², x, y ∈ (−∞, ∞),   (7.1)

where y = ẋ, with initial data p(x,y,0) = q(x,y). We assume that g and ∂g/∂y are bounded functions.

In order to make the coefficients of equation (7.1) bounded in space (since the coefficient y = ẋ might not be bounded), we perform the transformation

  p̃(x,y,t) = p(x + yt, y, t);

thus

  ∂p̃/∂t = ∂p/∂t + y ∂p/∂x, ∂p̃/∂x = ∂p/∂x, ∂p/∂y = ∂p̃/∂y − t ∂p̃/∂x,

  ∂/∂y [g(x,y) p] = (∂g/∂y) p̃ + g (∂p̃/∂y − t ∂p̃/∂x),

  D ∂²p/∂y² = D (∂/∂y − t ∂/∂x)² p̃ = D ∂²p̃/∂y² − 2tD ∂²p̃/∂x∂y + Dt² ∂²p̃/∂x²,

and equation (7.1) becomes

  ∂p̃/∂t = −gt ∂p̃/∂x + ∂/∂y (g p̃) + D ∂²p̃/∂y² − 2tD ∂²p̃/∂x∂y + Dt² ∂²p̃/∂x².

The coefficients of this equation are all bounded in a given time interval. Let us change variables as in Section 5.2, mapping each of x and y from (−∞, ∞) onto (0,1). Without loss of generality, we write the resulting equation in p, x and y:

  ∂p/∂t = a ∂p/∂x + b ∂p/∂y + c p + d ∂²/∂x²(d p) + 2d ∂²/∂x∂y(f p) + f ∂²/∂y²(f p).   (7.2)

Here x, y ∈ (0,1), and we assume that a = a(x,y,t), b = b(x,y,t), c = c(x,y,t), d = d(x,t) and f = f(y) are all smooth functions of their variables in [0,1] × [0,1] × [0,T] and satisfy some further conditions we give later. Moreover, we assume that the initial condition is p(x,y,0) = q(x,y), and p = 0 on the boundary.

7.2 Convergence of the semi-discrete difference approximation

The difference approximation may be set up by using standard techniques. For example, the unit interval [0,1] in the x direction can be partitioned by a uniform grid. Let N_x be the total number of subintervals, h_x = 1/N_x the length of each subinterval, and x_j = j h_x the grid points. The second order derivative in x may then be approximated by

  ∂²/∂x²(d p)(x_j) ≈ (d_{j+1} p_{j+1} − 2 d_j p_j + d_{j−1} p_{j−1}) / h_x²,

with truncation error of order O(h_x²). For the first order derivative in x we may use the center difference

  (p(x_{j+1}) − p(x_{j−1})) / (2h_x),

which has truncation error of order O(h_x²). With the above difference approximations, we get the following semi-discrete approximation of (7.2):

  ∂p_j/∂t = a_j (p_{j+1} − p_{j−1})/(2h_x) + b_j ∂p_j/∂y + c_j p_j
            + d_j (d_{j+1} p_{j+1} − 2 d_j p_j + d_{j−1} p_{j−1})/h_x²
            + (d_j/h_x) [∂(f p_{j+1})/∂y − ∂(f p_{j−1})/∂y] + f ∂²/∂y²(f p_j),   (7.3)

for 1 ≤ j ≤ N_x − 1, y ∈ (0,1) and t > 0, with p₀ = p_{N_x} = 0. Here a_j = a(x_j, y, t), b_j = b(x_j, y, t), c_j = c(x_j, y, t), and d_j = d(x_j, t).

Now let us demonstrate that this semi-discrete approximation is convergent when the number of grid points in the x direction, N_x, goes to infinity, i.e., when h_x → 0.
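The chain-rule identity ∂p/∂y = ∂p̃/∂y − t ∂p̃/∂x behind the transformation p̃(x,y,t) = p(x + yt, y, t) of Section 7.1 can be checked numerically on any smooth density. The sketch below (the sample density, evaluation point and helper name are our own) verifies it with central differences.

```python
import numpy as np

def num_d(fun, x, y, dx=0.0, dy=0.0, h=1e-5):
    """Central-difference directional derivative of fun at (x, y)."""
    return (fun(x + dx * h, y + dy * h) - fun(x - dx * h, y - dy * h)) / (2 * h)

# A sample smooth density and its shifted version p~(x, y) = p(x + y t, y).
t = 0.7
p = lambda x, y: np.exp(-x**2 - 0.5 * y**2)
pt = lambda x, y: p(x + y * t, y)

x0, y0 = 0.3, -0.4
# dp/dy evaluated at the shifted point (x0 + y0 t, y0) should equal
# (d/dy - t d/dx) applied to p~ at (x0, y0).
lhs = num_d(p, x0 + y0 * t, y0, dy=1.0)
rhs = num_d(pt, x0, y0, dy=1.0) - t * num_d(pt, x0, y0, dx=1.0)
```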
Using the similar techniques as in Section 6.2, we first examine the truncation error of the approximation by substituting the exact solution p(x,y, t) of (7.2) into equation (7.3). For simplicity we use p(xj) to denote p(zrj,y,t). Then, 3 . _ .p(xj+1)- P(xj-x) 010%) . . @0431» ‘ a] 2h: + b} By + C)P($J) +djdj+1P13j+1)’ 2di££$jl + dj—1P111—1) dj 61fP11j+ill alfpf-Tj-lll 32 for 1 Sj S N: — 1, y E (0,1)andt > 0, with p(xo) = p(u/1) = 0, and Tj = aj[5;(P(l'j )_p(rj+1)2;xp(xj-1)] 42 32 dJ+1P(Ij+1) — 2de(1‘1) + dj—1P($j—1) +d,- [20?(4M1‘m - h}, ] a'J W - W +2dj [m(fflxfl) - y 2b,. y 1- Thus, and (13,73) S Mhi, (7-4) for some constant M > 0 if we assume that the solution p = p(ar,y,t) of (7.2) is sufficiently smooth and its derivatives in .2: are Vuniformly bounded. Now, define eJ- = p(yJ)— pJ, j- — l, 2,. —.1 Then 861' €j+1—8j_1 66' dj+1ej+1 —2d'8'+d'_1€'_1 _ z ._ d J J J J at J 2h, +b’8y +CJJ+ h; d_j a(_f__ej+l)_a(_f___eJ'--1)_J_ and co = 6N, = 0. (7.6) Theorem 10 Assume that in a given time interval [0, T], the solution p of (7.2) is smooth and the coefl‘icients satisfy the previously specified conditions. Then, in the given time interval, the solution {pJ-,1 Sj S N, — 1} of the semi-discrete approxi- mation converges to p as N, —> oo, i.e., hr -> 0. Proof. Using (4 5) we get for any j, 1882' _ ._ a . i—a—tj- = aJ-eJe—Jiiih—ei-l-l-bJeJ-a—i-l-cJ-e? d 16 1- Zdj 8 +£1-16 _ +dJeJ J+ J+ hJ2J J- J1 d-e- 6(fe' ) 3(fe-_) 02 +11." at“ — a; ‘ )+,.J8_y2(f.J)+.J.J, Summing over all j, we will get 1 N’-166§N’—1 8 +1—1 68 Ng-l 212307 =21aJ~eJ~——————J+N:bJ-eJ 65+21cJ-ej - J: J: 43 N -l . . . . ._ . , J 8J( h2 hi 1' J=l Nz—l d6. a(fe ) a_(fe Nx-l 62 J J 1+1 _ J— l) + g hx( 8y 8y )+EfeJa—_y 2(feJ) Nx—l + Z TJ'BJ' J=l N -1 N —1 N —1 2 N -1 _ J‘ aJeJ-eJ-+1 _ 1 aJ+leJeJ+1 +1 ’ 86 j ‘ _g 2h: J; 2h. 2;,JJTZ +2“ + N‘E-Idjej(dj+lej+l-djej) _Nil dj+1€j+1(dj+1€j+1-dj€j) . 
h2 , h2 3:0 1: J=0 1‘ + Nil dJ-_1eJ--16(feJ-) _ iii-:1 dj+1€j+1 a(fej) j=1 hr 83/ j=1 h, 8y Nx-l 62 f6) N, —l +ZfeJ—(-—yJ)+ZTJeJ-. 1:1 Integrating over y, we have le-l d 1 2 Nz-l 1 aj—aj+ld1N:—lbj §;a:.Jij = ;/.J1Jw2—h— - J;/.Jf-‘Z —Jy N—l N -l x 1‘ d. . + Z [0 C: eidy- 2 /y( d’ile’iiz e: dy N’JI/J dJ+leJ+l — dj-lej—l a(fejld " h a y j=l 0 1' y Nz—l N: -1 ‘2] (a #61049 +:/T:ejdy i=1 0 i=1 INJ"l 1 a- — a- < __ J 1+1 d — 4 J2, [00 h, CJ“ y +9610 ———a——-hJ+‘ ejdy —JN;l/JJ‘§O 813-de Nz’Jl le‘l lNg—l +Z/o:J-ede+- ;;/ ede+- :Z/O TJ-de, J=121=1 since _N:Z-i/J dj+1€j+1h - j- -1€j—13(f€j)d a y y <1N:-11/0( dJ+lej+l“2 J'- lei- 1) 2dy +Nil/l (%_ (feJ) )2d j=1 44 ..2 l. J [(dJ+leJ+-l - (1161') +0116: - dj-lej-xll2 'o\. D“ N + 2.. 2 g "I 11M“: o\.-_ A 93 3 X? K» V N A dy Alr— ll ... .-| IA NIH cb\. N -l l(dj+18j+l— d j€j)2 +1 : JE/l( (d 161'“— :1; 16J- 1)2dy j- h: ”=4 We) "’ + f (a J ) d E 0 6y y Ng-l 1 d ___d Ng-l S E/O (j+l"31+}:2 1‘31)2 dy + 2/016((f_61))2d J=0 1' 1"! Thus, le-l d 1 2 2 NI-l — —— < 21:1 dt/Oedy _ MINZl/e edy+M2jE=21/11'de N,-1 S JWIZ/O ede+M3hi for some positive constants M1, M2 and M3. Using the Gronwall inequality, we obtain for any t E [0,T] , Nx-l Nx-l j=l for some positive constants M4 and A15. Convergence now follows. [3 7 .3 Convergence of the Gauss-Galerkin approximations We rewrite equation (7.3) as 019- p- —p-_ d- p' —2d-p-+d'_ __ 3? +d- B(fp'u) 0(fp'-1) b J —J —-—J-—- — _J —— . . To solve (7.3) by Gauss-Galerkin method, let us formulate its weak form. Taking the inner product of a test function v 6 W2’2(0, 1) fl C2[O, 1] with equation (7.3) and integrating by parts with suitable assumptions on the coefficients, we have, for 1 S j .<_ N: - 1, d 1 22(ij v) : 2—h-(aj(Pj+1 - pj-l)av) + (CJpva) 45 d- 6 +E’;(d,-+,p,-+1 - 2(1in + dj-xpj—nw) - (Pia 5—(ij)) d 8v d' 01) 82 fzv) -h—’(fpj+1 8—y)+ imam —)+ (fp pi, -—(—y -——). 
(7.7) Let us now approximate the probability measure pJ-dy by d/‘lj( y,t :Zlajn(t) 6(yjn( t))dy) ISjSNI‘—1, (7'8) and choose w(y) = y’ for l = 0,1,2,...,2n — 1. Then (7.7) implies for 0 S l S 2n—1,1$jSN,-landt>0, d" 1" d-tkzlai‘mUMyfltnl : )hjlgnaf+1.(t)nt)aj(yj+1(t ))(yf+1(t))l 2,, go? 1.(n (y, 1(t))(y;‘_,(t))’ 1: l +}::a,-,n( t)cj(y "(t))(y§‘(t))' +dJ-d 1 hJ+ Zaj+l.(nt) (yj+1(t))l +0131;an __.__-d-1d, J Ear-lint) (yj—1(t))l b —:aj.(nt) %it»l%’£y(yj(t)) —Zla,-,(tn )b-(yj( mm)“ 1:: 1 dj ill-“30H“aj+1n(t)(yf+1(t))l_l ’7 = fiilflyfmi A M ”(w(yfilunl-l + gig—fl fy a’° (t)(y,(t))' + Z: 21 U3; (yfun emu) (y:(t))’-‘ + 231(1- 1)f(yf(t))af,n(t)(yf(t))”’- (7.9) k=l 46 Lemma 8 Let {p3‘(t)} be the Gauss-Galerkin measures defined as in (7.8). Then for any j, 1 Sj S N, — 1, let I be any fixed positive integer and l mzmu): / y’dy2, new]. We have the set {mg-m(t) : n 2 1(1+ 1)} is uniformly bounded and equicontinuous in 2 t 6 [0,T] for each I and j. Proof: By definition of {p3‘(t)}, we have n mama) = zagnuxysnun'. k=l Using the boundedness of the coefficients and the assumption that the weights {af,n(t)} are positive and the nodes{y§‘(t)}}§=1 are distinct in (0. l) for t 6 [0, T], we get from equation (7.9), d dtmi.n(t) S A1’7"‘$'—1,n(t)'+' ’\2 min“) + A3 mi+l.n(t) _ _ 1 +g‘(m§,,3(t),m§,n‘(t),m§,n(t)), n 2 5(1 + 1), where A1, A2 and A3 are some positive constants depending on the coefficients and h, and g1 is some linear function with nonnegative coefficients. Now, define d _ _ min) = A1m§-1 +g'(m§ ”0th ‘(ttmi-(tn (7-10) and ml'(0)=(Q(I13y)1yl)1 ISjSIVJ:_lalZO Using the comparison principle given in Section 6.1, we get . 1 mjmu) _ mS-(t), Vt e [0,T], 1 g] g Nx— 1, n 2 5(1+1) . We notice that g] is some linear function with nonnegative coefficients. Equation (7.10) now becomes 1 It _ ~ I (t) ~ It)+~ l (t)+~ l—2(t)+~ l-l(t n11 dtmj( )—c1mJ-_l v+c2mJ-( c3mj+1 c4mj csmj ), (a. ) where 51 , 62 , E3 , E4 and 55 are some positive constants depending on the coefficients and h. 
Let M(t) = (M_{j,k}) be an (N_x − 1) × (l + 1) matrix with element M_{j,k} equal to m̃_j^k(t). We may rewrite (7.11) in matrix form as

  dM/dt = A M,

for some nonnegative matrix A. Then M(t) = e^{At} M(0). Thus

  m̃_j^k(t) ≤ C₁ max_{1≤j≤N_x−1} m̃_j^k(0) = C₂ max_{1≤j≤N_x−1} (q(x_j, y), y^k) ≤ M₁, ∀t ∈ [0,T],

where 1 ≤ j ≤ N_x − 1, n ≥ (l+1)/2, and M₁ is some constant which depends on h_x, the coefficients, the initial condition q = q(x,y), the time interval [0,T] and the value of l. Therefore, for each given l, the set {m_{j,n}^l(t) : n ≥ (l+1)/2} is uniformly bounded. The proof of the equicontinuity is similar to the proof of Lemma 4 in Section 6.3. □

Now that we know {m_{j,n}^l(t) : n ≥ (l+1)/2} is uniformly bounded and equicontinuous in [0,T] for each l and j, following the same arguments as in Section 6.3 we first obtain the convergence of the Gauss-Galerkin approximation to the semi-discrete approximation. Combining this result with the convergence of the semi-discrete approximation, we have the convergence of the Gauss-Galerkin/finite difference approximation to the solution of equation (7.2).

8 Computational algorithms

We discuss here some methods for the solution of the Gauss-Galerkin/finite difference approximation.

8.1 Runge-Kutta and multi-step methods

The Gauss-Galerkin/finite difference approximation yields a system of 2n × (N_y − 1) ordinary differential equations for the n(N_y − 1) nodes and n(N_y − 1) weights (see equation (7.9)). Standard numerical methods such as Runge-Kutta and multi-step methods may be used to solve such equations.

8.2 Predictor-corrector scheme via moments

Due to the special structure of the Gauss-Galerkin approximation and its close relationship with the Gauss quadrature and the moment problems, an alternative way of solving the ordinary differential equation systems has been studied, at least for the one-dimensional problem.
Here, we present some stability conditions which show why 48 such a procedure is feasible and how it may be applied to two-dimensional cases as well. First we rewrite the Gauss-Galerkin/finite difference approximation at t = to as d i c 5 ,. c- 25' 6 Emma) = (g: + h—g1m.-...(t1—(;’; + h—51mm(t)+;,-3m3.1..(t) +Za,,,t() )(L'v,)( f,(t)), (8.1) where L;- is defined as in (5.9). Note that v,(:r) = :r‘, 0 S i S 2n — 1. The moments are defined by m3,,,(t) = Z af,,,(t)(a:f,,,(t))l , V0 S i 3 2n — 1 . k— The proposed computational procedure is as follows: 1 Given {m3.,,(to) ,2301, compute {af.n(to)} and {Ifm(to)}. [Q Predict {m3‘,,(to + At)} for i = 0,1, . . . ,2n — 1. Evaluate the right hand side of (8.1) at t— — to and using the forward difference to approximate the time derivative to predict {m3,, (to + At)} for i— — 0,1,.. , 2n — 1. 3 With {m3n (to+At) $261, compute the weights {a3(to+At)) and nodes {$300+ At)}. 4 Evaluate the right hand side of (8. 1) using the new moments {m3,, (to + At)}, weights {aJ-( to + At )} and nodes «[1,-(to + At ).} Find the new updates of {m3',,(to + At)} for i = 0,1,...,2n - 1. 5 Repeat steps 3 and 4 above or march to the next time—step. One may consider step 2 as a predictor and steps 3 and 4 as a corrector. Obviously, the core of this procedure is in step 3. The actual implementation of step 3 can be found in section 9.1 (see also [6]). Here we present the next theorem to substantiate the given procedure. Theorem 11 Given a set of positive weights {af,,,(to)}£=1, a set of distinct nodes {:123—‘,,(to)}},‘_._l in (0,1) and the corresponding moments {m3’,,(to) f2; , there exists a small enough At such that if {m3,, (to + At),_?25'1 are computed from step 2, then 49 there exists a set of positive weights {a393,(to + At)}},‘=,, and a set of distinct nodes {1333,00 + At)}},‘=1 C (0,1) such that for any 0 g i 3 2n — 1, mgmm + At) = Z a:,,(to + At)(r3-",,(to + any} k=1 Proof : Define m3',,(t) = 0 for any i 2 2n. 
We have Delta^k m_{j,n}^l(t) >= 0, k, l = 0, 1, 2, ..., where Delta^k m_{j,n}^l(t) is defined for each k from {m_{j,n}^i(t)} by (5.17). So, if Delta t is small enough, by continuity we get

    Delta^k m_{j,n}^l(t_0 + Delta t) >= 0,  k, l = 0, 1, 2, ....

By the theorem on the Hausdorff moment problem, we obtain the existence of positive weights {a_{j,n}^k(t_0 + Delta t)}_{k=1}^n and distinct nodes {x_{j,n}^k(t_0 + Delta t)}_{k=1}^n in (0,1) such that

    m_{j,n}^i(t_0 + Delta t) = sum_{k=1}^n a_{j,n}^k(t_0 + Delta t) (x_{j,n}^k(t_0 + Delta t))^i,  for all 0 <= i <= 2n - 1.

9 Implementation

In this section, we discuss various issues in the practical implementation of our Gauss-Galerkin/finite difference algorithms. First, a general description of the algorithm is given. Numerical results on a number of test problems are then presented. For model test problem 1, we compare the numerical solution based on the Gauss-Galerkin/finite difference method with that based on the conventional two-dimensional finite difference method. It is demonstrated that the Gauss-Galerkin/finite difference methods are superior to the two-dimensional finite difference methods in achieving high accuracy. For test problem 2, the exact solution is known analytically; this allows us to compare the numerical solution with the exact solution. For the third problem, a simple second order random linear oscillation problem is formulated in terms of its Fokker-Planck equation, and the partial differential equation is solved by the Gauss-Galerkin/finite difference method. Graphics plots are provided to show various behaviors of the exact solution and the numerical solution.

9.1 Description of the algorithm

The Gauss-Galerkin/finite difference method uses the finite difference method in the x variable and the Gauss-Galerkin method in the y variable. Its implementation consists mainly of the following features (see the appendix for a flow chart).

1. Initial condition
Use Simpson's rule to compute the integrals for the moments of the initial condition.
2. Differencing in the x-variable
At each time step, the centered difference is used in the x variable.

3. Finding the associated symmetric tridiagonal matrix
An important step is to compute the inner products of the associated family of orthogonal polynomials from the corresponding moments. This relies on an expansion of the product of two polynomials in terms of monomials. We implement the procedure by the algorithm given in Figure 1, which is different from the ones given in the theses of Abrouk and Hajjafar [1, 6].

4. Finding the weights and nodes
We use the following theorem [14] to obtain the weights and nodes from a tridiagonal matrix.

Theorem 12  Let the polynomials {p_i} be defined by the recursions

    p_0(x) = 1,  p_{i+1}(x) = (x - delta_{i+1}) p_i(x) - gamma_{i+1}^2 p_{i-1}(x)  for i >= 0,

where p_{-1}(x) = 0 and

    delta_{i+1} = (x p_i, p_i) / (p_i, p_i),
    gamma_{i+1}^2 = 0 for i = 0;  (p_i, p_i) / (p_{i-1}, p_{i-1}) for i >= 1.

Algorithm to form the tridiagonal matrix:
Input:  {m_j, 0 <= j <= 2n-1}: the moments.
Output: {delta_j}: the diagonal elements; {gamma_j}: the off-diagonal elements.
Begin
  Initialize c_0 = 1, c_1 = 0, e_0 = 0,
    alpha_1 = m_0, beta_1 = m_1, delta_1 = beta_1/alpha_1,
    d_0 = -delta_1, d_1 = 1,
    gamma_1 = (m_2/m_0) - (m_1/m_0)^2,
    alpha_2 = delta_1^2 m_0 - 2 delta_1 m_1 + m_2,
    beta_2 = delta_1^2 m_1 - 2 delta_1 m_2 + m_3,
    delta_2 = beta_2/alpha_2.
  For j = 2 : n-1, let
    e_0 = -delta_j d_0 - gamma_{j-1} c_0
    For k = 1 : j-2, let e_k = d_{k-1} - delta_j d_k - gamma_{j-1} c_k  End for
    e_{j-1} = d_{j-2} - delta_j d_{j-1},  e_j = d_{j-1},
    alpha_{j+1} = sum_{k=0}^{j} sum_{i=0}^{j} e_k e_i m_{k+i},
    beta_{j+1}  = sum_{k=0}^{j} sum_{i=0}^{j} e_k e_i m_{k+i+1},
    delta_{j+1} = beta_{j+1}/alpha_{j+1},  gamma_j = alpha_{j+1}/alpha_j.
    For i = 0 : j+1, let c_i = d_i, d_i = e_i  End for
  End for
  For j = 1 : n-1, let gamma_j = sqrt(gamma_j)  End for
End

Figure 1: Algorithm for the construction of the tridiagonal matrix.

Let the matrix

          | delta_1  gamma_2                      |
          | gamma_2  delta_2  gamma_3             |
    J_n = |          ...      ...      gamma_n    |
          |                   gamma_n  delta_n    |

Then:
(i) The roots x_i, i = 1, ..., n, of the n-th orthogonal polynomial p_n are the eigenvalues of the tridiagonal matrix J_n.
(ii) Let v^{(i)} = (v_1^{(i)}, ..., v_n^{(i)})^T be an eigenvector of J_n for the eigenvalue x_i. Suppose v^{(i)} is scaled in such a way that

    (v^{(i)})^T v^{(i)} = (p_0, p_0) = m_0.

Then the weights are given by w_i = (v_1^{(i)})^2, i = 1, ..., n.
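Theorem 12 can be sketched numerically: build J_n from recurrence coefficients and read the nodes and weights off its eigen-decomposition. The sketch below uses the uniform measure on (0,1) (moments m_i = 1/(i+1)), whose monic recurrence coefficients delta_i = 1/2 and gamma_{i+1}^2 = i^2 / (4(4i^2 - 1)) are those of the shifted Legendre polynomials; these formulas are supplied here for the example only and are not taken from the thesis.

```python
import numpy as np

def gauss_from_jacobi(delta, gamma, m0):
    """Nodes/weights per Theorem 12: eigenvalues of the symmetric tridiagonal
    matrix J_n are the nodes; with unit-norm eigenvectors, the weights are
    m0 * (first eigenvector component)^2."""
    J = np.diag(delta) + np.diag(gamma, 1) + np.diag(gamma, -1)
    nodes, V = np.linalg.eigh(J)      # eigh returns orthonormal eigenvectors
    weights = m0 * V[0, :] ** 2
    return nodes, weights

# Example: uniform measure on (0,1), shifted-Legendre recurrence coefficients.
n = 3
delta = np.full(n, 0.5)
i = np.arange(1, n)
gamma = np.sqrt(i**2 / (4.0 * (4.0 * i**2 - 1.0)))
nodes, weights = gauss_from_jacobi(delta, gamma, m0=1.0)

# The n-point rule reproduces the first 2n moments m_i = 1/(i+1) of the measure.
for k in range(2 * n):
    assert abs((weights * nodes**k).sum() - 1.0 / (k + 1)) < 1e-12
```

This is exactly the step-3 computation of the predictor-corrector scheme once the tridiagonal matrix has been formed from the moments.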
5. Solving the system of ordinary differential equations
Predictor-corrector type methods are used to update the moments for the next time step.

9.2 Model test problem 1

A numerical test is performed first for the following problem:

    u_t = (y^2 (1-y)^2 u)_{yy} + u_{xx},

with boundary conditions

    u(x, 0, t) = u(x, 1, t) = 0,   u_x(0, y, t) = u_x(1, y, t) = 0,

and initial condition

    u(x, y, 0) = pi cos^2(pi x / 2) sin(pi y).

The purpose of this test is to show the superior convergence properties of the Gauss-Galerkin approach, as compared with those of the conventional finite difference method.

The Gauss-Galerkin/finite difference method uses finite differences in the x variable and Gauss-Galerkin in the y variable. We use five grid points in the x direction and three nodes in the y direction for each grid point in x. The locations of the nodes (o, A, and *) corresponding to the grid point x = 0.5 are plotted in Figure 2, and the corresponding weights are given in Table 1. We can see that the nodes with positive weights have clearly moved to the boundary of the interval. Similar behavior is observed for all other grid points in x.

    time    weights for o   weights for A   weights for *
    0.0     0.298631        0.712243        0.298143
    0.05    0.289834        0.62341         0.28954
    0.1     0.288038        0.561437        0.287862
    0.45    0.309076        0.395482        0.309062
    1.0     0.356011        0.288322        0.35601
    2.0     0.411019        0.177964        0.411019
    3.0     0.44199         0.11602         0.44199
    4.0     0.46051         0.078981        0.46051
    5.0     0.472101        0.055799        0.472101
    6.0     0.479639        0.040723        0.479639
    7.0     0.48471         0.030581        0.48471
    8.0     0.488225        0.02355         0.488225
    9.0     0.49073         0.018541        0.49073
    10.0    0.492558        0.014885        0.492558
    12.0    0.494962        0.010077        0.494962
    14.0    0.496402        0.007196        0.496402
    16.0    0.49732         0.00536         0.49732
    18.0    0.497936        0.004128        0.497936
    20.0    0.498367        0.003267        0.498367
    25.0    0.499004        0.001992        0.499004
    30.0    0.499324        0.001352        0.499324

Table 1: Gauss-Galerkin/finite difference solution: three nodes in the y direction. o, A, and * are symbols for nodes.
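Step 1 of Section 9.1 computes the initial moments by Simpson's rule. A minimal sketch for the initial condition of test problem 1 at the grid point x = 0.5 follows; the composite-Simpson helper is written for this demo. The computed values m_0 = 1 and m_1 = 0.5 agree with the t = 0 row of Table 3.

```python
import numpy as np

def simpson(f, a, b, m=200):
    """Composite Simpson's rule with m (even) subintervals."""
    x = np.linspace(a, b, m + 1)
    w = np.ones(m + 1)
    w[1:-1:2] = 4
    w[2:-1:2] = 2
    return (b - a) / (3 * m) * (w * f(x)).sum()

def u0(x, y):
    """Initial condition of test problem 1."""
    return np.pi * np.cos(np.pi * x / 2) ** 2 * np.sin(np.pi * y)

x = 0.5
moments = [simpson(lambda y: y**i * u0(x, y), 0.0, 1.0) for i in range(6)]
assert abs(moments[0] - 1.0) < 1e-8   # m0 at x = 0.5 (cf. Table 3, t = 0)
assert abs(moments[1] - 0.5) < 1e-8   # m1 at x = 0.5 (cf. Table 3, t = 0)
```

The second moment computed this way, 0.5 - 2/pi^2 = 0.29736, likewise matches the t = 0 entry of Table 3.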
(See Figure 2.)

Figure 2: Gauss-Galerkin/finite difference solution: movement of the nodes toward the boundaries 0.0 and 1.0 as t -> infinity.

At time 30.0, the numerical solution is approaching steady state, based on the tolerance we have chosen. The solution becomes uniform in x while it approaches zero in the interior and piles up at the boundary. The nodes and weights at the steady state are given in Table 2, and they are the same for all grid points in x.

    y-nodes    0.006261    0.500000    0.993739
    weights    0.499324    0.001352    0.499324

Table 2: Gauss-Galerkin/finite difference solution: three nodes in the y direction.

In Table 3, the zeroth through fifth order moments of the Gauss-Galerkin/finite difference solution at the grid point x = 0.5 are given at various times.

Next, conventional two-dimensional finite difference methods in both directions were implemented. In contrast to the high accuracy in estimating moments using Gauss-Galerkin/finite difference methods, the standard finite difference method in two dimensions suffers inaccuracy due to the piling up of the solution near the boundary. The related results are presented in Tables 4 and 5. Table 4 consists of the moments computed using 20 grid points, while Table 5 consists of the moments computed using 40 grid points. In addition, the graphs for 20 grid points and 40 grid points at t = 0.0 and t = 30.0 are shown in Figures 3 and 4.

By comparing the test results, we can see that Gauss-Galerkin/finite difference methods have remarkably high accuracy, and they are much more suitable for solving equations with degenerate diffusion coefficients than conventional two-dimensional finite difference methods. This is mainly because Gauss-Galerkin methods can capture the Dirac-delta-measure-like singular behavior of the solution very efficiently.

9.3 Model test problem 2

Consider the second order Langevin equation

    x'' + 2x' = n(t);

its corresponding Fokker-Planck equation, according to the previous sections, is given as

    u_t = -(yu)_x + (2yu)_y + (1/2) u_{yy}.

    Time   m0        m1        m2        m3        m4        m5
    0.0    1.00000   0.50000   0.29736   0.19604   0.13846   0.10275
    1.0    1.00000   0.50000   0.36224   0.29336   0.24855   0.21579
    2.0    1.00000   0.50000   0.40020   0.35030   0.31524   0.28760
    3.0    1.00000   0.50000   0.42430   0.38645   0.35832   0.33504
    4.0    1.00000   0.50000   0.44043   0.41064   0.38752   0.36772
    5.0    1.00000   0.50000   0.45168   0.42753   0.40811   0.39106
    6.0    1.00000   0.50000   0.45982   0.43973   0.42313   0.40826
    7.0    1.00000   0.50000   0.46589   0.44883   0.43440   0.42130
    8.0    1.00000   0.50000   0.47053   0.45579   0.44310   0.43142
    9.0    1.00000   0.50000   0.47416   0.46124   0.44994   0.43945
    10.    1.00000   0.50000   0.47707   0.46560   0.45544   0.44593
    11.    1.00000   0.50000   0.47943   0.46915   0.45994   0.45126
    12.    1.00000   0.50000   0.48139   0.47208   0.46367   0.45570
    13.    1.00000   0.50000   0.48303   0.47454   0.46680   0.45944
    14.    1.00000   0.50000   0.48441   0.47662   0.46947   0.46264
    15.    1.00000   0.50000   0.48560   0.47841   0.47176   0.46539
    16.    1.00000   0.50000   0.48663   0.47995   0.47375   0.46779
    17.    1.00000   0.50000   0.48753   0.48130   0.47549   0.46989
    18.    1.00000   0.50000   0.48833   0.48249   0.47702   0.47174
    19.    1.00000   0.50000   0.48903   0.48354   0.47838   0.47339
    20.    1.00000   0.50000   0.48965   0.48448   0.47960   0.47487

Table 3: Gauss-Galerkin/finite difference solution: three nodes in the y direction.

    Time   m0        m1        m2        m3        m4        m5
    1.0    0.99796   0.49898   0.35708   0.28613   0.24091   0.20855
    2.0    0.99794   0.49897   0.38500   0.32801   0.28930   0.25972
    3.0    0.99794   0.49897   0.39766   0.34700   0.31127   0.28300
    4.0    0.99794   0.49897   0.40338   0.35559   0.32121   0.29354
    5.0    0.99794   0.49897   0.40598   0.35948   0.32571   0.29831
    6.0    0.99794   0.49897   0.40715   0.36124   0.32775   0.30047
    7.0    0.99794   0.49897   0.40768   0.36203   0.32867   0.30144
    8.0    0.99794   0.49897   0.40792   0.36239   0.32908   0.30188
    9.0    0.99794   0.49897   0.40803   0.36256   0.32927   0.30208
    10.    0.99794   0.49897   0.40808   0.36263   0.32936   0.30217
    11.    0.99794   0.49897   0.40810   0.36266   0.32940   0.30221
    12.    0.99794   0.49897   0.40811   0.36268   0.32941   0.30223
    13.    0.99794   0.49897   0.40811   0.36269   0.32942   0.30224
    14.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    15.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    16.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    17.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    18.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    19.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225
    20.    0.99794   0.49897   0.40812   0.36269   0.32943   0.30225

Table 4: Difference solution: 20 grid points in the y direction.

    Time   m0        m1        m2        m3        m4        m5
    1.0    0.99949   0.49974   0.36190   0.29298   0.24936   0.21838
    2.0    0.99949   0.49974   0.39755   0.34645   0.31214   0.28622
    3.0    0.99949   0.49974   0.41728   0.37605   0.34705   0.32418
    4.0    0.99949   0.49974   0.42820   0.39243   0.36640   0.34524
    5.0    0.99949   0.49974   0.43424   0.40149   0.37710   0.35689
    6.0    0.99949   0.49974   0.43759   0.40651   0.38303   0.36335
    7.0    0.99949   0.49974   0.43944   0.40929   0.38631   0.36692
    8.0    0.99949   0.49974   0.44047   0.41083   0.38813   0.36890
    9.0    0.99949   0.49974   0.44103   0.41168   0.38913   0.36999
    10.    0.99949   0.49974   0.44135   0.41215   0.38969   0.37060
    11.    0.99949   0.49974   0.44152   0.41241   0.39000   0.37093
    12.    0.99949   0.49974   0.44162   0.41256   0.39017   0.37112
    13.    0.99949   0.49974   0.44167   0.41264   0.39026   0.37122
    14.    0.99949   0.49974   0.44170   0.41268   0.39031   0.37128
    15.    0.99949   0.49974   0.44172   0.41270   0.39034   0.37131
    16.    0.99949   0.49974   0.44173   0.41272   0.39036   0.37133
    17.    0.99949   0.49974   0.44173   0.41272   0.39037   0.37134
    18.    0.99949   0.49974   0.44173   0.41273   0.39037   0.37134
    19.    0.99949   0.49974   0.44174   0.41273   0.39038   0.37134
    20.    0.99949   0.49974   0.44174   0.41273   0.39038   0.37135

Table 5: Difference solution: 40 grid points in the y direction.

Figure 3: Difference solution at t = 0.0 (initial condition) and t = 30.0: 20 grid points in y.

Figure 4: Difference solution at t = 0.0 (initial condition) and t = 30.0: 40 grid points in y.
In this numerical test, we solve the following problem:

    u_t = -(yu)_x + (2yu)_y + (1/2) u_{yy} + f(x, y, t),

where f is constructed in such a way that the exact solution to the above equation can be formulated analytically as u(x, y, t) = X(x) e^{-y^2 t}. If the interval for x is chosen to be [-x_1, x_1], then X(x) is (x_1 - x)(x_1 + x). With the exact solution, one can compare the values of the exact moments with the numerical moments computed by the Gauss-Galerkin/finite difference method (finite difference in the x variable and Gauss-Galerkin in the y variable).

Numerical results are presented in Tables 6 through 11. The zeroth to seventh order moments of the Gauss-Galerkin/finite difference solution at x = 0.0 are given in Tables 6 and 7. Their differences from the corresponding exact moments are shown in Tables 8 and 9. In Table 10, the maximum of the time difference of the numerical moments, i.e. max_n |m_n(0.0, t + Delta t) - m_n(0.0, t)| / Delta t, is given. This is used to determine whether the solution is approaching steady state. As the time reaches 13.056, the algorithm stops as the solution is near steady state, based on the tolerance we have chosen. At the steady state, as the weights approach zero, the distribution of the nodes is given in Table 11. The symmetry of the nodal distribution with respect to the origin is also a property of the exact solution. The nodes are almost equally spaced.

9.4 Model test problem 3

As in the previous problem, the Fokker-Planck equation corresponding to the second order equation is known as:

    u_t = -(yu)_x + (2yu)_y + (1/2) u_{yy}.

We may consider the initial condition u(x, y, 0) = delta(x) delta(y), where delta is the Dirac-delta measure, or, more generally, an initial density distribution of the type u(x, y, 0) = u_0(x, y).

    Time    m0           m1          m2           m3
    0.001   24.9875000   0.0000000   12.4937500   0.0000000
    0.251   19.4590679   0.0000000    9.7295339   0.0000000
    0.500   15.1613706   0.0000000    7.5806848   0.0000000
    0.501   15.1537899   0.0000000    7.5768944   0.0000000
    0.750   11.8010456   0.0000000    5.9005217   0.0000000
    1.000    9.1900891   0.0000000    4.5950431   0.0000000
    1.250    7.1568013   0.0000000    3.5783992   0.0000000
    1.500    5.5733742   0.0000000    2.7866858   0.0000000
    1.750    4.3402771   0.0000000    2.1701375   0.0000000
    2.000    3.3800002   0.0000000    1.6899992   0.0000000
    2.250    2.6321825   0.0000000    1.3160905   0.0000000
    2.500    2.0508434   0.0000000    1.0254210   0.0000000
    3.001    1.2431225   0.0000000    0.6215607   0.0000000
    4.001    0.4572057   0.0000000    0.2286024   0.0000000
    5.000    0.1681552   0.0000000    0.0840773   0.0000000
    6.000    0.0618460   0.0000000    0.0309227   0.0000000
    7.000    0.0227467   0.0000000    0.0113731   0.0000000
    7.500    0.0138021   0.0000000    0.0069008   0.0000000
    7.500    0.0137952   0.0000000    0.0068973   0.0000000
    10.001   0.0011322   0.0000000    0.0005659   0.0000000
    12.501   0.0000933   0.0000000    0.0000465   0.0000000
    13.056   0.0000537   0.0000000    0.0000267   0.0000000

Table 6: Gauss-Galerkin/finite difference solution: the values of the numerical moments of order zero to three along x = 0.0.

    Time    m4           m5          m6           m7
    0.001   18.7406250   0.0000000   46.8515625   0.0000000
    0.251   14.5943008   0.0000000   36.4988288   0.0000000
    0.500   11.3710262   0.0000000   28.4400844   0.0000000
    0.501   11.3653407   0.0000000   28.4258655   0.0000000
    0.750    8.8507808   0.0000000   22.1368620   0.0000000
    1.000    6.8925628   0.0000000   17.2391339   0.0000000
    1.250    5.3675972   0.0000000   13.4250108   0.0000000
    1.500    4.1800274   0.0000000   10.4547550   0.0000000
    1.750    3.2552051   0.0000000    8.1416623   0.0000000
    2.000    2.5349978   0.0000000    6.3403366   0.0000000
    2.250    1.9741349   0.0000000    4.9375505   0.0000000
    2.500    1.5381308   0.0000000    3.8470514   0.0000000
    3.001    0.9323406   0.0000000    2.3318964   0.0000000
    4.001    0.3429033   0.0000000    0.8576424   0.0000000
    5.000    0.1261156   0.0000000    0.3154303   0.0000000
    6.000    0.0463839   0.0000000    0.1160114   0.0000000
    7.000    0.0170595   0.0000000    0.0426676   0.0000000
    7.500    0.0103510   0.0000000    0.0258890   0.0000000
    7.500    0.0103459   0.0000000    0.0258761   0.0000000
    10.001   0.0008488   0.0000000    0.0021228   0.0000000
    12.501   0.0000697   0.0000000    0.0001742   0.0000000
    13.056   0.0000400   0.0000000    0.0001000   0.0000000

Table 7: Gauss-Galerkin/finite difference solution: the values of the numerical moments of order four to seven along x = 0.0.

    Time    m0          m1          m2           m3
    0.001   0.0000000   0.0000000    0.0000000   0.0000000
    0.251   0.0000000   0.0000000    0.0000000   0.0000000
    0.500   0.0000000   0.0000000   -0.0000005   0.0000000
    0.501   0.0000000   0.0000000   -0.0000005   0.0000000
    0.750   0.0000000   0.0000000   -0.0000011   0.0000000
    1.000   0.0000001   0.0000000   -0.0000014   0.0000000
    1.250   0.0000003   0.0000000   -0.0000013   0.0000000
    1.500   0.0000005   0.0000000   -0.0000011   0.0000000
    1.750   0.0000006   0.0000000   -0.0000008   0.0000000
    2.000   0.0000008   0.0000000   -0.0000005   0.0000000
    2.250   0.0000009   0.0000000   -0.0000003   0.0000000
    2.500   0.0000010   0.0000000   -0.0000002   0.0000000
    3.001   0.0000011   0.0000000    0.0000000   0.0000000
    4.001   0.0000012   0.0000000    0.0000002   0.0000000
    5.000   0.0000011   0.0000000    0.0000002   0.0000000
    6.000   0.0000010   0.0000000    0.0000002   0.0000000
    7.000   0.0000009   0.0000000    0.0000002   0.0000000
    7.500   0.0000009   0.0000000    0.0000002   0.0000000
    7.500   0.0000009   0.0000000    0.0000002   0.0000000
    10.001  0.0000006   0.0000000    0.0000001   0.0000000
    12.501  0.0000004   0.0000000    0.0000001   0.0000000
    13.056  0.0000004   0.0000000    0.0000001   0.0000000

Table 8: Gauss-Galerkin/finite difference solution: at x = 0.0, difference between numerical moments and exact moments of order zero to three.
    Time    m4           m5          m6          m7
    0.001    0.0000000   0.0000000   0.0000000   0.0000000
    0.251   -0.0000002   0.0000000   0.0130764   0.0000000
    0.500   -0.0000017   0.0000000   0.0125145   0.0000000
    0.501    0.0000017   0.0000000   0.0125094   0.0000000
    0.750   -0.0000034   0.0000000   0.0099015   0.0000000
    1.000   -0.0000039   0.0000000   0.0077171   0.0000000
    1.250   -0.0000036   0.0000000   0.0060089   0.0000000
    1.500   -0.0000029   0.0000000   0.0046791   0.0000000
    1.750   -0.0000023   0.0000000   0.0036439   0.0000000
    2.000   -0.0000017   0.0000000   0.0028377   0.0000000
    2.250   -0.0000013   0.0000000   0.0022100   0.0000000
    2.500   -0.0000009   0.0000000   0.0017219   0.0000000
    3.001   -0.0000005   0.0000000   0.0010439   0.0000000
    4.001    0.0000000   0.0000000   0.0003841   0.0000000
    5.000    0.0000001   0.0000000   0.0001414   0.0000000
    6.000    0.0000001   0.0000000   0.0000521   0.0000000
    7.000    0.0000001   0.0000000   0.0000193   0.0000000
    7.500    0.0000001   0.0000000   0.0000118   0.0000000
    7.500    0.0000001   0.0000000   0.0000117   0.0000000
    10.001   0.0000001   0.0000000   0.0000011   0.0000000
    12.501   0.0000001   0.0000000   0.0000002   0.0000000
    13.056   0.0000001   0.0000000   0.0000002   0.0000000

Table 9: Gauss-Galerkin/finite difference solution: at x = 0.0, difference between numerical moments and exact moments of order four to seven.

    TIME      Maximum of the time difference of moments
    0.0005    46.875000000000
    0.2505    36.487705858505
    0.5005    28.437777611792
    0.7505    22.147816506532
    1.0005    17.247759848566
    1.2505    13.431729474020
    1.5005    10.459985598491
    1.7505     8.1457350166581
    2.0005     6.3435080228516
    2.2505     4.9400201311993
    3.0005     2.3330626924851
    4.0005     0.8580712150043
    5.0005     0.3155878452433
    6.0005     0.1160692655025
    7.0005     4.268883345E-02
    7.50045    2.588885695E-02
    10.0005    2.123773380E-03
    12.5005    1.742318613E-04

Table 10: Gauss-Galerkin/finite difference: maximum of the time-difference of the numerical moments.

    y-nodes   -1.65040   -0.52387   0.52387   1.65040

Table 11: Gauss-Galerkin/finite difference solution: at x = 0.0, there are four nodes in the y-direction.
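Test problem 3 is the Fokker-Planck equation of the linear system x' = y, y' = -2y + n(t) with unit-intensity white noise (matching the (1/2)u_yy term). A quick independent check of the moment behavior is direct Euler-Maruyama simulation of this SDE from the Dirac initial condition; the closed-form variances used in the assertions follow from standard linear-SDE theory and are part of this sketch's assumptions, not quoted from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, N = 1e-3, 1.0, 100_000
x = np.zeros(N)
y = np.zeros(N)                            # Dirac initial condition at the origin
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), N)
    x += y * dt                            # position update
    y += -2.0 * y * dt + dW                # Euler-Maruyama velocity update

# Exact second moments of the linear SDE at time T (derived, not from the thesis):
vy = (1 - np.exp(-4 * T)) / 4                               # Var y(T)
vx = T/4 - 3/16 + np.exp(-2*T)/4 - np.exp(-4*T)/16          # Var x(T)
assert abs(y.var() - vy) < 0.01
assert abs(x.var() - vx) < 0.01
```

The Monte-Carlo estimates agree with the Gaussian solution discussed below to within sampling and time-discretization error.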
When the initial condition is the Dirac-delta measure, the exact solution (referred to as p(x, y, t)) is the centered Gaussian density

    p(x, y, t) = 1 / (2 pi sqrt(det Sigma(t))) exp( -(1/2) (x, y) Sigma(t)^{-1} (x, y)^T ),

where the covariance matrix Sigma(t) has entries

    Sigma_xx(t) = t/4 - 3/16 + e^{-2t}/4 - e^{-4t}/16,
    Sigma_xy(t) = (1 - e^{-2t})^2 / 8,
    Sigma_yy(t) = (1 - e^{-4t}) / 4.

It is plotted for various times in Figure 5. The moment functions are given in Figure 6.

The initial condition we choose is of the form

    u_0(x, y) = (1 / sqrt(pi)) e^{-x^2} delta(y).

The exact solution (referred to as g(x, y, t), which equals the convolution of p with u_0 in x) is plotted for various times in Figures 7, 8 and 9. To apply the Gauss-Galerkin/finite difference method, we use the Gauss-Galerkin approximation in the y direction while using the finite difference approximation in the x direction. As the initial condition is a singular measure, the conditions of our convergence theorems do not apply. Thus, we choose a smoothed, regularized condition as the initial condition in the approximation. One such smoothing is given simply by solving for the exact solution up to time t = 0.1, which is

    g(x, y, 0.1) = 0.78394 e^{-0.99983 x^2 + 0.09965 xy - 6.06897 y^2},

and then applying the Gauss-Galerkin/finite difference method. Given x, the zeroth through fifth order moments of the exact solution in the y direction have simple analytic forms, which are used to prescribe the initial approximations for the Gauss-Galerkin method. Let

    M_n(x, t) = int_{-infty}^{infty} y^n g(x, y, t) dy.

It follows from the differential equation that

    (d/dt) M_n = int_{-infty}^{infty} y^n g_t dy
               = int_{-infty}^{infty} ( -y^{n+1} g_x + y^n (2yg)_y + (1/2) y^n g_{yy} ) dy
               = -(d/dx) M_{n+1} - 2n M_n + (n(n-1)/2) M_{n-2}.

This system of equations cannot be solved recursively through an analytic approach for general initial conditions.
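The covariance entries above can be checked against the Lyapunov equation Sigma'(t) = A Sigma + Sigma A^T + Q for the linear system x' = y, y' = -2y + n(t), with A = [[0, 1], [0, -2]] and Q = diag(0, 1). The sketch below verifies the identity by numerical differentiation; the matrix form of the system is this sketch's assumption about how the Fokker-Planck equation translates into an SDE.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -2.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])

def Sigma(t):
    """Covariance of the transition density started from the origin."""
    e2, e4 = np.exp(-2 * t), np.exp(-4 * t)
    sxx = t / 4 - 3 / 16 + e2 / 4 - e4 / 16
    sxy = (1 - e2) ** 2 / 8
    syy = (1 - e4) / 4
    return np.array([[sxx, sxy], [sxy, syy]])

# Verify dSigma/dt = A Sigma + Sigma A^T + Q at a few times (central difference).
for t in (0.1, 1.0, 5.0):
    h = 1e-6
    dS = (Sigma(t + h) - Sigma(t - h)) / (2 * h)
    rhs = A @ Sigma(t) + Sigma(t) @ A.T + Q
    assert np.allclose(dS, rhs, atol=1e-8)
```

Note also that Sigma(0) = 0, consistent with the Dirac-delta initial condition.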
However, some interesting properties of the moments may be observed algebraically in the following, and graphically in Figures 10 through 13.

Figure 5: Exact solution p(x, y, t) with the Dirac-delta initial condition, plotted at t = 0.01, 0.05, 0.1, 0.2, 0.5 and 1.0.

Figure 6: Exact moments (orders zero through three) in the y-direction at t = 0.1, as functions of x, with the Dirac-delta initial condition.

If the initial condition is an even function of x, then the even order moments remain even in x while the odd order moments remain odd, for any positive t. Thus

    (d/dx) M_{2n}(0, t) = 0  and  M_{2n+1}(0, t) = 0.

The equation is given in the whole space. In theory, it can be transformed into an equation defined on a finite region, as discussed in the previous sections. However, for practical numerical purposes, we choose to restrict the equations to a finite interval in the x-direction so that the finite difference approximation in x can be made. An additional boundary condition has to be implemented as a cut-off of the solution defined on the whole space. Given any finite positive time, the exact solution decays at infinity; thus, setting the solution to zero for large values of x is a reasonable approximation. Naturally, the accuracy of our numerical scheme is largely affected by the cut-off interval that we pick for x. This will be explained later through graphic display.

Given a grid in the x-direction, we typically choose two to three nodes in the y-direction for each x grid point. The length of the cut-off interval in x ranges from [-2, 2] to [-6, 6], and the number of grid points in x ranges from 20 to 60. The time step-size is often between 10^{-5} and 10^{-2}, and the length of the time interval is on the order of 1 to 20.
The graphs of the exact and numerical moments as functions of x are given in Figures 10-13. In Figure 10, the moments are computed on the interval x in [-2.0, 2.0] at t = 5.21. For the same value of t, the moments computed on the interval x in [-4.0, 4.0] are given in Figure 11. By comparing the exact solution with the numerical solutions, we see that the larger the cut-off interval is, the longer the numerical solution remains a good approximation. Similar pictures for moments computed on the interval x in [-4.0, 4.0] and the interval x in [-6.0, 6.0] at t = 12.0 are given in Figures 12 and 13, which substantiate the above claim. This is intuitively clear, as demonstrated by the graphs of the exact solution. Finally, the zeroth order exact moments, numerical moments and the errors at t = 12 with 40 grid points in x and two nodes in y are presented in Tables 12 and 13.

Figure 7: Exact solution g(x, y, 0.1), x in [-2.0, 2.0].

Figure 8: Exact solution g(x, y, 5.21), x in [-2.0, 2.0] and x in [-4.0, 4.0].

Figure 9: Exact solution g(x, y, 12.0), x in [-4.0, 4.0] and x in [-6.0, 6.0].

Figure 10: Comparison of the exact and numerical moments (orders zero through three) when t = 5.21, x in [-2.0, 2.0].
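Since g stays Gaussian, its zeroth y-moment is the Gaussian x-marginal, M_0(x, t) = exp(-x^2 / (2 s^2(t))) / sqrt(2 pi s^2(t)) with s^2(t) = 1/2 + Sigma_xx(t) for the N(0, 1/2) initial x-distribution. The sketch below, which uses the linear-SDE covariance as an assumption, reproduces the exact zeroth moments reported at t = 12 in Tables 12 and 13.

```python
import numpy as np

def sigma_xx(t):
    """x-variance of the transition density started from the origin."""
    return t / 4 - 3 / 16 + np.exp(-2 * t) / 4 - np.exp(-4 * t) / 16

def M0(x, t):
    """Zeroth y-moment = x-marginal; initial x-variance 1/2 from u0 ~ exp(-x^2)."""
    s2 = 0.5 + sigma_xx(t)
    return np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# Compare with the 'exact moments' column of Tables 12 and 13 at t = 12.
assert abs(M0(0.0, 12.0) - 0.2191957464) < 1e-5
assert abs(M0(-4.0, 12.0) - 0.0195870926) < 1e-5
```

The symmetry M_0(x, t) = M_0(-x, t), visible in Tables 12 and 13, is immediate from this formula.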
Figure 11: Comparison of the exact and numerical moments (orders zero through three) when t = 5.21, x in [-4.0, 4.0].

Figure 12: Comparison of the moments when t = 12.0, x in [-4.0, 4.0].

Figure 13: Comparison of the moments when t = 12.0, x in [-6.0, 6.0].

    x      exact moments           numer. moments          error
    -4.0   0.01958709263951136     0.                       0.01958709263951136
    -3.8   0.02478757472874219     0.04014208886942286     -0.01535451414068067
    -3.6   0.03099229840377004     0.01934606081985256      0.01164623758391748
    -3.4   0.03828505035179119     0.04738628312217605     -0.00910123277038486
    -3.2   0.04672618773768949     0.04024364299446147      0.006482544743228021
    -3.0   0.05634393388840611     0.06157565114840212     -0.005231717259996015
    -2.8   0.06712583040039726     0.06380420000341576      0.003321630396981495
    -2.6   0.07901105596761341     0.0819711045936535      -0.002960048626040101
    -2.4   0.0918843928912383      0.0903606037010086       0.001523789190229691
    -2.2   0.1055726258102865      0.1072242303532511      -0.001651604542964605
    -2.0   0.1198440800758245      0.1192245427953118       0.0006195372805126582
    -1.8   0.1344118436157715      0.1352757364472455      -0.000863892831474034
    -1.6   0.1489409704215598      0.1486639552648338       0.0002770151567260681
    -1.4   0.1630596514241039      0.1633949452912657      -0.0003352938671617733
    -1.2   0.1763739857375736      0.1761117785044321       0.0002622072331415204
    -1.0   0.1884856268586013      0.1884167533216087       0.00006887353699266963
    -0.8   0.1990112541285562      0.1986229428879876       0.0003883112405686506
    -0.6   0.2076025693122749      0.2072437597245973       0.0003588095876776165
    -0.4   0.213965375825513       0.2134385613519516       0.0005268144735613712
    -0.2   0.2178762877388823      0.2173596789576086       0.0005166087812737419

Table 12: At t = 12.0, the zeroth order exact and numerical moments and the errors (x = -4.0 -> -0.2).

    x      exact moments           numer. moments          error
    0.0    0.219195746475355       0.2186121798330846       0.0005835666422704112
    0.2    0.2178762877388823      0.2173596789576086       0.0005166087812737419
    0.4    0.213965375825513       0.2134385613519516       0.0005268144735613712
    0.6    0.2076025693122749      0.2072437597245973       0.0003588095876776165
    0.8    0.1990112541285562      0.1986229428879876       0.0003883112405686506
    1.0    0.1884856268586013      0.1884167533216087       0.00006887353699266963
    1.2    0.1763739857375736      0.1761117785044321       0.0002622072331415204
    1.4    0.1630596514241039      0.1633949452912657      -0.0003352938671617733
    1.6    0.1489409704215598      0.1486639552648338       0.0002770151567260681
    1.8    0.1344118436157715      0.1352757364472455      -0.000863892831474034
    2.0    0.1198440800758245      0.1192245427953118       0.0006195372805126582
    2.2    0.1055726258102865      0.1072242303532511      -0.001651604542964605
    2.4    0.0918843928912383      0.0903606037010086       0.001523789190229691
    2.6    0.07901105596761341     0.0819711045936535      -0.002960048626040101
    2.8    0.06712583040039726     0.06380420000341576      0.003321630396981495
    3.0    0.05634393388840611     0.06157565114840212     -0.005231717259996015
    3.2    0.04672618773768949     0.04024364299446147      0.006482544743228021
    3.4    0.03828505035179119     0.04738628312217605     -0.00910123277038486
    3.6    0.03099229840377004     0.01934606081985256      0.01164623758391748
    3.8    0.02478757472874219     0.04014208886942286     -0.01535451414068067
    4.0    0.01958709263951136     0.                       0.01958709263951136

Table 13: At t = 12.0, the zeroth order exact and numerical moments and the errors (x = 0.0 -> 4.0).

10 Discussions and Concluding Remarks

In the previous sections, we have developed, analyzed and implemented the Gauss-Galerkin/finite difference method for two-dimensional Fokker-Planck equations. Theoretical analysis suggests that the method is applicable to a wide range of equations arising in nonlinear oscillations. Numerical tests show that the method is practical and very accurate, especially in dealing with solutions that exhibit delta-measure-like singularities.

A number of important and interesting issues remain to be studied in the future. Among them are various theoretical questions concerning the properties of two-dimensional Fokker-Planck equations and the Gauss-Galerkin/finite difference approximation, such as whether the solutions develop singularities and whether the Gauss-Galerkin nodes would collide with each other in finite time. On the numerical side, we have presented only one possible approach to the two-dimensional Fokker-Planck equations. In the following, we remark on some alternatives.

10.1 Possible generalization of Gauss-Galerkin methods

The Gauss-Galerkin/finite difference method we have analyzed and implemented takes a fixed given direction in which to apply the difference approximation and another fixed direction in which to apply the Gauss-Galerkin approximation.
For some problems, the solution might pile up in both directions, and a genuinely two-dimensional generalization of the Gauss-Galerkin method might be very attractive. It could be performed by introducing an alternating direction approach: at a given time step, if the Gauss-Galerkin method is applied in the x-direction, then it is applied in the y-direction at the next time step.

10.2 Simulation of nonlinear oscillation models

The test problems we have studied are limited to the Fokker-Planck equations associated with linear oscillation models, for which the behavior of the exact solutions is more or less clear. The code, however, is written in a general form that allows variable coefficients and is thus applicable to the Fokker-Planck equations associated with nonlinear oscillation models. Test problems for nonlinear oscillations and comparisons of this method with other numerical methods will be carried out in the future.

10.3 Other applications

Stochastic equations and their Fokker-Planck equations appear often in other applications, such as population genetics and mathematical finance. How Gauss-Galerkin methods should be applied in those exciting new areas remains to be explored in the future.

Appendix

Figure 14: Flow chart for the Gauss-Galerkin/finite difference method: initial moments computed by Simpson's rule; prediction of the moments; recovery of the weights and nodes from the moments (LAPACK eigensolver); correction of the moments; march to the next time step.

References

[1] Abrouk, N.E., Some Numerical Methods for Singular Diffusions Arising in Genetics, Ph.D. thesis, Department of Statistics and Probability, Michigan State University, 1992.

[2] Caughey, T.K., Nonlinear theory of random vibrations, in Advances in Applied Mechanics, Vol. 11, pp. 209-250, Academic Press, 1971.

[3] Chung, K.L., A Course in Probability Theory, 2nd ed., Academic Press, 1974.
[4] Crandall, S.H., Random excitation of nonlinear systems, Random Vibrations, Vol. 2, pp. 85-101, The M.I.T. Press, 1963.

[5] Dawson, D.A., Galerkin approximation of nonlinear Markov processes, Statistics and Related Topics, pp. 317-339, North-Holland, 1981.

[6] Hajjafar, A., H. Salehi, and D.H.Y. Yen, On a class of Gauss-Galerkin methods for Markov processes, Proceedings of the Workshop on Nonstationary Stochastic Processes and Their Applications, August 1-2, 1991, pp. 126-146, World Scientific, 1992.

[7] Harrison, G.W., Numerical solution of the Fokker-Planck equation using moving finite elements, Numerical Methods for Partial Differential Equations, Vol. 4, pp. 219-232, 1988.

[8] Il'in, A.M., and R.Z. Khasminskii, On equations of Brownian motion, Theory of Probability and its Applications, Vol. 9, pp. 421-444, 1964.

[9] Khasminskii, R.Z., Ergodic properties of recurrent diffusion processes and stabilization of the solution to the Cauchy problem for parabolic equations, Theory of Probability and its Applications, Vol. 5, pp. 179-196, 1960.

[10] Kushner, H.J., Stochastic Stability and Control, Academic Press, 1967.

[11] Kushner, H.J., The Cauchy problem for a class of degenerate parabolic equations and asymptotic properties of the related diffusion processes, J. of Differential Equations, Vol. 6, pp. 209-231, 1969.

[12] Loeve, M., Probability Theory, 3rd ed., University Series in Higher Mathematics, Van Nostrand Reinhold, 1974.

[13] Smith, G.D., Numerical Solution of Partial Differential Equations, 2nd ed., Oxford University Press, 1985.

[14] Stoer, J. and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1983.

[15] Shohat, J.A. and J.D. Tamarkin, The Problem of Moments, Mathematical Surveys, No. 1, AMS, 1963.

[16] Yosida, K., Functional Analysis, Springer-Verlag, 1985.