THESIS

This is to certify that the dissertation entitled

A NONLINEAR MULTISTEP METHOD FOR SOLVING STIFF INITIAL VALUE PROBLEMS

presented by Moody Ten-Chao Chu has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mathematics.

Major professor

Date 5/12/82


A NONLINEAR MULTISTEP METHOD FOR SOLVING STIFF INITIAL VALUE PROBLEMS

By

Moody Ten-Chao Chu

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mathematics

1982


ABSTRACT

A NONLINEAR MULTISTEP METHOD FOR SOLVING STIFF INITIAL VALUE PROBLEMS

By

Moody Ten-Chao Chu

A nonlinear multistep method which bears the nature of the classical Adams-Bashforth-Moulton PC formula is developed to solve stiff initial value problems of the form y' = Ay + g(x,y). It is shown that this method has the properties of consistency, convergence, and A-stability in the sense of Dahlquist. Several newly developed numerical techniques have been incorporated into this algorithm. A detailed analysis of its structure is also presented, enabling an implementation in which the step size and the order are adjusted automatically.
Numerical results from extensive tests of a PECE mode of this method show its efficiency and several advantages: no Jacobian evaluation is needed, much larger step sizes can be used, and only a few matrix inversions are involved.


TO MY PARENTS AND MY WIFE


ACKNOWLEDGMENTS

I want to express my deep gratitude to Professor Tien-Yien Li, my thesis advisor, for all his patience, guidance, and encouragement during the course of my research. His expert advice, useful insight, and stimulating discussions made this work possible.

I also want to thank my wife, Joyce, for her care, patience, and understanding displayed throughout the duration of my graduate studies.

Finally, I am deeply grateful to my parents, from whom I learned the virtues of wisdom and faith.


TABLE OF CONTENTS

Chapter                                                          Page
1. Introduction ................................................... 1
2. Basic Formulation --- Fixed Step Scheme ........................ 6
   2.1. PECE Mode ................................................. 6
   2.2. Major Theorems ............................................ 9
        2.2.1. Consistency ........................................ 9
        2.2.2. Convergence ........................................ 11
        2.2.3. A-stability ........................................ 14
   2.3. Numerical Evaluations ..................................... 15
        2.3.1. Computation of Matrix Exponentials ................. 15
        2.3.2. Computations of Integrals Involving
               Matrix Exponentials ................................ 17
3. Efficient Implementation --- Variable Step Scheme .............. 22
   3.1. Generating Coefficients ................................... 22
   3.2. Updating Divided Differences .............................. 29
   3.3. Modified PECE Mode ........................................ 31
   3.4. Advantages ................................................ 31
4. Error Analysis ................................................. 34
   4.1. Error Estimation .......................................... 34
        4.1.1. Local Error for (k,k)-order Mode ................... 35
        4.1.2. Local Error for Lower Order Mode ................... 40
        4.1.3. Local Error for (k+1,k+1)-order Mode ............... 42
   4.2. Automatic Control ......................................... 44
        4.2.1. Acceptance Criterion ............................... 44
        4.2.2.
               Error Prediction for Fixed Step Scheme ............ 45
        4.2.3. Order Selection .................................... 47
        4.2.4. Step Size Selection ................................ 48
        4.2.5. Starting Phase ..................................... 49
5. Programming Flowcharts ......................................... 50
   5.1. Block 1 -- Preparation of Coefficients .................... 51
   5.2. Block 2 -- Prediction and Error Estimation ................ 52
   5.3. Block 3 -- Restart of Step ................................ 52
   5.4. Block 4 -- Correction and Automatic Adjustment ............ 53
6. Numerical Examples ............................................. 54
   6.1. Homogeneous Linear System with Complex Eigenvalues ........ 54
   6.2. Nonhomogeneous Linear System with Real Eigenvalues ........ 56
   6.3. Nonhomogeneous Linear System with Complex Eigenvalues ..... 58
   6.4. Nonhomogeneous Linear System with Variable
        Coefficients and Real Nonconstant Eigenvalues ............. 60
   6.5. Nonlinear System with Coupling Components ................. 62
        6.5.1. Coupling from Transient Component
               to Smooth Components ............................... 62
        6.5.2. Coupling from Smooth Component
               to Transient Components ............................ 64
   6.6. Nonlinear System with Real Eigenvalues .................... 65
   6.7. Nonlinear System with Real Mixed Type Eigenvalues ......... 67
   6.8. Nonlinear System with Complex Mixed Type Eigenvalues ...... 70
7. Conclusion and Recommendation .................................. 73
List of References ................................................ 75


1. INTRODUCTION

Conventional linear multistep methods have been recognized as among the most effective ways to solve a general initial value problem

    y' = f(x,y);  y(a) = y_0;  x in [a,b] .                        (1.1)

In the last two decades several very sophisticated and reliable codes [12,17,28], based on a variable-step variable-order formulation of the classical Adams method, have been established so that (1.1) can be solved both easily and cheaply.
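To make the classical scheme concrete, a fixed-step second-order Adams-Bashforth-Moulton pair in PECE mode can be sketched as follows. This is a minimal illustration only, not taken from the codes cited above; the function name and the convention of supplying one extra starting value are our own.

```python
import math

def abm2_solve(f, x0, y0, h, n, y_prev):
    """Integrate y' = f(x, y) from x0 over n steps of size h with the
    classical fixed-step Adams-Bashforth-Moulton pair of order 2 in
    PECE mode.  A multistep method cannot start itself, so the solution
    value y_prev at x0 - h is supplied externally."""
    x, y = x0, y0
    f_old = f(x0 - h, y_prev)                    # f_{n-1}
    for _ in range(n):
        f_n = f(x, y)                            # evaluate at the corrected value
        y_p = y + h * (3.0 * f_n - f_old) / 2.0  # P: Adams-Bashforth predictor
        f_p = f(x + h, y_p)                      # E: evaluate at the prediction
        y = y + h * (f_n + f_p) / 2.0            # C: Adams-Moulton (trapezoidal) corrector
        f_old = f_n
        x += h
    return y
```

On a smooth problem such as y' = -y the global error behaves like O(h^2): halving the step size cuts the error roughly by a factor of four.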
Nevertheless, when applied to stiff systems, the efficiency of these codes becomes very limited, because impractically small step sizes must be adopted to ensure the stability [12,19] of the approximate solutions. This difficulty is inherent in the method itself [7] and hence cannot be overcome by any improvement in computer capacity. On the other hand, most of the methods used for solving stiff systems are implicit of necessity [7,19,29] and hence demand the use of some Newton-like iterations [13], which usually are very expensive in Jacobian evaluations.

The object of this dissertation is to develop a method which, inheriting all the merits of the classical predictor-corrector schemes and being A-stable [7], does not have the difficulties mentioned above when applied to stiff systems.

The search for effective methods to solve stiff systems has received considerable attention over the past two decades. Brief surveys of the literature can be found in Shampine and Gear [29], Enright, Hull and Lindberg [9], Willoughby [37], and Bjurel, Dahlquist, Lindberg and Linde [2].

In what follows we shall consider the stiff initial value problem of the form

    y' = Ay + g(x,y);  y(a) = y_0,  x in [a,b]                     (1.2)

where A is an n x n matrix.


Proof: From the definition of local truncation errors, we know

    y(x_{n+1}) = e^{Ah} y(x_n) + h \sum_{i=1}^{k} \tilde{Q}_{ki} g(x_{n-i+1}, y(x_{n-i+1})) + \mathcal{L}^p(y(x_n), h) ,

    y(x_{n+1}) = e^{Ah} y(x_n) + h \sum_{i=0}^{k} \tilde{Q}^*_{ki} g(x_{n-i+1}, y(x_{n-i+1})) + \mathcal{L}(y(x_n), h) .    (2.2.2.3)

So the mean value theorem implies that the global error at x_{n+1},

    \tilde{e}_{n+1} = y(x_{n+1}) - y_{n+1} ,    (2.2.2.4)

is given by

    \tilde{e}_{n+1} = e^{Ah} \tilde{e}_n
      + h \sum_{i=1}^{k} \tilde{Q}^*_{ki} \frac{\partial g}{\partial y}(x_{n-i+1}, \xi_{n-i+1}) \tilde{e}_{n-i+1}
      + h \tilde{Q}^*_{k0} \frac{\partial g}{\partial y}(x_{n+1}, \xi_{n+1}) \Big[ e^{Ah} \tilde{e}_n
      + h \sum_{i=1}^{k} \tilde{Q}_{ki} \frac{\partial g}{\partial y}(x_{n-i+1}, \bar{\xi}_{n-i+1}) \tilde{e}_{n-i+1} \Big] + \delta_n ,    (2.2.2.5)

where \delta_n = h \tilde{Q}^*_{k0} \frac{\partial g}{\partial y}(x_{n+1}, \xi_{n+1}) \mathcal{L}^p(y(x_n), h) + \mathcal{L}(y(x_n), h), and \xi_{n-i+1} lies between y_{n-i+1} and y(x_{n-i+1}) for i > 0, and between P_{n+1} and y(x_{n+1}) for i = 0. By the conditions on A and \partial g/\partial y, we arrive at
    ||\tilde{e}_{n+1}|| <= ||\tilde{e}_n|| + h L_g \sum_{i=0}^{k} ||\tilde{Q}^*_{ki}|| \, ||\tilde{e}_{n-i+1}||
        + h^2 L_g^2 ||\tilde{Q}^*_{k0}|| \sum_{i=1}^{k} ||\tilde{Q}_{ki}|| \, ||\tilde{e}_{n-i+1}|| + \hat{\delta} ,    (2.2.2.6)

where \hat{\delta} is a bound for ||\delta_n||. Define \lambda = 1 + h L_g a^* + h^2 L_g^2 ||\tilde{Q}^*_{k0}|| a, with a^* = \sum_{i=0}^{k} ||\tilde{Q}^*_{ki}|| and a = \sum_{i=1}^{k} ||\tilde{Q}_{ki}||, and E_i = \lambda E_{i-1} + \hat{\delta}; then obviously E_{i-1} < E_i for each i. By induction, (2.2.2.6) implies that ||\tilde{e}_j|| <= E_n for j = 0, ..., n. Since E_n = \lambda^n E_0 + ((\lambda^n - 1)/(\lambda - 1)) \hat{\delta} and \tilde{e}_0 = 0, the inequality (2.2.2.1) follows immediately from the facts that 1 + x < e^x and nh = x_n - a.

In particular, if g \in C^{k+1}, then the theorem of consistency implies \hat{\delta} = O(h^{k+2}).

The above two theorems justify that the nonlinear multistep method, such as the mode (2.1.8), has the consistency and convergence properties. For computational purposes, we also need the feature of A-stability.

2.2.3. A-STABILITY

Dahlquist [7] defined a numerical method to be A-stable if its region of absolute stability contains the whole of the left half-plane. Equivalently, a method is A-stable if all numerical solutions y_n tend to zero asymptotically as n -> infinity when it is applied to the differential equation y' = Ay, where all eigenvalues of A have negative real parts. To prove the A-stability property of our algorithm, we approximate each matrix exponential in (2.1.2) by its corresponding Padé approximation. Recall the following definition [33].

Definition 2.4. A (p,q)-pair Padé approximation to the matrix exponential e^B is the matrix

    R_{pq}(B) = [D_{pq}(B)]^{-1} N_{pq}(B)

where

    N_{pq}(z) = \sum_{j=0}^{p} \frac{(p+q-j)! \, p!}{(p+q)! \, j! \, (p-j)!} z^j ,
    D_{pq}(z) = \sum_{j=0}^{q} \frac{(p+q-j)! \, q!}{(p+q)! \, j! \, (q-j)!} (-z)^j .    (2.2.3.1)

It is known [1,4,34,36] that the diagonal Padé approximation E(B) of e^B is unconditionally stable, provided that all eigenvalues of B have negative real parts. Therefore we have the following theorem.

Theorem 2.3. The nonlinear multistep scheme (2.1.2) based on diagonal Padé approximations to all its matrix exponentials is A-stable.

Proof: When applied to any test problem

    y' = Ay;  y(a) = y_0 ,

the scheme (2.1.2) yields the values y_{n+1} = E(Ah) y_n = [E(Ah)]^{n+1} y_0, since g(x,y) is identically zero. Because all eigenvalues of A have negative real parts,
the unconditional stability of E(Ah) implies that lim_{n -> infinity} y_n = 0. This establishes the A-stability of the scheme (2.1.2).

2.3. NUMERICAL EVALUATIONS

In this section we shall describe some special numerical techniques for the evaluation of the matrix exponentials and the matrix coefficients.

2.3.1. COMPUTATIONS OF MATRIX EXPONENTIALS

The exponential of a matrix can be computed in many ways [27]. For our purpose we consider only the rational approximations, because of their direct applicability. Nevertheless, for stiff systems, direct application of a rational approximation to the matrix exponential gives very poor results, as shown in the contour plot of square errors by Blue and Gummel [3]. In fact, the error in approximating the matrix exponential by diagonal Padé table entries has been shown [10] to increase as the norm of the matrix increases. Therefore the first objective of any algorithm involving Padé approximations is to decrease the matrix norm. Since exponentials satisfy e^a = (e^{a/b})^b, one may always use this property to reduce the matrix norm and improve the accuracy of the approximation. Ward [33,35] has given an error estimate for diagonal Padé approximations of order up to p = 15 and made it possible to return to the user the minimum number of digits of accuracy in the norm. The algorithm for computing e^A can be summarized as follows:

(i)    Compute the mean of the eigenvalues, \bar{\lambda} = (\sum_{i=1}^{n} a_{ii})/n.
(ii)   Make the translation A_1 = A - \bar{\lambda} I.
(iii)  Balance the matrix A_1 to obtain A_2 = D^{-1} P^T A_1 P D, for which ||A_2||_1 = min_{D \in \mathcal{B}} ||D^{-1} A_1 D||_1, where \mathcal{B} is the set of all nonsingular diagonal matrices with entries integral powers of the machine base \beta, and P is some preliminary permutation matrix.
(iv)   If ||A_2|| <= 1, set A_3 = A_2, m = 0, and go to (vi).
(v)    Find an integer m > 0 such that ||A_2|| <= 2^m. Then define A_3 = A_2 / 2^m.
(vi)   Use the Padé diagonal table to compute e^{A_3}.
(vii)  If (v) was skipped, go to (ix).
(viii) Square e^{A_3} m times, i.e.
compute (e^{A_3})^{2^m}.
(ix)   Compute e^A = e^{\bar{\lambda}} P D (e^{A_3})^{2^m} D^{-1} P^T.

This algorithm works quite satisfactorily in general. However, there are some numerical "barriers" where care should be taken. A good source for this material is Moler and Van Loan [27].

2.3.2. COMPUTATIONS OF INTEGRALS INVOLVING MATRIX EXPONENTIALS

To compute the matrix coefficients defined in (2.1.7), it is necessary to compute the following integrals:

    M_0 = \int_0^1 e^{A(1-\alpha)h} d\alpha ,
    M_i = \int_0^1 e^{A(1-\alpha)h} \alpha^i d\alpha ,  i = 1, 2, ... .    (2.3.2.1)

Observe that these matrices satisfy the recursive relation

    hA M_i = i M_{i-1} - I .    (2.3.2.2)

If A^{-1} exists, then M_0 = (Ah)^{-1}(e^{Ah} - I), and all the other M_i's can be obtained from (2.3.2.2). But for stiff systems the condition number k(A) >= max_i |\lambda_i| / min_i |\lambda_i| is large. The a priori error estimate

    \frac{||A_c^{-1} - A^{-1}||}{||A^{-1}||} <= k(A) \frac{||A_c - A||}{||A||}    (2.3.2.3)

indicates that the computed inverse A_c^{-1} might be in large relative error. Furthermore, these M_i's still exist even if A is singular. So we introduce two methods to obtain them.

The first method works for a normal matrix A and involves its generalized inverse matrix A^+. Recall that the matrix A is normal if and only if A^* A = A A^*, where A^* is its corresponding adjoint matrix. Observe that M_0 satisfies the linear system

    (Ah) X = e^{Ah} - I .    (2.3.2.4)

Let B = Ah, and let the singular value decomposition of B be

    B = V [ \Sigma  0 ;  0  0 ] U^* ,

where V and U are unitary matrices formed from eigenvectors of BB^* and B^*B, respectively; \Sigma = diag(\lambda_1, ..., \lambda_r), where the \lambda_i != 0 for i = 1, ..., r are the nonzero singular values of B (so that the \lambda_i^2 are the nonzero eigenvalues of B^*B); and v_i = (1/\lambda_i) B u_i for i = 1, ..., r, where the v_i and u_i are the column vectors of V and U, respectively. It is known that

    B^+ = U [ \Sigma^{-1}  0 ;  0  0 ] V^*

is the generalized inverse of B and that

    \tilde{X}_0 = B^+ (e^{Ah} - I)    (2.3.2.5)

is a least-Frobenius-norm solution of (2.3.2.4).

Theorem 2.4. If B is normal, then

    M_0 = \tilde{X}_0 + (I - B^+ B) .    (2.3.2.6)

Proof: Since B is normal, it is known that Bu = \lambda u if and only if B^* u = \bar{\lambda} u. This implies B^* B u = |\lambda|^2 u and B B^* u = |\lambda|^2 u.
So the same orthonormal basis u_1, ..., u_n which diagonalizes B into D = diag(\lambda_1, ..., \lambda_r, 0, ..., 0) can also serve as an orthonormal basis for both B^*B and BB^*. Hence B has a singular value decomposition B = U [ \Sigma 0 ; 0 0 ] U^* and a generalized inverse B^+ = U [ \Sigma^{-1} 0 ; 0 0 ] U^*. Denote U by [U_1, U_2], where U_1 = [u_1, ..., u_r] and U_2 = [u_{r+1}, ..., u_n]; then

    M_0 = \int_0^1 e^{B(1-\alpha)} d\alpha = U \Big( \int_0^1 e^{D(1-\alpha)} d\alpha \Big) U^*
        = [U_1, U_2] [ \Sigma^{-1}(e^\Sigma - I)  0 ;  0  I ] [ U_1^* ; U_2^* ]
        = U_1 \Sigma^{-1}(e^\Sigma - I) U_1^* + U_2 U_2^* .    (2.3.2.7)

On the other hand,

    \tilde{X}_0 = B^+ (e^B - I) = U D^+ U^* U (e^D - I) U^* = U D^+ (e^D - I) U^*
                = U_1 \Sigma^{-1}(e^\Sigma - I) U_1^* .    (2.3.2.8)

So (2.3.2.6) follows from (2.3.2.7), (2.3.2.8) and the identity

    I - B^+ B = I - U D^+ U^* U D U^* = I - U D^+ D U^* = U_2 U_2^* .    (2.3.2.9)

Corollary 2.1. If B is normal, then

    M_n = B^+ (n M_{n-1} - I) + \frac{1}{n+1} (I - B^+ B) .    (2.3.2.10)

Proof: By the definition (2.3.2.1) and the identity (2.3.2.9), we see

    M_n = \int_0^1 e^{B(1-\alpha)} \alpha^n d\alpha = U \Big( \int_0^1 e^{D(1-\alpha)} \alpha^n d\alpha \Big) U^*
        = U [ \Sigma^{-1} ( n \int_0^1 e^{\Sigma(1-\alpha)} \alpha^{n-1} d\alpha - I )  0 ;  0  \frac{1}{n+1} I ] U^*
        = B^+ (n M_{n-1} - I) + \frac{1}{n+1} (I - B^+ B) .

The second method works for a general matrix A and involves its Drazin inverse matrix A^D. We first make two definitions.

Definition 2.6. The index of a matrix A is the smallest non-negative integer k such that rank A^k = rank A^{k+1}.

Definition 2.7. The Drazin inverse of A, denoted by A^D, is defined as the matrix A^D = P [ C^{-1} 0 ; 0 0 ] P^{-1}, provided A = P [ C 0 ; 0 N ] P^{-1}, where C is an invertible submatrix and N is nilpotent of index k.

Note that there are a variety of ways to calculate A^D [5]. In fact, the following lemma can be regarded as the definition of A^D from the algebraic point of view.

Lemma 2.1. If the index of A is k, then A^D is the only matrix satisfying the following properties:

    (i)   A^D A A^D = A^D ,
    (ii)  A A^D = A^D A ,    (2.3.2.11)
    (iii) A^{k+1} A^D = A^k .

Proof: The proof can be found in [5].

Theorem 2.5. If the index of A is k, then

    M_0 = B^D (e^B - I) + (I - B^D B) \sum_{i=0}^{k-1} \frac{B^i}{(i+1)!} .    (2.3.2.12)

Proof:
Using the series expansion for e^{Bs} and the above lemma, one can show that

    (I - B^D B) e^{Bs} = (I - B^D B) \sum_{i=0}^{k-1} \frac{(Bs)^i}{i!}  and  \frac{d}{ds} \big[ B^D e^{Bs} \big] = B^D B e^{Bs} ,

and (2.3.2.12) follows by integrating the decomposition e^{B(1-\alpha)} = B^D B e^{B(1-\alpha)} + (I - B^D B) e^{B(1-\alpha)} term by term.

Corollary 2.2. If the index of A is k, then

    M_n = B^D (n M_{n-1} - I) + n! \, (I - B^D B) \sum_{i=0}^{k-1} \frac{B^i}{(n+i+1)!} .    (2.3.2.13)

3. EFFICIENT IMPLEMENTATION ---- VARIABLE STEP SCHEME

3.1. GENERATING COEFFICIENTS

The subroutine STEP by Shampine and Gordon [28] gives an efficient way of generating coefficients for the linear multistep method, together with a substantial saving of memory storage. In this section we reformulate their settings in matrix form and hence preserve all the advantages. We define the following quantities:

    h_i = x_i - x_{i-1} ,
    s = (x - x_n)/h_{n+1} ,
    \psi_i(n+1) = h_{n+1} + ... + h_{n-i+2} ,  i >= 1 ,
    \alpha_i(n+1) = h_{n+1}/\psi_i(n+1) ,  i >= 1 ,    (3.1.1)
    \beta_1(n+1) = 1 ,
    \beta_i(n+1) = \frac{\psi_1(n+1) \psi_2(n+1) ... \psi_{i-1}(n+1)}{\psi_1(n) \psi_2(n) ... \psi_{i-1}(n)} ,  i > 1 ,
    \phi_1(n) = g_n ,
    \phi_i(n) = \psi_1(n) ... \psi_{i-1}(n) \, g[x_n, x_{n-1}, ..., x_{n-i+1}] ,  i > 1 .

Recall that

    P_{n+1} = e^{A h_{n+1}} y_n + \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} P_{k,n}(\tau) d\tau ,    (3.1.2)

where P_{k,n} is a polynomial of degree < k and P_{k,n}(x_{n-i+1}) = g_{n-i+1} for i = 1, ..., k. Since the interpolating polynomial is given by

    P_{k,n}(x) = g[x_n] + (x - x_n) g[x_n, x_{n-1}] + ... + (x - x_n) ... (x - x_{n-k+2}) g[x_n, ..., x_{n-k+1}] ,    (3.1.3)

a typical term, for i >= 2, of P_{k,n}(x) can be written as

    (x - x_n) ... (x - x_{n-i+2}) g[x_n, ..., x_{n-i+1}]
      = \frac{(s h_{n+1})(s h_{n+1} + h_n) ... (s h_{n+1} + h_n + ... + h_{n-i+3})}{\psi_1(n) ... \psi_{i-1}(n)} \phi_i(n)    (3.1.4)
      = \frac{s h_{n+1}}{\psi_1(n+1)} \cdot \frac{s h_{n+1} + h_n}{\psi_2(n+1)} ... \frac{s h_{n+1} + \psi_{i-2}(n)}{\psi_{i-1}(n+1)} \, \beta_i(n+1) \phi_i(n) .

Introduce

    c_{i,n}(s) = 1 ,  i = 1 ;
    c_{i,n}(s) = \frac{s h_{n+1}}{\psi_1(n+1)} ,  i = 2 ;    (3.1.5)
    c_{i,n}(s) = \frac{s h_{n+1}}{\psi_1(n+1)} \cdot \frac{s h_{n+1} + h_n}{\psi_2(n+1)} ... \frac{s h_{n+1} + \psi_{i-2}(n)}{\psi_{i-1}(n+1)} ,  i >= 3 ,
and Ah k l _ n+1 * Pn+1 - e Yn+hn+l 151 ([0 di'n(S)dS)cpi(n) . (3.1.11) For fixed n and i‘2 3. observe that 25 S )0 di.n(sO)dsO s . (n) 8O — ( +1) a-wl‘z ]d( d ( )d ) - )0 [Ci-l n 5‘o (‘i'—T_l(n )0 i-l,n S1 s1 )i-2(“) s = [ci_1(n+l)s+m] IO di-1.n(SO)dSO s 5l ..(O ai_l(n+1) f0 di_l,n(so)dsodsl . (3.1.12) Define (q) s Sq-l sl di'n(s) = [0 Jo -.- [O di'n(so)dsodslo-o dsq_l . (3.1.13) then (3.1.12) can be rewritten as (1) _ $1-2(n’ (1) di,n(s) — [Ci—1(n+l)s+“V;:ITH)] i-l.n(s) -oi_l(n+l)dizi’n(s) . (3.1.14) In fact, it is not hard to show, by induction and integration by parts. that for i‘2 3. the following is true: (q) _ )i-z‘n’ (q) di,n(s) - [oi_l(n+l)si-¢i-l(n) i-l.n(s) -qai_l(n+1)dififg(s).. (3.1.15) Scaling these quantities by defining wi'q = (q-1): d§?;(1) . (3.1.16) 26 we then arrive at one of the important identities u). =u). for 123 . (3.1.17) i.q i-l.q"ai-l(n+l)w i-1,q+1 Up to this point. all these quantities we have derived are formally the same as those by Shampine and Gordon [28] except that these are in matrix form. When A is identically zero. the quantities defined in (3.1.10). (3.1.12). (3.1.15) and (3.1.17) become diagonal matrices and. in fact. each diagonal matrix is of the form (:1 where the scalar c is that usedlnz Shampine and Gordon. In this case. the quantities for i = l and i = 2 can be handled trivially. However. if A is a matrix as stated in Problem (1.2). then for i = l. the definition (3.1.13) of difg(s) implies that A(l-s)h Ah (-Ah hn+l)d11n(s ) = e n+1--e n+1 . (3.1.18) and. by induction, for (3‘2 2. -1 Ah (q) _ (q,-l) sq n+1 (“Ahn+1)d1.n(3) - <11,n (s) --——(q_1). e . (3.1.19) Similarly. for i = 2. we have A(l-s)h (-Ahn+l)d(l)(s) = se n+1-dih)1(s) , (-Ahn+1)déqg(s ) = déqn1)(s)-d d(?)(s ) for q“; 2 . (3.1.20) Resorting to the definition (3.1.17). 
we obtain 27 Ah _ n+1 (_Ahn+l)wl.1 - I-e ' Ah _ n+1 (”Ahn+l)wl,q — (q-l)wl'q_l-e for q'z 2 , (3.1.21) (‘Ahn+1)w2,1 = I"“”1,1 ' (-Ahn+1)w2.q = (q'l)w2,q-1"w1.q f°r q‘2 2 Note that for stiff systems A is nonsingular, so ml q and wz q for every q‘Z 1 can be solved directly from (3.1.21). I once A-1 is known. But even if A is singular. the techniques described in section 2.3.2 can be used. Since d(l)(l) = w the scheme (3 1 11) becomes i.n i.l’ ' ' k Ah . * _ n+1 Z} w. m (n) . (3.1.22) Pm4"e Yh+hmu.r1 “”1 1 We now make a correction of this predicted value. Recall that Ah Xn+1 A(x T) n+1 n+l- * yn+1 = e yn+-[x e Pk+1.n(T)dT (3.1.23) n * where Pk+1 n(T) is a polynomial of degree < k4—l with ) = values * Pk+1,n * . Pk+l.n(xn-i+1 gn—i+l for l - 1'. O'k and ( Observe that Xn+1) = g(xn+l.Pn+l)' 28 * Pk+1,n(x) __ _ no. ... p ... _ pk'n(x)+ (x xn) (x Xn—k+l)g [Xn+1' 'Xn-k+l] P ‘Pku‘ml’ = Pk.n(x)1'(5hn+l) (Shn+l F '+hn-k+2) ¢l(n+l)"'¢k(n+l) = Pk'n(x)+ck+l'n(s)cp:+l(n+1) (3.1.24) where the superscript P is used to indicate that the value is replaced by g(x ). Substituting (3.1.24) gn+1 n+l'Pn+l into (3.1.23). we see +11 P yn+1 ' Pn+1 n+1 wk+1.l ”k+l(n+l) ' (3'1'25) We conclude this section by illustrating the inter- relation between the coefficients based on (3.1.17) and (3.1.21) for the case k = 3 in the following table where arrows emanate from the generators: 29 3.2. UPDATING DIVIDED DIFFERENCES At the current point Xn all the quantities ¢i(n) for i = 1.°°°.k are known. To complete the step P from xn to xn+1. we need to know ¢k+l(n+l) as indicated in (3.1.25). To prepare for the advance from we need to know all the x to the next point xn n+1 +2' quantities, mi(n+l) for i = l,--o.k. Moreover, as will be shown in the next chapter. the quantities ¢§(n+l) for i = l,°°°.k are also useful in the error estimations. In this section we explain how these divided differences can be updated. First of all. 
think of PR n in (3.1.3) as a polynomial of degree k determined by the conditions Pk,n(Xn-i+l) = gn-i+1 for 1 = 1'...’k and Pk,n(xn+l) _ . . . e Pk.n(xn+l)' The diVided difference mi+l(n+l) based on the above values at xn+1.xn,---.xn_k+1 is given by. ”1+l(n+l) = 000 e .0. (1(n+l) wi(n+1)g [Xn+l' .xn_i+1] = (1(n+1)"')i_1(n+l)(xn+1-Xn—i+l) e[x 00.x ] [x 00.x ] g n+1’ ' n-i+2 ’9 n' ’ n-i+1 x -x . n+1 n-i+l = (111(1'14'1) . . .wi_l(n+l)ge [xn+l' . . . 'Xn-i‘l'l] - ¢1(n+l)°.°wi-l(n+l) [1(n) ... ¢i_l(n) ”1(n) cp‘:(n+1)-Bi(n+1)cpi(n)=cpei(n+1)-cp§{(n) . (3.2.1) 30 This implies ' ~1- ¢:(n+l) = wi+l(n+1)-+wi(n) for i = l."‘.k ° (3°2°2) But Pk,n(x) actually is a polynomial of degree < k. The k-th order divided difference mi+l(n+l) = ¢l(n+1) °°'mk(n+)ge[xn+l.°°°.xn_k+l] is identically zero. Thus ¢:(n+l) for i = k. ”'31 can be generated according to (3.2.2). Exactly the same argument as in (3.2.1) shows the following two identities: (n+1) Cpi(n+l)-cp:(n) . c"1+1 (3.2.3) P * . cpi(n+1) -cpi(n) for i = l.'°'.k p Siam”) But then it follows. by taking difference between (3.2.1) and (3.2.3). that e _ e _ .°._ _ e cpi+l(n+1)-cpi+1(n+1)-cpi(n+1) -cpi(n+1)- -cp1(n+1) cpl(n+1) P e __ P e _ ...__ P e cpi+l(n+l) -cpi+l(n+l) —cpi(n+l) -cpi(n+l) — —cp1(n+1)-cpl(n+1) (3.2.4) So the quantities ¢§(n+l) and ¢i(n+l) for i = k,---,1 can now be generated from the following two important identities: P _ e P e ¢i(n+l) - ¢i(n+l)-+(gn+l-¢l(n+l)) (3.2.5) and 31 cpi(n+1) = cp:(n+1)+(gn+l-cp:(n+1)) . (3.2.6) 3.3. MODIFIED PECE MODE We now summarize the (k,k+l)-order PECE mode (2.1.5) which advances from xn to xn+1 as follows: Compute wi.l i = l.:--,k+1; * o P: cpi(n) = Bi(n+l)cpi(n) 1 = l. '".k . Ah k _ n+1 * Pn+l ' e yn+hn+1 1131 ”31.1 “91"” ' e — mk+l(n+1) - O e(n+1)= e (n+1)+ 1(n) i = k 1 - cpi cpi+l Cp1 ' ' ’ P — o E: gn+1 — g(xn+l'Pn+1) . (3.3.1) C' =P +11 (1) (P .. e(n+l)). ' yml nH. mnlelgmd Cpl ’ _ __ e cpk+1(n+l) ‘ gn+1 c91‘1““) ' _ e - _ ... cpi(n+1) — cpi(n+l) +cpk+l(n+1) , i — k. ,1 3.4. 
3.4. ADVANTAGES

In this section we describe briefly the advantages of the method given in the preceding three sections:

(i) Throughout the whole algorithm we need to compute A^{-1} (or the generalized inverse) once and for all, because

    (A h_{new})^{-1} = \frac{h_{old}}{h_{new}} (A h_{old})^{-1}

can be updated by scalar multiplications.

(ii) If the step succeeds and the step size is doubled, then e^{A h_{new}} = (e^{A h_{old}})^2 can be obtained by a matrix multiplication. Furthermore, any reasonable step size selection mechanism should not cause frequent reductions of the step size, so the overhead resulting from the computation of e^{Ah} is expected to be small.

(iii) When an implicit linear multistep scheme, usually given in the form

    y_{n+1} = \sum_{i=1}^{k} \alpha_i y_{n-i+1} + h \sum_{i=0}^{k} \beta_i f_{n-i+1}    (3.4.1)

with fixed constant step size h and constant coefficients \alpha_i and \beta_i, is applied to solve Problem (1.2), at each step one has to solve the nonlinear equation

    y_{n+1} = h \beta_0 f(x_{n+1}, y_{n+1}) + w ,    (3.4.2)

where f(x,y) = Ay + g(x,y) and w = h \sum_{i=1}^{k} \beta_i f_{n-i+1} + \sum_{i=1}^{k} \alpha_i y_{n-i+1}. Notice that w is a known quantity at the current stage. In order to apply the contraction mapping theorem, it is required that

    \kappa = h |\beta_0| L_f < 1 ,    (3.4.3)

where L_f is the Lipschitz constant of f. Meanwhile, when the implicit nonlinear multistep scheme (2.1.2) with constant step size \hat{h} is applied, the nonlinear equation to be solved becomes

    y_{n+1} = \hat{h} \tilde{Q}^*_{k0} g(x_{n+1}, y_{n+1}) + \hat{w} ,    (3.4.4)

where \hat{w} = \hat{h} \sum_{i=1}^{k} \tilde{Q}^*_{ki} g_{n-i+1} + e^{A\hat{h}} y_n. Again we require the condition

    \hat{\kappa} = \hat{h} ||\tilde{Q}^*_{k0}|| L_g < 1 .    (3.4.5)

Under the assumption that the iterative method is used to solve (3.4.2) and (3.4.4) with the same rate of convergence, i.e. \kappa = \hat{\kappa}, it is seen that

    \frac{\hat{h}}{h} = \frac{|\beta_0|}{||\tilde{Q}^*_{k0}||} \cdot \frac{L_f}{L_g} .    (3.4.6)

This shows that for Problem (1.2), where L_f is much greater than L_g, the step size \hat{h} for the nonlinear scheme can be chosen significantly larger than that of the linear scheme.
(iv) Since the equation (3.1.17) and all the quantities (3.1.1) are in exactly the same format as those in [28], except that the \omega_{i,q}'s are in matrix form, the \phi_i's are defined in terms of g, and the \omega_{1,q}'s and \omega_{2,q}'s are generated according to (3.1.21), all the efficient software designs of Shampine and Gordon, such as the substantial reduction in overhead and memory storage, can be perfectly transferred.

4. ERROR ANALYSIS

In this chapter we first discuss the estimates of the errors which occur during the numerical integration. Then we explain how those estimates can be used to control the step size and the order.

4.1. ERROR ESTIMATION

Let u_n(x) denote the exact solution to the problem

    u_n'(x) = A u_n(x) + g(x, u_n(x)) ,  u_n(x_n) = y_n .    (4.1.1)

Then the global error (2.2.2.4) can be split into

    e_{n+1} = y(x_{n+1}) - y_{n+1} = [y(x_{n+1}) - u_n(x_{n+1})] + [u_n(x_{n+1}) - y_{n+1}] .    (4.1.2)

We use the term "local error" to denote the second term of (4.1.2). Since y(x) and u_n(x) are solutions to the same differential equation with different initial values at x_n, we see that

    ||y(x) - u_n(x)|| <= || e^{A(x-x_n)} (y(x_n) - u_n(x_n)) ||
        + \int_{x_n}^{x} || e^{A(x-\tau)} [ g(\tau, y(\tau)) - g(\tau, u_n(\tau)) ] || d\tau .    (4.1.3)

Using the condition on A and the Gronwall inequality, we have

    ||y(x) - u_n(x)|| <= ||y(x_n) - u_n(x_n)|| e^{L_g (x - x_n)}  for every x >= x_n .    (4.1.4)

Given \epsilon > 0, let

    ||u_n(x_{n+1}) - y_{n+1}|| < h_{n+1} \epsilon .    (4.1.5)

Then (4.1.4) and (4.1.5) imply that

    ||\tilde{e}_{n+1}|| <= ||\tilde{e}_n|| e^{L_g h_{n+1}} + h_{n+1} \epsilon .    (4.1.6)

Repeating the inequality for i = 0, ..., n, since \tilde{e}_0 = 0, we obtain

    ||\tilde{e}_{n+1}|| <= \epsilon (x_{n+1} - x_0) e^{L_g (x_{n+1} - x_0)} ,    (4.1.7)

which is the same kind of bound as (2.2.2.1), there obtained by controlling the general "local truncation error". Therefore it is only necessary to control the local error as in (4.1.5).

4.1.1. LOCAL ERROR FOR (k,k)-ORDER MODE

We first analyze the local error of the predicted value. Equation (3.1.2) implies that

    u_n(x_{n+1}) - P_{n+1} = \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ g(\tau, u_n(\tau)) - P_{k,n}(\tau) ] d\tau .    (4.1.1.1)
Let \theta_{k,n}(\tau) be a polynomial of degree < k such that \theta_{k,n}(x_{n-i+1}) = g(x_{n-i+1}, u_n(x_{n-i+1})) for i = 1, ..., k; then (4.1.1.1) can be split into

    u_n(x_{n+1}) - P_{n+1} = \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ g(\tau, u_n(\tau)) - \theta_{k,n}(\tau) ] d\tau
        + \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ \theta_{k,n}(\tau) - P_{k,n}(\tau) ] d\tau .    (4.1.1.2)

The first integral is the error due to approximating g(\tau, u_n(\tau)) by \theta_{k,n}(\tau). As was proved in Section 2.2.1, this integral is O(H^{k+1}), where H is the maximum step size considered. The second integral is the error due to the inaccurate values y_{n-i+1}. Notice that

    \Big\| \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ \theta_{k,n}(\tau) - P_{k,n}(\tau) ] d\tau \Big\|
      = \Big\| h_{n+1} \sum_{i=1}^{k} \tilde{Q}_{ki} [ g(x_{n-i+1}, u_n(x_{n-i+1})) - g(x_{n-i+1}, y_{n-i+1}) ] \Big\| ,    (4.1.1.3)

where

    \tilde{Q}_{ki} = \int_0^1 e^{A(1-\alpha)h_{n+1}} \ell_i(x_n + \alpha h_{n+1}) d\alpha  and
    \ell_i(x) = \prod_{j=1, j \ne i}^{k} \frac{x - x_{n-j+1}}{x_{n-i+1} - x_{n-j+1}}  for i = 1, ..., k .

Let the ratios of successive step sizes be bounded; with the condition on A, it can be shown [28] that a = \sum_{i=1}^{k} ||\tilde{Q}_{ki}|| is uniformly bounded. In this case (4.1.1.3) is bounded by

    H a L_g \sum_{i=1}^{k} || u_n(x_{n-i+1}) - y_{n-i+1} || .

To get an estimate on this summation, consider the variational equation of (4.1.1), i.e.

    V'(x) = \Big[ A + \frac{\partial g}{\partial y}(x, y(x)) \Big] V(x) ;  V(x_n) = I .    (4.1.1.4)

It is known that (4.1.1.4) has the solution

    V(x) = \frac{\partial y(x,z)}{\partial z} \Big|_{z = y(x_n)} ,    (4.1.1.5)

where y(x,z) is the solution to (4.1.1) with initial value y(x_n) = z. Thus

    u_n(x, y_n) = y(x, y(x_n)) + V(x) (y_n - y(x_n)) + O(||y_n - y(x_n)||^2) .    (4.1.1.6)

Recall that y_n - y(x_n) = O(H^{k+1}) by Theorem 2.2 in Section 2.2.2. If we assume y(x_{n-i+1}) - y_{n-i+1} = y(x_n) - y_n + O(H^{k+2}), then, since V(x) = I + O(H), it follows that

    u_n(x_{n-i+1}) - y_{n-i+1} = O(H^{k+2})    (4.1.1.7)

and

    u_n(x_{n+1}) - P_{n+1} = O(H^{k+1}) .    (4.1.1.8)

We now analyze the local error of the corrected value. Let y_{n+1}(k) denote the result of correcting the predicted value P_{n+1}(k) of order k with a correction of the same order. Let \theta_{k,n+1}(\tau) be the polynomial of degree < k such that \theta_{k,n+1}(x_{n-i+1}) = g(x_{n-i+1}, u_n(x_{n-i+1})) for i = 0, ..., k-1.
Then

    u_n(x_{n+1}) - y_{n+1}(k) = \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ g(\tau, u_n(\tau)) - P^*_{k,n}(\tau) ] d\tau
      = \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ g(\tau, u_n(\tau)) - \theta_{k,n+1}(\tau) ] d\tau
      + \int_{x_n}^{x_{n+1}} e^{A(x_{n+1}-\tau)} [ \theta_{k,n+1}(\tau) - P^*_{k,n}(\tau) ] d\tau .    (4.1.1.9)

As in (4.1.1.2), the first integral is O(H^{k+1}). The second integral is bounded by

    \Big\| h_{n+1} \sum_{i=1}^{k-1} \tilde{Q}^*_{ki} [ g(x_{n-i+1}, u_n(x_{n-i+1})) - g(x_{n-i+1}, y_{n-i+1}) ] \Big\|
      + \Big\| h_{n+1} \tilde{Q}^*_{k0} [ g(x_{n+1}, u_n(x_{n+1})) - g(x_{n+1}, P_{n+1}) ] \Big\|
      <= H a^* L_g \Big\{ \sum_{i=1}^{k-1} || u_n(x_{n-i+1}) - y_{n-i+1} || + || u_n(x_{n+1}) - P_{n+1} || \Big\} .    (4.1.1.10)

By (4.1.1.7) and (4.1.1.8), we see that the second term of (4.1.1.9) is O(H^{k+2}), and hence

    u_n(x_{n+1}) - y_{n+1}(k) = O(H^{k+1}) .    (4.1.1.11)

By the same argument, we may show that

    u_n(x_{n+1}) - y_{n+1} = O(H^{k+2}) .    (4.1.1.12)

Define

    le_{n+1}(k) = u_n(x_{n+1}) - y_{n+1}(k) ;    (4.1.1.13)

then

    le_{n+1}(k) = u_n(x_{n+1}) - y_{n+1}(k)
      = y_{n+1} - y_{n+1}(k) + u_n(x_{n+1}) - y_{n+1}
      = y_{n+1} - y_{n+1}(k) + O(H^{k+2})
      \approx y_{n+1} - y_{n+1}(k) .    (4.1.1.14)

The importance of the estimate (4.1.1.14) is that the value y_{n+1}(k) can be readily obtained from P_{n+1}. In fact, from the definitions of P^*_{k,n}(\tau) and P^*_{k+1,n}(\tau), we see that

    P^*_{k+1,n}(x) = P_{k,n}(x) + (x - x_n) ... (x - x_{n-k+1}) \, g^P[x_{n+1}, ..., x_{n-k+1}]
                   = P^*_{k,n}(x) + (x - x_{n+1})(x - x_n) ... (x - x_{n-k+2}) \, g^P[x_{n+1}, ..., x_{n-k+1}] .    (4.1.1.15)

It follows that

    P^*_{k,n}(x) = P_{k,n}(x) + (x - x_n) ... (x - x_{n-k+2}) (x_{n+1} - x_{n-k+1}) \, g^P[x_{n+1}, ..., x_{n-k+1}]
                 = P_{k,n}(x) + \frac{(s h_{n+1}) ... (s h_{n+1} + \psi_{k-2}(n)) \, \psi_k(n+1)}{\psi_1(n+1) ... \psi_k(n+1)} \phi^P_{k+1}(n+1)
                 = P_{k,n}(x) + c_{k,n}(s) \phi^P_{k+1}(n+1) .    (4.1.1.16)

This implies

    y_{n+1}(k) = P_{n+1} + h_{n+1} \omega_{k,1} \phi^P_{k+1}(n+1) .    (4.1.1.17)

Together with (3.1.25), we obtain the (k,k)-order error estimate

    le_{n+1}(k) \approx h_{n+1} ( \omega_{k+1,1} - \omega_{k,1} ) \phi^P_{k+1}(n+1) .    (4.1.1.18)

4.1.2. LOCAL ERROR FOR LOWER ORDER MODE

By (4.1.1.11), the local error resulting from a PECE mode with both predictor P_{n+1}(k-1) and corrector y_{n+1}(k-1) of order k-1 is

    le_{n+1}(k-1) = u_n(x_{n+1}) - y_{n+1}(k-1) = O(H^k) .    (4.1.2.1)

Thus we may write

    le_{n+1}(k-1) = u_n(x_{n+1}) - y_{n+1}(k-1)
      = y_{n+1}(k) - y_{n+1}(k-1) + u_n(x_{n+1}) - y_{n+1}(k)
      = y_{n+1}(k) - y_{n+1}(k-1) + O(H^{k+1})
      \approx y_{n+1}(k) - y_{n+1}(k-1) .
(4.1.2.2)

The value y_{n+1}(k-1) cannot be obtained as cheaply as y_{n+1}(k), because

    y_{n+1}(k-1) = e^{A h_{n+1}} y_n + h_{n+1} \sum_{i=1}^{k-2} \delta^*_{k-1,i} \, g_{n-i+1} + h_{n+1} \delta^*_{k-1,0} \, g(x_{n+1}, P_{n+1}(k-1)) ,        (4.1.2.3)

where g(x_{n+1}, P_{n+1}(k-1)) is not readily known. Define

    \bar{y}_{n+1}(k-1) = e^{A h_{n+1}} y_n + h_{n+1} \sum_{i=1}^{k-2} \delta^*_{k-1,i} \, g_{n-i+1} + h_{n+1} \delta^*_{k-1,0} \, g(x_{n+1}, P_{n+1}) .        (4.1.2.4)

From (3.1.23) we have

    P_{n+1}(k-1) = e^{A h_{n+1}} y_n + h_{n+1} \sum_{i=1}^{k-1} \omega_{i,1} \, \varphi_i(n) ,        (4.1.2.5)

and, similar to (3.1.25),

    \bar{y}_{n+1}(k-1) = P_{n+1}(k-1) + h_{n+1} \, \omega_{k-1,1} \, \varphi^P_k(n+1) .        (4.1.2.6)

It follows from (3.2.3) that

    \bar{y}_{n+1}(k-1) = e^{A h_{n+1}} y_n + h_{n+1} \sum_{i=1}^{k-1} (\omega_{i,1} - \omega_{i-1,1}) \, \varphi^P_i(n+1) .        (4.1.2.7)

On the other hand,

    \| \bar{y}_{n+1}(k-1) - y_{n+1}(k-1) \|
        = \| h_{n+1} \delta^*_{k-1,0} [ g(x_{n+1}, P_{n+1}) - g(x_{n+1}, P_{n+1}(k-1)) ] \|
        \le H \, \delta^* L_g \, \| P_{n+1} - P_{n+1}(k-1) \|
        \le H \, \delta^* L_g \, \{ \| P_{n+1} - u_n(x_{n+1}) \| + \| u_n(x_{n+1}) - P_{n+1}(k-1) \| \} .        (4.1.2.8)

Estimate (4.1.1.8) guarantees that (4.1.2.8) is O(H^{k+1}). Thus (4.1.2.2) can be modified as

    le_{n+1}(k-1) = y_{n+1}(k) - y_{n+1}(k-1) + O(H^{k+1})
        = y_{n+1}(k) - \bar{y}_{n+1}(k-1) + \bar{y}_{n+1}(k-1) - y_{n+1}(k-1) + O(H^{k+1})
        = y_{n+1}(k) - \bar{y}_{n+1}(k-1) + O(H^{k+1})
        \approx y_{n+1}(k) - \bar{y}_{n+1}(k-1) .        (4.1.2.9)

By (4.1.1.17) and (4.1.2.6), we obtain the (k-1,k-1)-order error estimate

    le_{n+1}(k-1) \approx h_{n+1} (\omega_{k,1} - \omega_{k-1,1}) \, \varphi^P_k(n+1) .        (4.1.2.10)

By the same argument, we may obtain the (k-2,k-2)-order error estimate

    le_{n+1}(k-2) \approx h_{n+1} (\omega_{k-1,1} - \omega_{k-2,1}) \, \varphi^P_{k-1}(n+1) .        (4.1.2.11)

4.1.3. LOCAL ERROR FOR (k+1,k+1)-ORDER MODE

The procedure described in the last section fails for the estimate

    le_{n+1}(k+1) = u_n(x_{n+1}) - y_{n+1}(k+1)        (4.1.3.1)

because there is no dominant term in this case. That is, u_n(x_{n+1}) - y_{n+1} = O(H^{k+2}) is of the same order as u_n(x_{n+1}) - y_{n+1}(k+1) = O(H^{k+2}). If we were to take a step with a (k+1,k+2)-order mode, then (4.1.1.18) implies that

    le_{n+1}(k+1) \approx h_{n+1} (\omega_{k+2,1} - \omega_{k+1,1}) \, \varphi^P_{k+2}(n+1) + O(H^{k+3}) .        (4.1.3.2)

For convenience, we assume that a constant step size has been used.
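With a constant step size the modified differences \varphi_i(n) reduce to ordinary backward differences \nabla^{i-1} g_n, and the alternating binomial expansion of \nabla^{k+1} g_{n+1} invoked in (4.1.3.3) can be checked directly. A pure-Python sketch (my own illustration; the function names are not from the thesis code):

```python
import math

def nabla_binomial(g, m):
    """Backward difference of order m at the newest point of the sequence g
    (newest value last), via the alternating binomial sum."""
    return sum((-1) ** j * math.comb(m, j) * g[-1 - j] for j in range(m + 1))

def nabla_iterative(g, m):
    """The same quantity by repeated first differences, for cross-checking."""
    d = list(g)
    for _ in range(m):
        d = [d[i] - d[i - 1] for i in range(1, len(d))]
    return d[-1]

# Sample data: g evaluated on an equally spaced grid.
g = [math.sin(0.3 * n) for n in range(8)]
```

Besides agreeing with the iterated form, \nabla^m annihilates any polynomial sequence of degree below m, which is exactly why the order-selection tests below watch these differences.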
Then

    \varphi^P_{k+2}(n+1) = \nabla^{k+1} g(x_{n+1}, P_{n+1}(k+1))
        = g(x_{n+1}, P_{n+1}(k+1)) + \sum_{m=1}^{k+1} (-1)^m \binom{k+1}{m} g_{n-m+1}
        = g(x_{n+1}, P_{n+1}(k+1)) - g(x_{n+1}, y_{n+1}) + \nabla^{k+1} g_{n+1}
        = \nabla^{k+1} g_{n+1} + O(H^{k+2})
        = \varphi_{k+1}(n+1) - \varphi_{k+1}(n) + O(H^{k+2}) .        (4.1.3.3)

Here we have used the fact that P_{n+1}(k+1) - y_{n+1} = O(H^{k+2}) from (4.1.1.8). We thus obtain the (k+1,k+1)-order error estimate

    le_{n+1}(k+1) \approx h_{n+1} (\omega_{k+2,1} - \omega_{k+1,1}) (\varphi_{k+1}(n+1) - \varphi_{k+1}(n)) .        (4.1.3.4)

Notice that it is necessary to know \omega_{k+2,1}, \varphi_{k+1}(n+1) and \varphi_{k+1}(n) in order to apply this estimate.

4.2. AUTOMATIC CONTROL

Any program embodying a multistep method must invoke techniques for starting, changing the step size, and changing the order as necessary, not only to reduce the cost but also for reasons of stability. For an A-stable method, the latter concern is probably less important than the former when dealing with stiff systems. In this section we follow closely the algorithm adopted by Shampine and Gordon to develop a step size and order selection mechanism.

4.2.1. ACCEPTANCE CRITERION

Let \varepsilon > 0 be given. When the step from x_n to x_{n+1} has been calculated to the stage PE in the mode (3.3.1), we test the criterion

    ERR = \| le_{n+1}(k) \| = \| h_{n+1} (\omega_{k+1,1} - \omega_{k,1}) \, \varphi^P_{k+1}(n+1) \| \le \varepsilon        (4.2.1.1)

according to (3.2.5) and (4.1.1.18). If this criterion is satisfied, we accept the step by completing the remaining CE part in that mode, overwriting the updated \varphi_i(n+1) on the \varphi^*_i(n+1), and determining the step size and order for the next step. If the test fails, we cut the current step size in half and restart to predict the value P_{n+1}. If there are three consecutive failures, then we reduce the order to 1.

4.2.2. ERROR PREDICTION FOR FIXED STEP SCHEME

Recall that for i \ge 2,

    \omega_{i,1} = d^{(1)}_{i,n}(1)
        = \int_0^1 e^{A(1-s)h_{n+1}} \frac{(s h_{n+1})(s h_{n+1} + h_n) \cdots (s h_{n+1} + \psi_{i-2}(n))}{\psi_1(n+1) \, \psi_2(n+1) \cdots \psi_{i-1}(n+1)} \, ds .        (4.2.2.1)

If a constant step size h is used, then

    \omega_{i,1} = \frac{1}{(i-1)!} \int_0^1 e^{A(1-s)h} \, s(s+1) \cdots (s+i-2) \, ds .        (4.2.2.2)

This suggests that if the mesh sizes are not wildly different,
neither are these matrices \omega_{i,1}. In fact, one can show that

    \omega_{k+1,1} - \omega_{k,1}
        = \int_0^1 e^{A(1-s)h_{n+1}} \frac{(s h_{n+1})(s h_{n+1} + h_n) \cdots (s h_{n+1} + \psi_{k-2}(n)) \, ((s-1) h_{n+1})}{\psi_1(n+1) \, \psi_2(n+1) \cdots \psi_{k-1}(n+1) \, \psi_k(n+1)} \, ds .        (4.2.2.3)

Thus if we assume that h_{n+1} \ge h_{n+2-i} for every i = 2,\cdots,k, then

    \| \omega_{k+1,1} - \omega_{k,1} \| \le \gamma^*_k ,        (4.2.2.4)

where \gamma^*_k is the coefficient for the classical Adams-Moulton method, and is known to be relatively small. Therefore (4.1.1.18) suggests that the error at x_{n+2} is predicted to be

    le_{n+2}(k) \approx h_{n+2} (\omega_{k+1,1} - \omega_{k,1}) \, \varphi^P_{k+1}(n+2) ,        (4.2.2.5)

where we have replaced d^{(1)}_{k+1,n+1}(1) - d^{(1)}_{k,n+1}(1) by d^{(1)}_{k+1,n}(1) - d^{(1)}_{k,n}(1). Suppose h = h_{n+1} and we wish to consider h_{n+2} = \gamma h with \gamma near 1. For simplicity, we assume that the preceding steps were all taken with the same step size \gamma h. Suppose also that the order k has been chosen properly, so that the k-th order divided difference is nearly constant. Then

    \varphi^P_{k+1}(n+2) = (\gamma h)(2 \gamma h) \cdots (k \gamma h) \, g^P[x_{n+2}, \cdots, x_{n-k+2}]
        \approx \gamma^k \, \sigma_{k+1}(n+1) \, \varphi^P_{k+1}(n+1) ,        (4.2.2.6)

where \sigma_1(n+1) = 1 and

    \sigma_i(n+1) = \frac{h \cdot 2h \cdots (i-1)h}{\psi_1(n+1) \cdots \psi_{i-1}(n+1)}   for i = 2,\cdots,k+1.

In exactly the same context, we may use the quantity

    ERK = \| h_{n+1} (\omega_{k+1,1} - \omega_{k,1}) \, \sigma_{k+1}(n+1) \, \varphi^P_{k+1}(n+1) \|        (4.2.2.7)

as the error estimate at x_{n+1} when the preceding steps were all taken with step size h_{n+1}. Notice that the predicted error at x_{n+2} using step size h_{n+2} = \gamma h_{n+1} is now given by \gamma^{k+1} ERK according to (4.2.2.5). Moreover, the same quantity \gamma^{k+1} ERK also estimates the error at x_{n+1} had a step size \gamma h_{n+1} been used. We also use the quantities

    ERKM1 = \| h_{n+1} (\omega_{k,1} - \omega_{k-1,1}) \, \sigma_k(n+1) \, \varphi^P_k(n+1) \| ,
    ERKM2 = \| h_{n+1} (\omega_{k-1,1} - \omega_{k-2,1}) \, \sigma_{k-1}(n+1) \, \varphi^P_{k-1}(n+1) \|        (4.2.2.8)

to estimate the local errors at x_{n+1} and x_{n+2} had the preceding steps been taken with constant step size h_{n+1} at orders k-1 and k-2, respectively. Finally, when the preceding steps are actually taken with constant step size h_{n+1}, we use

    ERKP1 = \| le_{n+1}(k+1) \| = \| h_{n+1} (\omega_{k+2,1} - \omega_{k+1,1}) (\varphi_{k+1}(n+1) - \varphi_{k+1}(n)) \|        (4.2.2.9)

as the predicted error at x_{n+2} had the order k+1 been used.
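Since the predicted error scales like \gamma^{k+1} ERK, these estimates are what drive the selection rules laid out in Sections 4.2.3 and 4.2.4. A hedged Python sketch of the step-size part of that logic (function and variable names are mine, not taken from the NLMSUB source):

```python
def step_ratio(erk, eps, k):
    """Step-size ratio for the next step, following the rules of Sec. 4.2.4:
    choose gamma so that the predicted error gamma**(k+1) * ERK stays below
    0.5 * eps, then clamp to the doubling / retaining / cautious-reduction bands."""
    gamma = (0.5 * eps / erk) ** (1.0 / (k + 1))
    if gamma >= 2.0:
        return 2.0                        # double the step size
    if gamma >= 1.0:
        return 1.0                        # retain the step size
    return max(0.5, min(0.9, gamma))      # reduce, but never below half

# With the clamped ratio, the predicted error stays under the tolerance:
predicted = step_ratio(4e-7, 1e-6, 3) ** 4 * 4e-7
```

The clamping mirrors the heuristics of the Shampine-Gordon control: large computed ratios are capped at doubling, and reductions are kept in the band [0.5, 0.9] so that a single pessimistic estimate cannot collapse the step.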
4.2.3. ORDER SELECTION

The purpose of order adjustment is to find a polynomial of degree as low as possible which represents the nonlinear function g(x, y(x)) locally with reasonable accuracy. We adopt the philosophy that the order is changed only if the predicted error is thereby reduced and there is a trend toward doing so. Therefore the order is lowered only if one of the following happens:

(i) k = 2 and ERKM1 \le 0.5 ERK,
(ii) k > 2 and max(ERKM1, ERKM2) \le ERK,
(iii) ERKM1 \le min(ERK, ERKP1),
(iv) there are repeated step failures.

And the order is raised only if one of the following happens:

(i) ERKP1 < ERK,
(ii) k = 1 and ERKP1 < 0.5 ERK,
(iii) the starting phase flag is still on.

4.2.4. STEP SIZE SELECTION

After the order to be used is selected, we rename it k, and rename the corresponding predicted error at constant step size ERK. Then the predicted error at x_{n+2} with step size \gamma h is given by \gamma^{k+1} ERK. Since it is desired to have \gamma^{k+1} ERK \le \varepsilon, we select the step size h_{n+2} according to the ratio

    \gamma = \left( \frac{0.5 \, \varepsilon}{ERK} \right)^{1/(k+1)} ,        (4.2.4.1)

and

(i) double the step size if \gamma \ge 2,
(ii) retain the step size if 1 \le \gamma < 2,
(iii) reduce the step size by a factor max{0.5, min{0.9, \gamma}} if \gamma < 1,
(iv) halve the step size if there is a failure,
(v) double the step size if the starting phase flag is on.

4.2.5. STARTING PHASE

The local error at x_1 for order 0 is given by

    le_1(0) \approx h \, \omega_{1,1} \, g(x_0, y_0) .        (4.2.5.1)

As suggested by (4.1.1.11) and (4.1.1.12), we assume le_1(1) = h \, le_1(0). Then we would choose h such that

    \| le_1(1) \| \approx \| h^2 \omega_{1,1} \, g(x_0, y_0) \| \le \frac{\varepsilon}{2} .        (4.2.5.2)

But \| \omega_{1,1} \| \le 1, so we choose the starting step size

    h = \min \left( h_{input} \, , \, \left( \frac{0.5 \, \varepsilon}{\| g(x_0, y_0) \|} \right)^{1/2} \right) .        (4.2.5.3)

The starting phase flag is turned off whenever the appropriate order and step size are found, which is indicated by the lowering of the order or by a step failure.

5.
PROGRAMMING FLOWCHARTS

In this chapter we assemble all the pieces of the algorithms described in the previous sections to give a general overview of this nonlinear method. The organization is composed of four blocks, which are presented in terms of flowcharts.

5.1. BLOCK 1 -- PREPARATION OF COEFFICIENTS

[Flowchart; the chart itself is illegible in this copy.]

5.2. BLOCK 2 -- PREDICTION AND ERROR ESTIMATION

[Flowchart; recoverable labels: Evaluate \varphi_i(n); Predict P_{n+1}; Evaluate g(x_{n+1}, P_{n+1}); Generate \varphi^*_i(n+1); Generate ERR, ERK, ERKM1; Advance x_n to x_{n+1}; Lower order.]

5.3. BLOCK 3 -- [title illegible]

[Flowchart; recoverable labels: Turn off starting phase; Restore x; Use optimal step size; \varphi_i(n) \leftarrow \varphi_i(n+1); Step size too small -- error return; iflag < 3 -- halve step size and count the number of failures iflag; iflag = 3 -- set k = 1.]

5.4. BLOCK 4 -- CORRECTION AND AUTOMATIC ADJUSTMENT

[Flowchart; recoverable labels: Evaluate g(x_{n+1}, y_{n+1}); Correct; Update \varphi_i(n+1) for i = 1,\cdots,k+1; Lower order by 1; Turn off starting phase; Transfer estimated error ERR; Double step size; Raise order by 1; Reduce the step size; Compute \omega_{k+1,1} and \omega_{k+2,1}; Return.]

6. NUMERICAL EXAMPLES

Test results from several problems, representing both linear and nonlinear systems of equations, are presented in this chapter. We also compare our algorithm NLMSUB with Gear's DIFSUB and Lawson's IRKSUB. It should be emphasized that the principal objective of these tests is to confirm the effectiveness of this nonlinear method rather than to make a detailed comparison of the available methods,
although several results do show some significant advantages.

6.1. HOMOGENEOUS LINEAR SYSTEM WITH COMPLEX EIGENVALUES [9]

Problem:

    y' = [   -1      1       0       0   ]
         [ -100     -1       0       0   ] y ;    y(0) = (1, 0, 1, 0)^T .
         [    0      0    -100       1   ]
         [    0      0  -10000    -100   ]

Exact Solution:

    y(x) = ( e^{-x} cos(10x),  -10 e^{-x} sin(10x),  e^{-100x} cos(100x),  -100 e^{-100x} sin(100x) )^T .

Test Parameters: Local Error Tolerance eps = 10^{-6}; Time Interval x in [0, 20].

Eigenvalues: -1 \pm 10i, -100 \pm 100i.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        5.80      5.62     12.73*
    STEPS      1488       404        11
    FNS        3839      4594        23
    JACOB       133        56         0
    INV         133       168         2
    PADE          0         0         1
    TIME        1.222     1.287     0.060#

Notations:
+ These data were taken from the reference.
* These maximum errors were measured by -log_{10} \sum_i [y_i(x) - Y_i]^2.
# This execution was done on a CYBER 750; times are in seconds.

Remarks: This problem is designed to assess the effect on the performance of a method when the eigenvalues form a rather large angle with the negative x-axis. Our scheme is extremely efficient on this type of problem because of the stability of diagonal Padé approximations. Notice that no Jacobian evaluation is needed for this nonlinear method.

6.2. NONHOMOGENEOUS LINEAR SYSTEM WITH REAL EIGENVALUES [20]

Problem:

    y' = [ -4498     -5996 ] y + [  0.006 - x     ] ;    y(0) = (1/1500) (25498, -16499)^T .
         [ 2248.5     2997 ]     [ -0.5034 - 3x   ]

Exact Solution:

    y(x) = ( -2 e^{-x} + 7 e^{-1500x} + (17998 - 14991x)/1500 ,
             1.5 e^{-x} - 3.5 e^{-1500x} - (13499 - 1124.5x)/1500 )^T .

Test Parameters: Local Error Tolerance eps = 10^{-7}; Time Interval x in [0, 25].

Eigenvalues: -1, -1500.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        6.42      6.54      6.73*
    AVE        7.73      7.70      7.09*
    MIN        9.82     11.99      8.56*
    STEPS       235       104        16
    FNS         539       592        33
    JACOB        23        23         0
    PADE          0         0         1
    TIME        5.26      4.21      0.05#

Remarks: This problem fits exactly into our model, and hence we expect accurate numerical output. The step control seemed to allow the step sizes to be doubled all the way. In fact, the step size allowed from x = 25.03 to the next point was nearly 25.
This is an optimistic indication that even larger step sizes can be used as time goes on.

6.3. NONHOMOGENEOUS LINEAR SYSTEM WITH COMPLEX EIGENVALUES [20]

Problem:

    y' = U [  0    1     0     0  ] U^T y + U [  x^2 + 2x  ] ;    y(0) = (1, 0, 0, 1)^T ,
           [ -1    0     0     0  ]           [  x^2 - 2x  ]
           [  0    0  -100  -900  ]           [ -800x + 1  ]
           [  0    0   900  -100  ]           [ -1000x - 1 ]

where

    U = (1/2) [ -1   1   1   1 ]
              [  1  -1   1   1 ]
              [  1   1  -1   1 ]
              [  1   1   1  -1 ] .

Exact Solution:

    y(x) = U ( sin x + x^2 ,  cos x - x^2 ,  e^{-100x} cos(900x) + x ,  e^{-100x} sin(900x) - x )^T .

Test Parameters: Local Error Tolerance eps = 10^{-...}; Time Interval x in [0, 25].

Eigenvalues: \pm i, -100 \pm 900i.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        4.55      4.22      6.75*
    AVE        6.77      6.44      6.86*
    MIN        9.32      8.37      7.31*
    STEPS      2982       532        25
    FNS        8332      2986        51
    JACOB        37        41         0
    PADE          0         0         1
    TIME      106.03     41.88       .19#

Remarks: This problem is characterized by the large angles formed by its eigenvalues with the negative real axis. Gear's method has some difficulty in this situation; Lawson's method works better. The matrix U is used to couple all the components and hence makes the problem complicated. The computation in our algorithm actually terminated at x = 39.21 instead of 25, whereas the next allowable step size was as large as 39. So conceivably the above data would be even more prominent if we took the time interval from 0 to 80. Notice that the numerical difficulty for a stiff problem usually appears in the steady-state part, i.e., when x is large. So, at least for this particular problem, the nonlinear method is more efficient in the long run.

6.4. NONHOMOGENEOUS LINEAR SYSTEM WITH VARIABLE COEFFICIENTS AND REAL NON-CONSTANT EIGENVALUES [20]

Problem:

    y' = U [ -1 - 0.01 sin(0.01x)       0              0         0  ] U^T y + U [ 3 + 0.03 sin(0.01x) ] ;
           [          0           -100 - sin x         0         0  ]           [ 200 + 2 sin x       ]
           [          0                 0       -1000 + cos x    0  ]           [ -1000 + cos x       ]
           [          0                 0              0       -10  ]           [ -20                 ]

    y(0) = U (3 + e, 2 + e, 0, -1)^T .

Exact Solution:

    y(x) = U ( e^{-x + cos(0.01x)} + 3 ,  e^{-100x + cos x} + 2 ,  e^{-1000x + sin x} - 1 ,  e^{-10x} - 2 )^T .

Test Parameter: Local Error Tolerance eps = 10^{-7}; Time Interval x in [0, 100].

Eigenvalues: -1 - 0.01 sin(0.01x),
-100 - sin x, -1000 + cos x, -10.

Program Setting:

    y' = U [ -1     0      0     0 ] U^T y
           [  0  -100      0     0 ]
           [  0     0  -1000     0 ]
           [  0     0      0   -10 ]

         + U { [ -0.01 sin(0.01x)     0        0     0 ] U^T y + [ 3 + 0.03 sin(0.01x) ] } .
               [         0         -sin x      0     0 ]         [ 200 + 2 sin x       ]
               [         0            0      cos x   0 ]         [ -1000 + cos x       ]
               [         0            0        0     0 ]         [ -20                 ]

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        5.83      4.82      6.04*
    AVE        8.02      7.00      6.34*
    MIN       10.11      8.36     12.06*
    STEPS       323       154       177
    FNS         774       867       155
    JACOB        31        44         0
    PADE          0         0         1
    TIME       16.54     26.47       .39#

Remarks: Being a variable coefficient linear system, this problem has real, non-constant eigenvalues whose values stay in the neighborhoods of -1, -10, -100 and -1000. We have to change its form in the program setting to fit our model problem (1.2), and thus make g(x, y(x)) depend on y. Notice that 75 steps were used for this problem over the interval [0, 36.6], in contrast to 25 steps over [0, 39.2] for the previous problem. The reason a much greater amount of work was needed is probably the poor representation of the severely nonlinear function g(x, y(x)) by the polynomial P(x). There is some numerical evidence supporting this conjecture: in the previous problem the optimal order used was k = 3 (average k = 2.5), while in this problem the order used was as high as k = 6 (average k = 3.7); once the nonlinear part died out (approximately when x \ge 20), so that g(x, y(x)) became constant, the order dropped abruptly to k = 1 and the step size began to grow dramatically.

6.5. NONLINEAR SYSTEM WITH COUPLING COMPONENTS [9]

6.5.1. COUPLING FROM TRANSIENT COMPONENTS TO SMOOTH COMPONENTS

Problem:

    y' = [ -1     0     0     0  ] y + [ y_2^2 + y_3^2 + y_4^2 ] ;    y(0) = (1, 1, 1, 1)^T .
         [  0   -10     0     0  ]     [ 10 (y_3^2 + y_4^2)    ]
         [  0     0   -40     0  ]     [ 40 y_4^2              ]
         [  0     0     0  -100  ]     [ 2                     ]

Test Parameters: Local Error Tolerance eps = 10^{-6}; Time Interval x in [0, 20].

Eigenvalues: -1, -10, -40, -100.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        6.00      6.00        ?
    STEPS       221        40       104
    FNS         529       450       209
    JACOB        22        31         0
    INV          22        93         2
    PADE          0         0         1
    TIME        0.178     0.314     0.351#

Notation:
? No error report, because the closed form of the exact solution is not known.

Remarks: This problem has a nonlinear coupling from the transient components to the smooth components, and hence the function g(x, y(x)) is highly nonlinear. The order used was as high as k = 9 (average k = 4.68). But, as in the previous problem, once the transient solutions became stabilized, the order dropped to k = 1 and the step size began to grow.

6.5.2. COUPLING FROM SMOOTH COMPONENTS TO TRANSIENT COMPONENTS

Problem:

    y' = [ -1     0     0     0  ] y + [ 2                            ] ;    y(0) = (1, 1, 1, 1)^T ,
         [  0   -10     0     0  ]     [ B y_1^2                      ]
         [  0     0   -40     0  ]     [ 4B (y_1^2 + y_2^2)           ]
         [  0     0     0  -100  ]     [ 10B (y_1^2 + y_2^2 + y_3^2)  ]

where B = 20.

Test Parameters: Local Error Tolerance eps = 10^{-6}; Time Interval x in [0, 20].

Eigenvalues: -1, -10, -40, -100.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        5.31      5.40        ?
    STEPS       650      1575       286
    FNS        1839     18805       322
    JACOB        66       319         0
    INV          66       957        37
    PADE          0         0        36
    TIME        0.597     6.171     1.776

Remarks: This problem has a very strong nonlinear coupling from the smooth components to the transient components and hence causes great difficulty. In fact, this coupling gives the transient components very large positive derivatives at the very beginning; eventually this tendency is diminished and the order is lowered, when the smooth components cannot match the rapid decay of the transient components. A nearly periodic reduction in the step sizes results in frequent calls to the Padé approximation. This phenomenon is very unusual, and what causes it is not known to us.

6.6. NONLINEAR SYSTEM WITH REAL EIGENVALUES [9]

Problem:

    y' = [ -0.2    0.2      0    ] y + [ 0             ] ;    y(0) = (0, 0, 0)^T .
         [  10    -60    0.125   ]     [ 0.125 y_1 y_2 ]
         [   0      0     -1     ]     [ y_3 + 1       ]

Test Parameter: Local Error Tolerance eps = 10^{-6}; Time Interval x in [0, 400].

Eigenvalues: 0, -0.167 to -0.012, -60 to -11.

Test Results:

              DIFSUB+   IRKSUB+   NLMSUB
    MAX        6.10      4.51        ?
    STEPS       186       508       36??
    FNS         630      7918        73
    JACOB        46       532         0
    INV          46      1596         2
    PADE          0         0         1
    TIME        0.149     3.200     0.105#

Notation:
?? The execution was actually carried only to the point x = 147.5.

Remark: The execution had proceeded very smoothly, with step sizes as large as 8.19, to the point x = 147.5, when there was an indication to reduce the step size by a factor \gamma = 0.9. The execution was then terminated because of an overflow in the computation of e^{A h_{new}}. There are two reasons for this: the denominator in the Padé approximation for e^{A h_{new}} is nearly singular, and the matrix e^{A h_{new}} has a "hump" phase [27] near h_{new}. We believe careful programming should overcome these difficulties.

6.7. NONLINEAR SYSTEM WITH REAL MIXED TYPE EIGENVALUES [11]

Problem:

    y' = U [ -1000     0     0       0    ] U^T y + U [ z_1^2 ] ;    y(0) = (-1, -1, -1, -1)^T ,
           [    0   -800     0       0    ]           [ z_2^2 ]
           [    0      0    10       0    ]           [ z_3^2 ]
           [    0      0     0   -0.001   ]           [ z_4^2 ]

where z = U y.

Exact Solution:

    y(x) = U (  1000 / (1 - 1001 e^{1000x}) ,
                 800 / (1 - 801 e^{800x}) ,
                 -10 / (1 + 9 e^{-10x}) ,
               0.001 / (1 - 1.001 e^{0.001x}) )^T .

Test Parameters: Local Error Tolerance eps = 10^{-6}; Time Interval x in [0, 100].

Eigenvalues:

    -1000 + 2000 / (1 - 1001 e^{1000x}) ,    -800 + 1600 / (1 - 801 e^{800x}) ,
       10 - 20 / (1 + 9 e^{-10x}) ,        -0.001 + 0.002 / (1 - 1.001 e^{0.001x}) .

Program Setting:

    y' = U [ -1000     0      0       0    ] U^T y + U [ z_1^2           ]
           [    0   -800      0       0    ]           [ z_2^2           ]
           [    0      0    -10       0    ]           [ z_3^2 + 20 z_3  ]
           [    0      0      0   -0.001   ]           [ z_4^2           ]

Test Results:

[The detailed comparison table of DIFSUB and NLMSUB -- current time, steps, present error, function evaluations, inversions, and average step size at a sequence of output points -- is largely illegible in this copy.]
The execution takes 0.98 seconds on the CYBER 750. The step size has been restricted to be no greater than 1.5.

Remarks: In the initial stage of the integration, the problem is a mixed type nonlinear stiff system because of the presence of a positive eigenvalue, but our algorithm still seems to work satisfactorily. When x \ge 24, underflow makes it impossible to recompute e^{A h_{new}} through the Padé approximation, in the case where e^{A h_{old}} was obtained by squaring and h_{new} is then obtained by reduction, but A h_{new} is too small to be the argument of the exponentials. To monitor this we may either constrain the maximal allowable step size or reduce the order p used in the Padé approximation. In either case this does not contradict A-stability, since the numerical results show a strong tendency to increase the step size.

6.8. NONLINEAR SYSTEM WITH COMPLEX MIXED TYPE EIGENVALUES [17]

Problem:

    y' = U [  B_1   B_2     0      0   ] U^T y + U [ z_1^2 - z_2^2 ] ;    y(0) = U (-1, 0, -1, -1)^T ,
           [ -B_2   B_1     0      0   ]           [ 2 z_1 z_2     ]
           [   0     0    -B_3     0   ]           [ z_3^2         ]
           [   0     0      0    -B_4  ]           [ z_4^2         ]

where z = U y.

Exact Solution:

    z_1(x) = [ B_1 (\omega_1 - 1) - B_2 \omega_2 ] / [ (\omega_1 - 1)^2 + \omega_2^2 ] ,
    z_2(x) = -[ B_2 (\omega_1 - 1) + B_1 \omega_2 ] / [ (\omega_1 - 1)^2 + \omega_2^2 ] ,
    z_3(x) = B_3 / (1 - (1 + B_3) e^{B_3 x}) ,
    z_4(x) = B_4 / (1 - (1 + B_4) e^{B_4 x}) ,

and y(x) = U z(x), where

    \omega_1 = e^{-B_1 x} [ (1 - B_1) cos B_2 x - B_2 sin B_2 x ] ,
    \omega_2 = e^{-B_1 x} [ B_2 cos B_2 x + (1 - B_1) sin B_2 x ] .

Eigenvalues: 2 z_1 + B_1 \pm |2 z_2 - B_2| i, 2 z_3 - B_3, 2 z_4 - B_4.

Test Parameters: Local Error Tolerance eps = 10^{-4}; Time Interval x in [0, 50]; B_3 = 100, B_4 = 0.1; B_1 and B_2 are varied.

Test Results:

              B_1 = -10,   B_1 = 1,     B_1 = 10,    B_1 = -10,
              B_2 = 0      B_2 = 100    B_2 = 100    B_2 = 10
    AVE         2.84         3.45         3.70         5.35+
    STEPS         63          809           96         1866
    FNS          127         1619          195         3933
    INV            4            3            6          200
    PADE           3            2            5          199
    TIME        0.440        2.711        0.667       17.942

Note:
+ This run was done with local error tolerance eps = 10^{-6}.

Remarks: This set of problems takes on different special characters depending on the values of B_1 and B_2.
When B_1 = -10 and B_2 = 0, all the eigenvalues are real, because z_2(x) \equiv 0; but there are positive values in the transient region, so this problem is similar to that of the preceding section. When B_1 = 1 and B_2 = 100, all the eigenvalues have negative real parts, but the first two form very large angles (nearly 88 degrees) with the negative x-axis. The accuracy returned is satisfactory, but the step sizes used are relatively small (average about 6.2 x 10^{-2}). When B_1 = 10 and B_2 = 100, we have a similar situation (with angles nearly 84 degrees), but the step sizes used are relatively large (average about 5.2 x 10^{-1}). The reason for this contrast is not clear. When B_1 = -10 and B_2 = 10, this is a mixed type problem, with eigenvalues having positive real parts in the beginning and turning negative eventually. The performance is very poor. In particular, there are frequent reductions of the step sizes, whereas the "global error" implies that larger step sizes could have been used. Probably this is an indication that the error control mechanism built in Chapter 4 is not perfect.

7. CONCLUSION AND RECOMMENDATION

The numerical examples in the preceding chapter have demonstrated the workability of this nonlinear multistep method, which we have developed to solve stiff initial value problems. Its efficiency can certainly be improved with more careful programming. We draw the following conclusions from our investigation:

The formulas derived in Section 2.1 are somewhat cumbersome to apply in practical situations. But since the method is endowed with the A-stability property, the feature of varying the step size to avoid instability is not important. Therefore, in those cases where memory storage is limited, the fixed step scheme can be used.

The numerical techniques described in Section 2.3.2 deserve further study, because integrals involving matrix exponentials occur frequently in optimal linear regulator problems.
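To make the last remark concrete: the coefficients used throughout (the \delta's and \omega's) are integrals of the form \int_0^h e^{A(h-s)} w(s) ds, which in the simplest scalar case (w \equiv 1) has the closed form (e^{ah} - 1)/a. A small check (my own illustration, not thesis code) verifying that closed form against composite Simpson quadrature, even for a stiffly negative coefficient:

```python
import math

def exp_integral_closed(a, h):
    """Closed form of the scalar integral \int_0^h e^{a(h - s)} ds, a != 0."""
    return (math.exp(a * h) - 1.0) / a

def exp_integral_simpson(a, h, n=200):
    """Composite Simpson approximation of the same integral (n even)."""
    f = lambda s: math.exp(a * (h - s))
    w = h / n
    total = f(0.0) + f(h)
    total += sum((4 if i % 2 else 2) * f(i * w) for i in range(1, n))
    return total * w / 3.0

# Stiff scalar coefficient a = -50 over a step of length h = 0.1.
diff = abs(exp_integral_closed(-50.0, 0.1) - exp_integral_simpson(-50.0, 0.1))
```

In the matrix case the closed form becomes A^{-1}(e^{Ah} - I), and evaluating such expressions stably is exactly the question raised for the techniques of Section 2.3.2.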
As was mentioned in Chapter 3, when A = 0 most of the quantities developed here reduce to the scalars used by Shampine and Gordon in their well-known subroutine STEP. Because of this closeness, it is plausible that the two codes could be combined, with a built-in automatic switch (stiff on, non-stiff off) for general problems. To accomplish this goal, the primary work would be the detection of stiffness, which, unfortunately, is still on a heuristic basis and needs more exploration.

Because of the presence of large Lipschitz constants and the use of large step sizes, the error estimates derived in Chapter 4 do not work completely satisfactorily. We believe that more realistic error estimates would give faster convergence. The global error analysis is still a wide open problem.

Besides the weakness of demanding a great deal of memory, this nonlinear method has not yet been completely developed to handle the case when A is time dependent. Although at the current stage a periodic updating mechanism, such as the one suggested in (1.4), can be used, we wish to establish an automatic one in the algorithm so that a much larger class of problems can be handled.

Finally, more investigation is needed to explain the stumbling behavior of this nonlinear method when applied to the problems of Section 6.8.

LIST OF REFERENCES

1. G. Birkhoff and R. Varga, Discretization errors for well set Cauchy problems, I, J. Math. Phys., 44 (1965), pp. 1-23.

2. G. Bjurel, G. Dahlquist, B. Lindberg and S. Linde, Survey of stiff ordinary differential equations, Report NA70.11, Department of Information Processing, Royal Inst. of Tech., Stockholm, 1970.

3. J.L. Blue and H.K. Gummel, Rational approximation to matrix exponential for systems of stiff differential equations, J. Comput. Phys., 5 (1970), pp. 70-83.

4. D. Calahan, Numerical solution of linear systems with widely separated time constants, Proc. I.E.E.E., 55 (1967), pp. 2016-2017.

5. S.L. Campbell and C.D.
Meyer, Generalized inverses of linear transformations, Pitman, London, 1979.

6. J. Certaine, The solution of ordinary differential equations with large time constants, in Mathematical Methods for Digital Computers, ed. A. Ralston and H. Wilf, Wiley, N.Y., 1960, pp. 128-132.

7. G.G. Dahlquist, A special stability problem for linear multistep methods, BIT, 3 (1963), pp. 27-43.

8. B.L. Ehle and J.D. Lawson, Generalized Runge-Kutta processes for stiff initial-value problems, J. Inst. Maths. Applic., 16 (1975), pp. 11-21.

9. W.H. Enright, T.E. Hull and B. Lindberg, Comparing numerical methods for stiff systems of O.D.E.'s, BIT, 15 (1975), pp. 10-48.

10. W. Fair and Y.L. Luke, Padé approximation to the operator exponential, Numer. Math., 14 (1970), pp. 379-382.

11. C.W. Gear, The automatic integration of stiff ordinary differential equations, in Information Processing 68, ed. A.J.H. Morrell, North Holland Publishing Company, Amsterdam, pp. 187-193.

12. C.W. Gear, Numerical initial value problems in ordinary differential equations, Prentice-Hall, Englewood Cliffs, N.J., 1971.

13. C.W. Gear, Numerical solution of ordinary differential equations: is there anything left to do?, SIAM Review, 23 (1981), pp. 10-24.

14. P. Henrici, Discrete variable methods in ordinary differential equations, Wiley, N.Y., 1962.

15. R.K. Jain, Some A-stable methods for stiff ordinary differential equations, Math. Comp., 26 (1972), pp. 71-77.

16. F. Krogh, A variable step variable order multistep method for the numerical solution of ordinary differential equations, in Information Processing 68, ed. A.J.H. Morrell, North Holland Publishing Company, Amsterdam, pp. 194-199.

17. F. Krogh, On testing a subroutine for numerical integration of ordinary differential equations.

18. J.D. Lambert and S.T. Sigurdsson, Multistep methods with variable matrix coefficients, SIAM J. Numer. Anal., 9 (1972), pp. 715-734.

19. J.D.
Lambert, Computational methods in ordinary differential equations, Wiley, London, 1973.

20. J.D. Lawson, Generalized Runge-Kutta processes for stable systems with large Lipschitz constants, SIAM J. Numer. Anal., 4 (1967), pp. 372-380.

21. J.D. Lawson, Some numerical methods for stiff ordinary and partial differential equations, in Proc. Second Manitoba Conference on Numerical Mathematics, 1972, pp. 27-34.

22. D. Lee and S. Preiser, A class of nonlinear multistep A-stable numerical methods for solving stiff differential equations, Comput. Math. Appl., 4 (1978), pp. 43-51.

23. W. Liniger and R. Willoughby, Efficient integration methods for stiff systems of ordinary differential equations, SIAM J. Numer. Anal., 7 (1970), pp. 47-66.

24. W. Miranker, Matricial difference schemes for integrating stiff systems of ordinary differential equations, Math. Comput., 25 (1971), pp. 717-728.

25. W. Miranker, Numerical methods for stiff equations and singular perturbation problems, D. Reidel Publishing Company, Holland, 1981.

26. W.D. Murphy, Hermite interpolation and A-stable methods for stiff ordinary differential equations, Appl. Math. Comput., 3 (1977), pp. 103-112.

27. C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Review, 20 (1978), pp. 801-836.

28. L.F. Shampine and M.K. Gordon, Computer solution of ordinary differential equations: the initial value problem, Freeman, San Francisco, 1975.

29. L.F. Shampine and C.W. Gear, A user's view of solving stiff ordinary differential equations, SIAM Review, 21 (1979), pp. 1-17.

30. L.F. Shampine, Stiffness and non-stiff differential equation solvers, in Numerische Behandlung von Differentialgleichungen, ISNM, 27 (1975), pp. 287-301.

31. L.F. Shampine, Stiffness and nonstiff differential equation solvers, II: detecting stiffness with Runge-Kutta methods, ACM Trans. Math. Software, 3 (1977), pp. 44-53.

32. J.M. Varah, Stiffly stable linear multistep methods of extended order, SIAM J. Numer. Anal., 15 (1978), pp.
1234-1246.

33. R.S. Varga and E.B. Saff, Padé and rational approximation: theory and applications, Academic Press, N.Y., 1977.

34. R.S. Varga, On higher order stable implicit methods for solving parabolic partial differential equations, J. Math. Phys., 40 (1961), pp. 220-231.

35. R.C. Ward, Numerical computation of the matrix exponential with accuracy estimate, SIAM J. Numer. Anal., 14 (1977), pp. 600-610.

36. G. Wanner, E. Hairer and S.P. Norsett, Order stars and stability theorems, BIT, 18 (1978), pp. 475-489.

37. R.A. Willoughby, Stiff differential systems, Plenum Press, N.Y., 1974.