This is to certify that the dissertation entitled "Convergence of Several Iterative Methods and Solving Symmetric Tridiagonal Eigenvalue Problems," presented by Qingchuan Yao, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Applied Mathematics.

Major professor
Date: June 30, 1998

MSU is an Affirmative Action/Equal Opportunity Institution

CONVERGENCE OF SEVERAL ITERATIVE METHODS AND SOLVING SYMMETRIC TRIDIAGONAL EIGENVALUE PROBLEMS

By

Qingchuan Yao

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mathematics

1998

ABSTRACT

CONVERGENCE OF SEVERAL ITERATIVE METHODS AND SOLVING SYMMETRIC TRIDIAGONAL EIGENVALUE PROBLEMS

By Qingchuan Yao

This dissertation studies several iterative methods and presents an algorithm for the eigenvalue problem of symmetric tridiagonal matrices.

We first propose some modified Halley iterations for finding the zeros of polynomials. We investigate the non-overshoot properties of the modified Halley iterations and other important properties. We also extend Halley's iteration to systems of polynomial equations in several variables.

Then, we study several important properties of the two major concurrent iterative methods for finding all polynomial zeros simultaneously: Durand-Kerner's method and Aberth's method.

Besides Durand-Kerner's method and Aberth's method, several other concurrent iterative methods have been created in the past two decades. However, none of them converges monotonically when solving polynomials with real zeros. The monotonic convergence property plays a key role in solving symmetric tridiagonal eigenvalue problems. Therefore, we propose two new concurrent iterative methods, with quadratic and cubic convergence rates respectively. Both of the new methods converge monotonically.

Finally, we present an algorithm for the eigenvalue problem of symmetric tridiagonal matrices. Our algorithm employs determinant evaluation, a split-and-merge strategy, and our newly developed concurrent iterative method with cubic convergence rate and the monotonic convergence property. Our algorithm is parallel in nature, and the preliminary numerical results show that it is very promising.

To: Jenny Yao, my lovely daughter, and Lianfen Qian, my beloved wife

ACKNOWLEDGMENTS

I would like to thank Professor Tien-Yien Li, my dissertation advisor, for his constant encouragement and support during my graduate study at Michigan State University. I would also like to thank him for suggesting the problem and for the helpful directions which made this work possible.

I would like to thank Professor Qiang Du, from whom I took several courses which laid the foundation for this research. I would also like to thank him for the helpful discussions.

I would like to thank my dissertation committee members Professor Wei-Eihn Kuan, Professor Jay C. Kurtz, Professor William T.
Sledd, and Professor Zhengfang Zhou for their valuable suggestions and their time.

Contents

1 Introduction . . . 1

2 The Halley Method . . . 5
2.1 Introduction . . . 5
2.2 Derivation for the modified Halley iteration and its convergence rate . . . 6
2.3 Non-overshoot properties and monotonic convergence . . . 10
2.4 Over-estimation of the errors by the modified Halley iteration . . . 16
2.5 The two-sided modified Halley iteration . . . 19
2.6 Halley's iteration in real or complex Banach space . . . 23
2.6.1 Derivation for Halley's iteration in Banach space . . . 23
2.6.2 Convergence of Halley's iteration in Banach space . . . 24
2.6.3 The proof of the convergence theorem for Halley's iteration in Banach space . . . 26

3 The Durand-Kerner Method and Aberth Method . . . 35
3.1 Introduction . . . 35
3.2 New derivation of Durand-Kerner's method by homotopy . . . 36
3.3 Two-step iterative scheme of Durand-Kerner's method . . . 37
3.4 An equivalent form of Aberth's method . . . 39
3.5 R-step Aberth's methods . . . 40

4 A New Method with a Quadratic Convergence Rate for Simultaneously Finding Polynomial Zeros . . . 43
4.1 Introduction . . . 43
4.2 A new method with quadratic convergence rate and monotonic convergence . . . 44
4.3 Multiple zeros and clusters of zeros . . . 47
4.4 Single-step methods . . . 50
4.5 Numerical results . . . 58

5 Solving Symmetric Tridiagonal Eigenvalue Problems . . . 60
5.1 Introduction . . . 60
5.2 A new method with cubic convergence rate and monotonic convergence . . . 61
5.3 Evaluation of the logarithmic derivative P'/P of the determinant . . . 66
5.4 The split-merge process . . . 68
5.5 Cluster spectrum . . . 69
5.6 Stopping criteria and three iterative schemes . . . 70
5.7 Numerical tests . . . 73

Bibliography . . . 78

List of Tables

4.1 Numerical solutions by the new method . . . 59
5.1 The execution time in seconds for evaluating all eigenvalues . . . 75

List of Figures

5.1 Algorithm DETEVL . . . 76
5.2 Split and merge processes

Chapter 1

Introduction

In this work, we deal with finding zeros of polynomials and solving matrix eigenvalue problems.

To approximate a zero of a function f(z), real or complex, Halley's iteration is defined as follows:
$$z_{k+1} = z_k - \frac{f(z_k)}{f'(z_k) - \dfrac{f(z_k) f''(z_k)}{2 f'(z_k)}}, \qquad k = 0, 1, 2, \ldots,$$
which was initially discovered in 1694 by E. Halley, who computed the orbit of the Halley comet. Halley's iteration is cubically convergent for simple zeros of f(z). However, it converges only linearly for multiple zeros. In Chapter 2, we apply Newton's iteration to derive the following modified Halley iteration:
$$z_{k+1} = z_k - \frac{f(z_k)}{\dfrac{1+m}{2m}\, f'(z_k) - \dfrac{f(z_k) f''(z_k)}{2 f'(z_k)}}, \qquad k = 0, 1, 2, \ldots,$$
which approximates a zero of multiplicity m of f(z) with a cubic convergence rate. We show the non-overshoot properties of this modified iteration for polynomials with real zeros and obtain the monotonic convergence of the Halley iteration in this situation as a by-product. Moreover, this modified Halley iteration will be used to derive an inequality which gives a circle containing at least one zero of the polynomial.
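To make the two formulas above concrete, the short sketch below applies the classical Halley iteration (m = 1) and the modified Halley iteration to a polynomial with a double zero. This is a minimal Python illustration written for this summary, not code from the dissertation; the function halley_step and the test polynomial (x - 1)^2 (x + 2) are our own choices, and closed-form derivatives are assumed to be available.

    def halley_step(f, df, ddf, x, m=1):
        """One step of the modified Halley iteration H_m; m = 1 is classical Halley."""
        fx, dfx, ddfx = f(x), df(x), ddf(x)
        if fx == 0.0:          # already at a zero of f; nothing to do
            return x
        denom = (1.0 + m) / (2.0 * m) * dfx - 0.5 * fx * ddfx / dfx
        return x - fx / denom

    # Test polynomial f(x) = (x - 1)^2 (x + 2): a double zero at 1, a simple zero at -2.
    f   = lambda x: (x - 1.0) ** 2 * (x + 2.0)
    df  = lambda x: 2.0 * (x - 1.0) * (x + 2.0) + (x - 1.0) ** 2
    ddf = lambda x: 2.0 * (x + 2.0) + 4.0 * (x - 1.0)

    for m in (1, 2):           # m = 1: Halley; m = 2: modified Halley matched to the double zero
        x = 2.0
        for k in range(6):
            x = halley_step(f, df, ddf, x, m)
        # m = 1 converges only linearly and is still about 1e-3 away after six steps;
        # m = 2 agrees with the double zero x = 1 to machine precision.
        print(m, x)

With m equal to the multiplicity of the zero, the fast convergence at the double zero is restored, which is what Chapter 2 proves.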
This inequality is at least as good as Kahan’s inequality derived by using the modified Laguerre iteration. We will also study the Halley iteration in Banach space and establish a convergence theorem with an optimal error bound similar in spirit to the Newton—Kantorovich theorem for Newton’s iteration. In the past two decades the so-called concurrent iterative methods for complex polynomial P(z) of degree n have been extensively studied. Two major concurrent iterative methods among them are the Durand-Kerner method: AH”=4“— PVW) k=qrznq I 1 n lc k ’ (29—4)) 1311.1?“ 1 and the Aberth method: 1 12,6“): 25k) — (:1) , k = 0,1,2,. .., PW”? ) _ n 1 for 2' = 1, 2, . . . , n. The attractive feature of Durand-Kerner’s method is its quadratic convergence with no derivative evaluations and Aberth’s method is remarkable for its cubic convergence with no second derivative evaluations. Furthermore, they both converge to all the n zeros of P(z) simultaneously when the polynomial has only simple zeros. In Chapter 3, we first present a new derivation of Durand-Kerner’s method by homotopy. The new derivation gives a geometric interpretation of this method. We then pr0pose the following two-step iterative scheme: (k+2) (k+1) (k+1) (k) n Z(_Ic) _ z(k+1) n Zita“) __ Zrk) _ .7 J t J 21 — Zr "‘ (21' — 3i ) Z _z(k+1) _ ng) _ H .z§k+1) _ zlkH) a 3 J '=117'¢1 i =11J¢z z k = O,1,2,..., for 2' = 1,2,...,n with given ZED),z§0),...,z,(,°) and P W 2,0) = 250) — n (20)) (0) , 2': 1,2,...,n. j=1,j;éi(zi - zj ) We will show that this two-step iterative scheme is equivalent to Durand-Kerner’s method. Notice that our new scheme requires no polynomial evaluations. Similarly, we will show that Aberth’s method is equivalent to: 1 1 1 d“) A+>=4)— * fl,, k=aLaH4 iszuqn n . 1 _ i=11j¢i 7-51—70. k _ where (k) d(tc)_ 1D(Z1) ‘ _ n 2(k_) z(k) ’ J'= =1j¢i(zt zj ) with no derivative evaluations. k=qLaH4 i=LZUWn In Chapter 4, we propose some new concurrent iterative methods with quadratic convergence rate without derivative evaluations. Those methods converge monoton- ically when they are used to approximate real zeros of a complex polynomial P(z) with degree n. The main idea 18 to define a pair of sequences {xik )}? -1 and {mm};1 by means of the following iterations: (k) 230°“) = 330‘). PC“ I ) k=012 1 1 i— k k k k 1 1 1 1° ' '1 Ilj=11($i)—$§)) ,-_ 2.11012. any; ’1 (k) ymuz=ym_P@1) k=012 1 z _ (k k k k 1 1 1 1' ' '1 :1 _y.-l.(‘ ’— P1111- 111(11.‘ >— y; ’1 and z' = 1, 2,. . .,n. We first establish the results of these concurrent iterations for the simple zeros of P(z). For the real multiple zeros as well as a cluster of zeros of P(z), we pr0pose the corresponding new modified concurrent iterative methods which converge monotonically. We also prove the non-overshoot properties and the quadratic convergence rate of the iterations. One of our main purposes in studying the new concurrent iterative methods with monotonic convergence is to solve the symmetric eigenvalue problems in parallel. Finding the eigenvalues of an n x n symmetric tridiagonal matrix A is equivalent to solving det[A - AI] = 0, a polynomial of a single variable /\ with real zeros and with degree n. In Chapter 5, we present an algorithm for the eigenvalue problem of symmetric tridiagonal matrices by using the following new concurrent iterations (k) x = we) _ "‘Pf’i ) k=0 1 2 t 1m? 
k n- , 1 1 1' ' '1 P’(xi )—) P(x(' ))mj=1,j¢im(k5_yk( 3 313' k() (k+1) _ (k)_ Py,( ) — yi _ yt P, (k) P (k) n- 1 k — 0,11 2 (yi )— (7:0 ) 1:11.79“ yi‘i—xil” t J where P(z) = 33:1(2 — Aj)”i , 2;":1723- = n and z' = 1,2,...,m. We show that the above concurrent iterations are of monotonic convergence for real multiple zeros of P(z) with non-overshoot properties for clusters of real zeros of P(z). Moreover, these iterations are cubically convergent. The algorithm is applied to extract the eigenvalues of symmetric tridiagonal matrices iteratively. With promising numerical results and its natural parallelism our algorithm is therefore an excellent candidate for advanced architectures. Chapter 2 The Halley Method 2. 1 Introduction The monotonically convergent Laguerre iteration and modified Laguerre iteration with cubic convergence rate for solving equations of polynomials with real zeros have a very important application in solving symmetric tridiagonal eigenvalue problems, which have only real eigenvalues. Comprehensive numerical results in this direction can be found in Li & Zeng [36] and K. Li, T.Y. Li & Z. Zeng [35]. In Laguerre’s iteration or modified Laguerre’s iteration, one needs to take a square root for each iteration step. In actual computation, because of computer round-off errors, the values inside the square root may become negative. To avoid this problem, in this chapter we discuss iterations called Halley’s iteration and modified Halley’s iteration without any square roots in their iteration formulae. We will show that Halley’s iteration and modified Halley’s iteration share the same properties as Laguerre’s iteration and modified Laguerre’s iteration when they are used to find the zeros of polynomials with real zeros. Like Newton’s iteration, Halley’s iteration has a very natural extension to systems of polynomial equations in several variables or even to nonlinear operators in real or complex Banach space. To the best of our knowledge, it is still not clear that the Laguerre iteration can be carried out in the same manner [30]. In Section 2.2 we apply Newton’s iteration to derive the modified Halley iteration, which is suitable for finding approximations to multiple roots of f (.12) = 0 with cubic convergence rate. In Section 2.3 we exhibit the non-overshoot properties of this iteration and obtain the monotonic convergence of Halley’s iteration for polynomials with real zeros as a by-product. In Section 2.4 a circle is given by modified Halley’s iteration that contains at least one zero of the polynomial. In Section 2.5 this iteration is converted to another iteration called two—sided modified Halley’s iteration. The non-overshoot properties of this iteration when it is used for polynomials with real zeros is investigated. In Section 2.6 we establish a convergence theorem for Halley’s iteration in Banach space with an optimal error bound similar to the Newton-Kantorovich Theorem for Newton’s iteration (see Gragg and Tapia [24]). From the optimal error bound we obtained, it can be easily seen that the convergence rate of Halley’s iteration in Banach space is cubic. 2.2 Derivation for the modified Halley iteration and its convergence rate Consider the problem of approximating a solution of a real or complex equation f (x) = 0. 
(2.1) The formula NM) 1 II f1“) " 2% $k+1=$k— 511(33):), k=0,1,2,..., (2.2) is known as Halley’s iteration, see Bateman [5], Frame [19, 20], Stewart [50], Kiss [33], Snyder [48], Safiev [44], Traub [51], Davies & Dawson [10], Brown [8], Hansen & Patrick [27], P0povski [42], Alefeld [3], Zheng [58], Gander [21], Scavo & Thoo [46]. E. Halley, who computed the orbit of Halley’s comet, discovered a special case of this formula in 1694. Scavo and Thoo [46] have shown that there are many ways to derive Halley’s iteration (2.2). Now rewrite H (:r) as .1 _ 1. _ 2f(x)f’(:v) H‘ ’ ‘ 20121112 — f(x)f”(x) ’ (2‘3) then , 1 1 f”’(:v) 3 f”(x) ’ H(:c)=—§(H(x)—:r) (f’($) —§(f'($)) ). (2.4) If 2 is a simple zero of f (1:), we can easily see, from (2.4) above, that H’ (z) = H” (z) = 0 and H ”’ (z) 95 0. Thus, Halley’s iteration (2.2) converges cubically in this case. However, if z is a zero of f (2:) with multiplicity m > 1, then f (3:) = (a: - z)"‘h(:1:) and h(z) 51$ 0. Thus, f'($) = (a? - 2)m’1¢($)1 f"($) = (x - Z)""2l(m - 1)¢(x) + (33 — Z)1/J'($)l where 111(55) = mh(:1:) + (a: — z)h’(:r). Substituting these into (2.3) yields H(:r) = a: — (a: — z)\II(x) where H’(z) = 1 — 111(11): — — = — (2.5) which implies O < H’ (z) < 1 for all m > 1. Thus, Halley’s iteration (2.2) converges only linearly for a multiple root of f(2:) = 0. Suppose f (J3) = (:r — z)"’h(2:) where z is a zero of f (as) with multiplicity m and h(z) aé 0. Let _ f(x) gm”) [movie Then gm1’+'”m 1(1) (26) where h(x) Now applying Newton’s iteration with multiplicity % to the equation gm(:r) = 0, we have 2m gm(a:k) = — - , k=0.1,2,.... 2.8 33"“ xk 1+ m g;,,(xk) ( ) Since [Uf’XJ I m 1' It 9:",(x) : f (SC) — 1+mfiOm f’($) , lf’($)]1+—m we obtain the following Modified Halley Iteration 2r xk+1=$k _ Hm I f( hi) 23:1 ” In. E Hm($k) (2'9) Tar—EH“) ' 5 f’(a:11) where _ f (a?) Hm(:r) —— a: — Lil’lfl($) _ 1 [m [”lxl . (2.10) 2m 2 f’(:r) When m = 1, this modified Halley’s iteration (2.9) is exactly the Halley iteration in (2.2). Hansen and Patrick [27] have derived a one-parameter family of iterations for finding the multiple zeros of f (x) . Their one-parameter family includes the modified Halley iteration (2.9) as a special case, but with our derivation, it is easier to show the cubic convergence of this iteration when it is used to approximate an m—fold root off(:r) =0. Theorem 2.2.1 Modified Halley’s iteration (2.9) converges cubically to an m-fold root of f(a:) = 0. Proof For f(x) = (J: — z)"’h(;r) with h(z) ¢ 0, we have 2m gm(:c) Hm = — - . (2:) 1 + m g;,,(:r) Now, 2m 2m gm(:r)g” (:10) HI : 1 _ ______ . m "’(x) 1 + m 1 + m [95(3)]2 and H" (x) = 2m , [911(2093106) + gm(av)g£’11(:t=)l[921013)]2 - 2911160911.(3c)l91$.(flv)l2 m 1 + m [QM-TH" where I _ 2m _ 2%} _ 11mm I 9.12:) — (1+m)¢ Jim-May - h($)[(M - 1)1/1($) + (II? - Z)¢’($)l with w(:1:) = Mh(:c) + (a: — z)h’(:1:). Thus, 2m M—m Hm(z)=1—\Il(z)=1—M+m= M+m' 2.3 Non-overshoot properties and monotonic con- vergence In this section we will show that the modified Halley iteration (2.9) can not skip over m zeros or a zero with multiplicity greater than or equal to m for polynomials with only real zeros. With this property the monotonic convergence of Halley’s iteration (2.2) is obtained as a by-product. Theorem 2.3.1 Let f be a polynomial with real zeros A1 < A2 < < An. Then, for any :1: E (/\,-, A141) and any positive integer m, we have A1< a: < H1(:1:) < H2(:c) < < Hm(.v) < A1...” , if i?) < 0 (2.12) 11 and )‘i—m+l < Hm(.’L‘) < Hm_1($) < . . . 
< H1((13) < 13 < Ai+1 , 2f ;’((:)) > O (2.13) with the convention A,- = —00 fori S 0 and )1,- = +00 fori Z n + 1. Proof Since f'($) _ n 1 f($) —j§=:1($-/\j)’ we have fig j=l( at): .f Rewrite the modified Halley iteration function as _ $_ 2mf($)f’($) Hm($) - (m1+1)[f’(x)]2-mf($)f"($) = 5” Fm(:c) where _(m+nW@W—mflflflw) Fm( ) — 2mf(x)f’($) , Fm($) __ [f’($)]2 +m[(fl(x))2 _ f($)f”($)] . f($) 277"t(f(-’17))’ f’(x) :1,I_1_,f_'_+f (f’)’- ff___’_’] 2 __m f f’ (f)2 _ 1. 11' L " 1 _ 2 Lm f+f’j:=:1(:r—/\J)2[ We have the following two cases: 1. If4%<0, then 1 1 —f’ -—f " 1 —Fm = — — _. (x) 2 [m f + I ;($_AJ)2[ 1’1 -f' :1.” 1 > 2 m f + f, 1:621”. A02] 11—f’i m 2 m f +f"(x—A..m)2 2 m (Since %(a+b)Z\/E) _ 1 _ Ai+m_x > 0. So 1 J:0, then 1 '1 f’ f " 1 me : _. _.—+—. U 2 _m f f’ jigs—2.12] 1 '1 f’ f " 1 > —. —.__.+._. ______ 2 _m f f, jzi—Zm+1(x-’\j)2:[ 1 [1 f’ f m J > _. _._+_. 2 1m f f, (x-Ai—m+1)2 1 1 > since —a+b >\/ch _ lx-A._m+1l ( 2( )_ ) _ l _ x-Ai-m+1 > 0. So 1 a: > Hm(:r) = a: — Fm(a:) > a: — (a: —- )1,_m+1) = /\,-_m+1 . By 1 and 2 above, and 6Hm(2‘) -2f (=17)(f’(23))3 am m2. [e41 - (mar — f(2:)f”(:v)]2 {>0 if4%<0, — ~1ei <0 11W, >0, the results in (2.12) and (2.13) follow. I] Remark 2.3.1 From Theorem 2.3.1, the modified Halley iteration (2.9) can not over- shoot as many as m zeros of polynomial f. 13 Remark 2.3.2 When f(:r) is a polynomial with zeros A10 2 gig-)2 is monotonically decreasing in any interval (Ai, Ai+1)- So %% is monotonically increasing in intervals (/\,-,/\2) and (AL/M“). Thus, fig is negative for all a: in (A2, A141) U (—00, A1) and positive for all a: in (A,, A1) U (An, +00). Therefore, by taking 'm = 1 in Theorem 2.3.1, we have the following monotonic convergence of Halley’s iteration (2. 2). Corollary 2.3.1 Let wk“ = H1(a:k), k = O, 1, 2, . . ., then for the polynomial f(a:) as in Theorem 2.2.2, the following two cases hold: Case 1 For any 330 E (A2, /\,-+1), we have x0’__.:_—_f_f": m "‘“__1__ (1(1)) .1 >0. F W-AV ,fl($-MV So 472% is negative for all J: in (A2, A) and positive for all a: in (A, A; +1). On the other hand, where 15 (m+nwoW—mnoflo> 2mm )f’(a=) f’ f . (1")? — m] 1 .E f+fl P 1 .'_ f_' L m "*m 1 _m f+f'(($— Al2+j=1($—’\j)2)]. NIH ton—I Hence, we have the following two cases: 1. Ifxoe(A; A), thenflifl 0. So Fm(xO) 1.3“floo foo m “m 31 2 km f(a:o)+f’($ o) ((No-A) +jZ—-:l($°")‘j)2)l if: H%) an) m ] 2 (m f($o) f'(~’L' 0) (EEO-A)2 1 LTD—Al 1 Ito—A >0. 16 Thus, 1 $0>$1=Hm($0)=$0—m>A. C] Remark 2.3.3 By Theorem 2.3.2, the modified Halley iteration can not overshoot a zero A with multiplicity greater than or equal to m. 2.4 Over-estimation of the errors by the modified Halley iteration For a polynomial f of degree n, the following two inequalities can be found in [29, 30]: Laguerre’s inequality: A zero 2 of f closest to an approximation :2: satisfies f (x) f’($) f x where N (at) = a: — 7,55%) is the Newton iteration function. lz—xl Sn|N(x)—a:| =n E NB (2.18) Kahan’s inequality: A zero 2 of f closest to an approximation :1: satisfies |z — 2| 3 (g (Lia) — 2| 5 LB (2.19) where Pilgrim/am[(-%‘3)2—n(%?)] which is the modified Laguerre iteration function for multiple zeros of f(:r). L,in(a:) :2 a: + (2.20) These inequalities estimate the errors of approximations to a zero 2 of a polynomial f of degree n. In this spirit, the following theorem provides a similar inequality for the modified Halley iteration. 
Theorem 2.4.1 A zero z of a polynomial f (2:) of degree n closest to an approxima- tion :1: satisfies f (1‘) f’($) where Hm is the modified Halley iteration function in (2.10). 2 I2 — 2| 3 \J" + "m. -|Hm(:z:) — 2| 2 HB (2.21) 2m 17 Remark 2.4.1 The above theorem ensures that for the modified Halley iteration $k+1 :Hm(xk)i k 20,1221'”: at least one zero of the polynomial f (2:) is contained in each circle n2 +nm flask) lz-xkl SJ lf’($k) 2m. ~|atk+1—a:k| , k=0,1,2,.... Proof of Theorem 2.4.1 Let f($) = cos — zoo — 22> - - - (a: — z.) where c is a complex constant. Then and (fl)2_ffn_ n 1 f2 —32:1($—ZJ)2 Since 1 Hm(:v) =2:- Fm(33) where 2 _ (m +1)[f’(x)] - mf(JI=)f"(93) Fm“) ’ 2mf($)f’(x) ’ 1 le($) —$l — IFm($)| Now, =1.'1.."12"__1_'.f 2 (m (,-=$Zj-) +§(._z,)2_ Iron) 11.“ 12" 1‘_f(.) S 2 [m (3212: zj) +£Ix—zj|2_ If'($) (1(21 n ) no - 2 m lw-zl2 lzv-ZP mas) _ n2+mn f(x) 1 — 2m f’($) lx-ZIQ’ 18 n2+nm. f(:z:) 1 '“Z‘25 2m we > IFmon' Therefore, n2+nm f(a:) C] It is not clear in general which one is the best among the three inequalities in (2.18), (2.19) and (2.21). For the following two examples, we use m = 1 in (2.19) and (2.21). For f(:z:) = 2:6 — 411:5 — 524 + 190:1:3 — 666232 + 944:1: — 600 with degree 6 and zeros —6, 2, 1 :l: i, 3:1:4i, let 2:0 2 0. Then, f(a:0) = —600, f’(:1:o) = 944, [Ll-(x0) — (Col = |Lf(a:o) — xol = 2.238 and |H1(a:o) - xol = 1.152. By Kahan’s inequality (2.19), LB = ,/%|L3;,(20) — xol : x/0(2.238) : 5.482. By inequality (2.21), n2+nm f(a: 0) HB = \l 2m f’(x o) “(W “3‘” 62+6 —600 _ (/2 944 (.1152) i 3.921. By Laguerre’s inequality (2.18), f($o) l—GOO NB: — = —— = .814. ”f'oo) 6 944 3 Thus, for this example, we have LB > H B > NB. For f(:z:) = 26 — 62:5 + 502:3 — 4.522 — 1082: + 108 with degree 6 and zeros l, —2, —,2 3, 3, 3, let 2:0: 0. We have f (2:0) —- 108, f’(a:o)= -108, |Lf(a:o) — $o| > |Lf(xo) — :tOI = 0.743 and [H1 (2:0) - atol = 0.706. By Kahan’ s 19 inequality (2.19), LB = (£2le20) — zo| 2 ¢6(0.743) :- 1.820. Inequality (2.21) gives n2+nm HB = \sz 'ffal) 62+6 108 _ (/2 _108 (.0706) m($0)— fol i 3.851. and by Laguerre’s inequality (2.18), 108 — 6 l —108 _ 6' B = n f’(:2:o) Thus, for this example, we have LB0~12 -mf(x)f”(:r) ' In the rest of this section, we proceed to show the non-overshoot properties of this two-sided iteration. 20 Theorem 2.5.1 Let f (11:) be a polynomial with real zeros A1 Ifi—gl > 0 and any positive integer m , we have A.- < a: < Hflx) < H;(a:) < < H:,’,(a:) < Ai+m (2.23) and A,_m+1 < Hg,(a:) < H;,_,(2:) < < Hf(a:) < 2: < A,“ (2.24) with the convention A,- z —oo fori g 0 and A,~ = +00 fori Z n + 1. Proof Since . |f(2)f’(2)| H :c =x:L- a "‘( ) (2.2.1) 000)” — more) we have 2H;(x)____aH.-.(x)_ |f(2=)f ’f(2=)|( (2:2)) 2“,. 3m 0m 2m2((8—;;— )(f’(2r)) snow )) Rewrite i 1 Hm(a:) =mi Fm(117) where .- _ (1+m)(f’(2' ))2—mf(x we) Fm” ‘ 2m|f(2f) '20)) z 0'00) +m[(f’(:c ))— f2( )(f"2 ()1 In 2) 2m(f(2)) r02) :1 if’(2)2+(f’) -f f” ,fl 2 m f(a:) (f)2 f'($) _ _1_ 1m) f(2') " 1 " 2 (m f($) + f’(2:) Eva—2.)?) Since f (it) f’($) Fm(:r) > %(%lf—(£Z + f (x) if 1 j=i+1 (a: ‘ Ajlz 21 1 1 flat) f(:r). m ) > 2(mlf(=v) + f’(2=) (av-Aim)2 2 m (since -;-(a+b)Z\/db) 1 = 2:75”? we have 1 $ 0. Since 1 _1_ f’(2=) f(~2) . " 1 Fm($) 2 (m f($) + f,($) j=i;n+l (113 _ K02) 1 _1 f’(:v) f(:v) , m ) > 2 (m f($) + f’(-’E) (27 - ’\i—m+l)2 2 '1: _ Ali—m+1l (since $(a+b) Z M) 1 = x—Ai—m+l >0, we have a: > H;(a:) = a: — F (11:) > a: — (a: — A,-_m+1) = A,-_m+1 . Thus, (2.24) is true since ”—5359 < O. The following corollary is a direct consequence of Theorem 2.5.1 . 
Corollary 2.5.1 Let it?“ = Hflxk), k = 0,1,2,.. . Then, for any 2:0 6 (Ai, A1“) with :l: a: +oo>|-f7(—kT)|>0, k=0,1,2,..., f($k) wehave cubically ._ _ _ + + + cubically A, <—— $kyA. (2.25) IfA f<2o> _:n_ "‘"‘_1_ — 2 (m f($o) + f’($0) ((1170 —A)2 + 3:21 (.1170 — Aj)2)) 1 i f'($0) f($0) . m > 2 (m f($0) + f’($o) (mo-AV) > 1 . — lxo '- /\| If .100 < A , then Fm($0) > [\_$ >0 So xo A , then Fm(xO) > (Bo—)1 So $0>$1_=Hr;($0)=$0 >$0—($0—/\)=/\. _ _1__ Fm($0) 23 2.6 Halley’s iteration in real or complex Banach space One of the basic results in numerical analysis is the Newton-Kantorovich Theorem which provides a sufficient condition that guarantees the local convergence of the Newton iteration. This classical result can be found in Schroder [47], Janko [28], Safiev [45], Kantorovich and Akilov [31], Dbring [14], Ortega and Rheinboldt [38] and Ostrowski [39]. More recent work in this direction includes Gragg and Tapia [24], Rall [43], Miel [37] and Traub and Wozniakowski [52]. In this section we first derive Hal- ley’s iteration in Banach space followed by a Newton-Kantorovich type convergence theorem for Halley’s iteration in Banach space. We show that the convergence rate for Halley’s iteration in Banach space is cubic, along with an optimal error estimation for the iteration. 2.6.1 Derivation for Halley’s iteration in Banach space Let X and Y be Banach spaces and D C X be a convex subset of X. Let F : D C X —+ Y be a twice Frechet-difl‘erentiable nonlinear operator on D. If F(X*) = 0, then 0 = F(X") : F(X,,) + F'(z,,)(x* — X.) + éF”(Xk)(X* — Xk)(X* — X.) = F(X,,) + F'(X,.){1 + -:—[F’(Xk)]‘1F”(Xk)(X* — X.)} (X‘ — Xk). So X“ : Xk - {I + %[F’(Xk)]“lF”(Xk)(X* — ch)}_l [F’(Xk)]‘1F(Xk). Now replace X * — X)c by Newton’s correction -[F’(Xk)]‘1F(Xk) on the right-hand side of the above equation and call the resulting right-hand side X k+1, we have 1 I I — —l — Xk+1 = Xk-{I — §[F'(Xk)l_1F'(Xk)[F (Xk)l 1“Xi-l] [F’(X,,)] lF(Xk) (227) which we call Halley’s Iteration in Banach Space . let Pk = [F'(Xk)l-l 24 and 1 ,, " e. = {I — -2-I‘kF (X.)r.F(X).)} , then the above Halley iteration in Banach space can be rewritten as X.+1 = X). — ekrkmxk), k = 0, 1, 2, . . .. (2.28) 2.6.2 Convergence of Halley’s iteration in Banach space Let 5(Xon‘) = {X | IIX - Xoll < 7'} and 5(X0fl‘) = {X I IIX -Xo|| S 7"}- The following convergence theorem for Halley’s iteration in Banach space is proved by Safiev [44]. Theorem (Safiev) Let F : D C X -—) Y be a three times Frechet-differentiable nonlinear operator with ||F”(X)|| g M and ||F”’ (X )|| 3 N on the convex set D. Suppose that X0 6 D satisfies the following conditions: ' “PO” 5 Bo and HF(X0)H S 50; 0 hO : M3360 S ——1— With ’70 = NBo—IM_2; 2+7o o S(Xo, t") C D where t‘ is the smallest positive root of the following equation Y has a second Frechet-derivative at each point of the convex set D, then for any X, Y E D, llF’(Y) - F’(X)|| S IIY - Xlloiggl IIF”(X +t(Y - X))||- 27 The proofs for Lemma 2.6.2 and Lemma 2.6.3 can be found in Dieudonné [11] and Lang [34]. Proof of Theorem 2.6.1 We divide the proof into three parts: Part 1. The error bound in (2.29) is optimal. Let K 2 —1 —1 ¢(t) = 3L —BO t+BO 770 where K, B0, 770 are the constants as in Theorem 2.6.1. Then 450) = go — r00 — m) and ¢’(t) = €10 — r.) + (t — r2)] and ¢"(t) = K where r1 = (1 + Home and r2 = (1 + 031)?)0 with 0 < r1 _<_ r2. Let to = 0 and apply the Halley iteration to the scalar polynomial ¢(t). We have (Wk) 1 ¢ t ¢ll t 1 (Wk) ‘ '2 in.) 
k By Corollary 2.3.1, the sequence {tk} converges to r1 monotonically, and by (2.33), (tic —7‘1)(tlc —T2) tk+1=tk — k = 0, 1, 2, . . .. (2.33) t = t — " 2 - n + 2 - 22 - L—2‘——2i::::.:::::: 2 tie _ (2k — 7‘1)(tk — 7‘2)[(tk — T1) + (t;c - r2)] (tk - 7‘1)2 + (L); — 1’2)2 + (tk - T1)(tk - 7‘2) . So (t), — ms t — r = . 2.34 k“ 1 (ti — m2 + (it)c — 22)2 + (t)c —— 20(2). — 22) ( ) Similarly, t _ 3 t... — r2 ( ’2 ’2) (2.35) : (tk — 7‘1)2 '1‘ (tk — T2)2 '1' (tk — T1)(tk — 7‘2) . Let 0). = fl, then by (2.34) and (2.35), 9k+1=02, k=0,1,2,.... Thus 1"1 — tk ._ _ 3 _ 32 _ _ 3’c —0k_6k—1_0k—2_'°°_00' Tg—Tl‘l'Tl—tk 28 So, if ho < i- (i.e., 90 < 1), then T-t — lg:— - ——03:—0_1—0 236 l k "‘ 1-08k(r2 T1)—1_68k(0 Olllo (' ) (1 + 00)77068k-1 2 37 23k—10i . ( ' ) i=0 0 If ho = % (i.e., 00 = 1), then r2 = r1. By (2.34), 1 (1 + 90)770 3’c ' (T1 _ to) = 3k Tl—tk+l=§(rl-tk)='”: So the equation (2.37) is still true for 60 = 1 and holds for all ho g %. Therefore, the bound (2.29) in Theorem 2.6.1 is optimal. Part 2. The sequence {tk} in Part 1 is a majorizing sequence for {X k} generated by Halley ’s iteration (2.28). We shall use mathematical induction to prove: ”F(Xklll S (Wk); (238) ”1‘2” S -¢’(tkl_l; (2-39) IIlel s 1 _ 2.1 81.1; (2.40) 2 ¢'(tk)2 lleC+1-— Xkll S tk+1— t), (241) where k = 0, 1, 2, . . ., and ¢(t) is the same scalar function as defined in Part 1. When k = 0, ||F($o)|| S Balm) = ¢(to) and “Fall S Bo = -¢'(to)'1- Since [[éFoF"(Xo)I‘oF(Xo) S $245832 = éMBoflo S :“KBOUO < 1, we have, S 1 ,, _ neon = [[11 — 52°F (Xo)roF(Xo)1 2 29 So, IIX1— Xoll = ||90FoF(Xo)|| S H90” ° llPoll ' l|F(Xo)|| _ ¢§t02 ¢’(to) 1 $0 2 "‘ §M¢'(t3)2 _ ¢ to ¢’(to) (pg: 2 l-inQV Thus, (2.38)-(2.41) are true for k = 0. Now suppose that (2.38)—(2.41) are true for all k S n. Then, =t1—t0. n HXn+1— X0“ = ||Z(Xk+1— Xk)“ k=0 n S 203ch — tk) =tn+1 — to = tn+1 S T1~ 1:20 So, Xn+l E S(Xo,1‘1). By Lemma 2.6.3, lllF'(Xo)l‘1[F'(Xo) - F’(Xn+1)l|| l/\ BoMllXo - Xn+1ll l/\ BoM(tn+1 — to) BOKtrH-l S BoKT1 _l_ _ 1 _ ZKflo Bo V 33 Bo BOX 1. K < On the other hand, by Lemma 2.6.2 and F’(X,,+1) = F'(X0) " [F(Xo) " F,(Xn+l)] = F'(Xo){1 “ [F’(Xoll-llF’(X0) — F'(Xn+1)l}, we have, lanHH = lllF,(Xn+1)]—lll = NH - [F’(Xo)l‘1[F'(Xo) - F'(Xn+1)l}’llF'(Xo)l‘lll Bo 1 s = —-—— = —¢’ tn ‘1. 1_ BOKtn+l Bio “ Ktn+1 ( +1) 30 So (2.39) is true for k = n + 1. For (2.38), let F. = F(Xn) + F’(X,,)(X — X.) — éF”(Xn)[F’(X,,)]‘1F(Xn)(X — X"). (2.42) Then, X. = F(X.) + F'(X.)oo exists with lH‘-Xm Sin—u 03“-1 = 1+0 -———+, kquzrn, and X. E S(X0,T1). Part 3. X" is the unique solution of F(X) = 0 in 5(Xo,(1+ 90l770) U 5(X0, (1 + 90—5770)- 33 If X E S(Xo, n) U S(Xo, T2) is another solution of F(X) = 0, then letting X = X in (2.43) and (2.44), we have X — Xn+1 = enFnP: where .. _ 1 _ _ _ F. = {F(X.) + F'(X.)(X — X.) + 5F"(X.)(X — X.)(X - X.) — F(X)} 1 II I — " I " " + 5F (anlF (Xn)l 1{F(X) - F(Xn) — F (Xn)(X - Xn)}(X - an- (2-47) Similarly, 'Ifi _ tn 3 I tn 2 r2 —t,,+1 = 4 (T; ) / (“fl ”2 . (2.48) 1- T2'¢(tn)/ (Cb “it” By (2.47), ~ 'N M2 , _ _ IIFnll _<_ —6- + -4—||[F(X.)l 1”] IIX —X.||3 FN M2 ] _ < —— — — X — X. 3 “ 16 4¢’(t.) H H F N , M2 , ‘ _ = (- ("at (t.) + —4-) /¢ a.) IIX — an|3 . N M2 I : _ s — (--6-¢'(t.) + —4—-) /¢ a.) IIX — X.)(" _ '_ a): 2 1 - ; 3 _ _ (380 + M ) 4¢'(t.)l ”X anl K2 1 - < ———-— X—X. 3. _ 4 .22.)“ H So, IIX-Xnflll : llenrnfinll K2 1 7,—n—7 _ _ ,_ ,2" ($.13. IIX — anl3- (2.49) 2 we.» If r1 < Q, then X E S(X0,r2). So 36 6 [0, 1) such that IIX—X0” 267'2 =5(T2—t0). (2.50) If r1 = r2, then X E S(Xo,r1). So 35 6 [0,1] such that (2.50) holds. 
We claim that IIX—XkH 363*(r2—tk), k=0,1,2,.... (2.51) 34 In fact, this is true for k = 0 by (2.50). Suppose it is true for all k S n, then by (2.48) and (2.49), we have 53 1 4 (<1»~’(tn))2 _ % ¢(t.) (¢’(tn)) n l = 53 + (7.2 _ tn+1). |/\ llX_Xn+lll 63n+l (7‘2 _ tn)3 So, (2.51) is true. Now let k —> 00. If n < r2, then 6 < 1 implies ||X — X*|| = 0. When r1 = r2, then tk —-) r2 and 5 S 1 imply IIX — X"|| = 0. Thus, X = X“. D Chapter 3 The Durand-Kerner Method and Aberth Method 3. 1 Introduction Durand-Kerner’s method and Aberth’s method are two major concurrent iterative methods for finding all zeros of a polynomial simultaneously (see Durand [16], Dochev [12], Kerner [32], Yamamoto [57], Petkovié [41], Borsch-Supan [6], Ehrlich [17], Aberth [1], Gargantini & Henrici [22], Braess & Hadeler [7], Alefeld & Herzberger [2], Ya- mamoto, et a1. [57] and Petkovic’ [41] for references). In this chapter, we first present a new derivation of Durand-Kerner’s method by homotopy. The new derivation gives a geometric interpretation of Durand-Kerner’s method. Then, a two-step iterative scheme is proposed which is equivalent to Durand-Kerner’s method but no evaluation of the polynomial is necessary for each iteration after the first step. For Aberth’s method, an equivalent form is proposed which requires no evaluation of the first derivative for each iteration. We then present two r-step Aberth’s methods, both of them having 27' + 1 as their rate of convergence. 35 36 3.2 New derivation of Durand-Kerner’s method by homotopy Let P(:r) be a monic polynomial of degree n with n distinct zeros (real or complex) # t * x1,z2,...,xn or n P(z) = 11(1): — 517;). (3.1) i=1 Durand [16] and Kerner [32] have independently pr0posed the following concurrent iterative method with quadratically convergent rate for finding all zeros of the poly- nomial in (3.1) simultaneously: (k) x§k+ll=x§’°’— F(x'k) k, i=1,2,...,n; k=0,1,2,.... (3.2) n ($()_$§)) j=ld¢i 1 Durand-Kerner’s method always converges in practice, see Fraigniaud [18], although there is no rigorous proof of it for n _>_ 3 . Many authors conjecture that Durand- Kerner’s method converges for almost all starting points {z§0)}?___1 in C". One of the main interests of Durand-Kerner’s method lies in its inherent parallelism. For 20(10), 139), . . . ,sz’), let Poor) = no — mg”) (3.3) j=1 and write H(:c, t) = (1 — t)Po(z) + tP(:c). (3.4) If there are n smooth curves 23,-(t), i = 1, 2, . . . , n, which connect the initial approxi- mations 239)), zgo), . . . ,zgol to the zeros off, 23;, . . . ,x; of P(:1:) respectively, then H(a:,~(t),t) E (1 - t)Po(:1:,-(t)) + tP(:z:,(t)) = 0, i = 1, 2, . . .,n. (3.5) Differentiating H (rm-(t), t) yields Hx-x£(t)+Ht=0, i=1,2...,n, (3.6) where Hz and H. are the partial derivatives of H (2:, t) with respect to a: and t respec- tively. From (3.6), we have the following initial value problem { $:(t) = -H¢[1‘i]t[,t) H3(:ci(t),t) ’ 13,:(0) = $5.") 37 for i = 1,2, . . .,n. By (3.5), we have P( i(t))-P i t 17W) (1—2)0P:(x.(t))+(t:9'((2(t)) 1 13(0) = $50) for i = 1,2,...,n and $.(1)=x;‘, i=1,2,...,n. Now we use Euler’s one-step formula to approximate 23,-(1), that is, where m) _ Pool”) — Pot”) _ Pct“) ' P5012”) PM”) Denote $9) = $.10) + 22(0). then (0) P . 3(1) ___ [0) _ (0320)), = 1,2, . . ”n, P6($i ) where P6(a:[0)) 2: ?:I,#,-(:r[0) — 1:59)). In general, we have P V" 22?“) = 25.“ — (“SQ”), i = 1,2, . . . ,n; k = 0,1,2, . . ., (3.7) P1200.- ) where Fury”) = gzljj¢,(x§"’ — $53)). This is exactly the Durand-Kerner method in (3.2). 
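To summarize the scheme just derived in executable form, the following sketch is a direct, serial, total-step implementation of iteration (3.2) in Python. The starting points (powers of 0.4 + 0.9i, a common heuristic) and the example cubic are illustrative assumptions of this sketch, not choices made in the dissertation.

    import numpy as np

    def durand_kerner(coeffs, iters=50):
        """Total-step Durand-Kerner iteration (3.2) for a monic polynomial.

        coeffs: coefficients of the monic polynomial, highest degree first.
        Returns simultaneous approximations to all n zeros.
        """
        n = len(coeffs) - 1
        # Distinct, non-real starting points (an assumption of this sketch).
        x = (0.4 + 0.9j) ** np.arange(1, n + 1)
        for _ in range(iters):
            x_new = x.copy()
            for i in range(n):
                num = np.polyval(coeffs, x[i])
                den = np.prod(x[i] - np.delete(x, i))   # product over j != i of (x_i - x_j)
                x_new[i] = x[i] - num / den
            x = x_new
        return x

    # Example: P(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
    print(np.sort_complex(durand_kerner([1.0, -6.0, 11.0, -6.0])))

Note that the denominator is exactly the product over j != i of (x_i - x_j) appearing in (3.2), so no derivative of P is ever evaluated.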
3.3 Two-step iterative scheme of Durand-Kerner’s method To use the Durand-Kerner method in (3.2), we must evaluate P(a:[k)) at each iterative step. In this section we propose the following two—step iterative scheme: (k+2) (k+1) (1+1) (1:) " 2:“) — 2‘7““) 121 31““) _ 270°) xi = a2,- — (x,- — xi ) Z J J , J (3.8) '=1.j¢i 29‘“) " $5“ '=1,j¢i mile“) - 171k“) 38 for i z 1, 2, . . . , n; k = 0,1,2, . . ., where as? 1, 239),. . . ,2? are initial approximations and (0) <1) _ (0)_ P($ ) -_ x, —:r, n ($£0)_ 2:10)), i—1,2,...,n J‘= —1.J'¢i$ 11:] We will show, in the following, that this iterative scheme is equivalent to the Durand-Kerner method in (3.2). With this scheme, the evaluation of P(a:[k)) for each iteration becomes unnecessary after the first step in (3.8). This iterative scheme is very useful when it is applied to solve the eigenvalue problem by the homotopy method where the evaluation of the polynomial is quite time-consuming. Theorem 3.3.1 The two-step iterative scheme in ( 3. 8) is equivalent to the Durand- Kerner method in (3.2). Proof By Lagrange’s interpolation formula over the nodes 1: k k k (.g >,p(.g ))),,p(..g ’)),... (x. “2 ,.p( 5.2)) and Durand-Kerner’s iteration in (3.2), we have n _ (k) n (I) _ 11k) F(SII) = 2: 13(1), ) H W ‘1' ((13 — .TJ-k (=1 _ J'= —1.J'¢l 37: J 1 n [- (k) (k+1) n $(k)) n (=1 L j=lJ¢1g)=dg>11(.g>_.;>) J'=—1.J‘¢i and k n k n n k k n F(X)) = Z .15) 2 I1 (mp—.41). 11 1:1 r=l,r;él j=1,j7’:1,r j=1,j;éi n n n n k k k k k k = d.“ 2 H (.g>_.§.>)+ z 191139—22) r—l,r;éi j 1,3311; l:1,l¢i j=1,j;£l,i + fl (m(k1— 3511)) J: -1.J'¢i n n n n k k k k k k = 11> : n (.g>_.§.>)+ : .1) 11 (xv—mg.) 1:1,1¢1 j=l,j;éi,l 1:1,1¢1 j:1,j¢l,i n k k + H (mp—cry). 1:111?“ So P'(x§"))_ " 1 +1 2": d)“ +1 P031“) 1:11,..2: $1M —$1k) d1“ 1:11.11 31k)-$1k) dye), or P'(a;§")) i 1 _ 1 1 22: (1)") H171“) 1=1.t¢i $1M —$lk) (1.-k) 111211-731) 951“ Thus, 1 _ do“) 1),—(1")— (to) dunk W12") -Z1=1,1¢1xlk _1-21=1,l¢i and we have proved the assertion of the theorem. E] 3.5 R—step Aberth’s methods In this section we present derivations of Aberth’s method and its equivalent form n (3.10). Those derivations provide motivations for considering the corresponding r-step Aberth’s methods with (2r + 1)-order convergence rate. 41 P’( ) _ i 1 P(a:) j___1 a: — x}. Replacing :11 by :12?) yields k n P'(:c§ ’) _ Z 1 + 1 P (k ‘" k) (k) .. ( 1 ) 1=13¢i 33] 931 It follows that a. k 1 x, = mg 1 _ PM”) (3.14) 13033.“) _ ;‘=1’j¢i m(kjl—x; for 1' = 1, 2,.. ,.n Let x; — —§k:1: )for j ¢ 1' and denote the right hand side by 119‘“), we obtain the Aberth method given in (3.9). The formulation of the equation in (3.14) may be considered as a. fixed-point problem which suggests the consideration of the following r-step Aberth’s method: 12,0 k y: 1 = 11>, k,l+1 k 1% )= dl-Wam ”1 1 "=°J““”‘1’ (aw) W“ j=l.j¢iW I 3 J \ xgk-H) : yflc, ,r) fori= 1,2,...,n; k=0,1,2,.... It can easily be shown by mathematical induction that the order of convergence of the r-step Aberth’s method in (3.15) is 21' + 1 by deriving n k,ll * k 1.1 k,l 1. 11+) —x.=0((x£’—x.)2.z _(yj- has») fori=1,2,...,n; l=0,1,...,r—1. Comparing to Aberth’s method in (3.9), the r-step Aberth’s method in (3.15) requires no extra work on evaluating P’ (2:90) and P(:c§k)) while the same formula in (3.15) is reused r times with the same P’(:1:§k)) and P0139”). To derive the equivalent form in (3.10) of the Aberth method, recall that, 11 d0“) P(:1:)=[1— Z (1) __EJfI,(_ $11)) 1:137; 42 in (3.12). Let :1: = 51;, 1': 1,2, . . . ,n. With the assumption 23$“ 75 21:2“, 1': 1,2, . 
..,n, we have n d(-k) J _ 1—Z——(1)_ , _0, or k 11$“) _ 1 n d;- ) m(.k)_$'l‘— —-2 (k)... '9‘. 1 z J=1»J¢1 III] :17, Thus, k doc) x:=x§)— ' d“) , i=1,2,...,n. (3.16) 1 _ i=1,j¢i7‘1—,jk _x; Let 1:: = my“) and denote the right hand side of (3.16) by 339‘“), we achieve the equivalent form of the Aberth method in (3.10). One may also consider the formulation in the equation in (3.16) as a fixed-point problem. This suggests the consideration of the following r-step Aberth’s method: 1,0 I: y.‘ l = 22$), k,l+1 k 11:") y: ) = $5)- .. 1(1) 11:0111“"’"’1’ (3.17) 1’2j=1,j¢im J 1 x£k+l) : ygkm) for 1' = 1,2, . . .,n; k = O,1,2,. .. , where 11$") is the same as in (3.11). Similarly, it can easily be shown that the order of convergence of the r-step Aberth’s method in (3.17) is Zr + 1. Compared with the equivalent form of the Aberth method in (3.10), the r-step Aberth method in (3.17) requires no extra work in evaluating 11$“ but reuses the same formula in (3.17) 1‘ times with the same 11$“). Chapter 4 A New Method with a Quadratic Convergence Rate for Simultaneously Finding Polynomial Zeros 4.1 Introduction Durand-Kerner’s method is an iterative algorithm for finding all zeros of a monic polynomial simultaneously with quadratic convergence rate. It was first used by Weierstrass [55] and was later pr0posed independently by Durand [16], Dochev [12] and Kerner [32] (see also Yamamoto [57] and Petkovié [41]). There are many al- gorithms of Durand-Kerner-type with higher order convergence rate, which can be found in Borsch-Supan [6], Ehrlich [17], Aberth [1], Gargantini & Henrici [22], Braess & Hadeler [7], Alefeld & Herzberger [2], Yamamoto, et a1. [57] and Petkovic' [41]. However, none of the methods mentioned above converges monotonically for finding real zeros of polynomials. In this chapter we propose new iterative methods which converge monotonically for both simple and multiple real zeros of polynomials. Our new iterative methods converge quadratically, and most importantly, they require no 43 44 evaluation of derivatives. 4.2 A new method with quadratic convergence rate and monotonic convergence Consider a monic polynomial of degree n 2 3, n F(Z) = n(z - *1), (4-1) 1:1 with simple real zeros A1- 11> when 6:0) aé 0, and 6E1) = 0 for (2(0) 2 0. So, g(t)) -/\j n (0) "j/\ (1) _ (0) yi 1‘0 A1 — A,| < D, w #3, Vic. 46 (k) 5(k+1) < €(lc) N_1 1 — 8% D—l (k) 2 Dn—Z (k) 3 s (a 101—116,.-. +0((e 1). Similarly, we can obtain 011—2 59“) g (e)2(n — 1) d“ +0((e(’°))3). Thus, Dir-2 e.<_1 +o( Ai, then 39‘“) = 32$"); (4.10) (k) (k+1) (k) W ) I—Ij:11(cg (') 17(3))1-1 n=i+1(ci ()~ y; )) (2) If cg“ < A.» then I: when) _ C(k) _ F(Cl )) , z _ i ,'_ k k k ’ nj=11°Ai, i=1,...,m. (4.19) (b) The sequence {y£k)}fi°:o generated by (4.17) converges to A,- monotonically. That is, A.- 'i—°° y.“ < 34‘“) < < 4151) < yfo), i=1,...,m. (420) Proof Let a?" = 1,- — 4g“ and 6E“ = yE") — 1,- for i = 1, . . . , m. It follows from (4.16), i-l (k) "j m (k) W k 1 k k n' at- — )" ___x- - A. j: I j j=i+1 : y] :24 (k) "1‘ m (1:) “1 (k) "i :12- —)\j x,- —/\j 51' 1" H( (3.) "(6) H ( a)“ (4)) ' (4'21) i=1 171' — 173' j=‘+1 $4 " y,- For k = 0, by (4.18), 4-1 (0) "1' m (0) "1‘ (1)_ <0) ..,- 1“ -/\j $.- -)\j (0) 0<5i ‘5‘ 1" H( (3) (0)) H ((0) "'('0)) <€i i=1 1* “931' '-'+1 1’ ‘91 I when 69)) 76 0, and a?) = O for EEO) 2: 0. So, xfi") < 2:?) < A,- or 33:0) = as?) = A,-. 49 Similarly, by (4.17), we have i—l (k) "j m (k) "j k 1 k k n. y. —A- 2. 
—A- 55+) = 6i)‘(y§)"‘*){JH(‘<—ZT_—<%7) H (W) J=1 yi 17; j=z+l yi " yj 1—1 (It) "3' m (k) "j _ (k) n, y, — /\j i ’ ’\j _‘h 1’ 11(“L-“J 1H (au_aJ ”3” 3:1 y: 27] J=3+1 t y] and by (4.18), if (SEC) 75 0, and 6E” = 0 when (SEC) = 0. So, 4 (AW—4.)— n' I )‘(nilln- F1 :12, — All) nj fi :13,- — Agni) ”1' 3 x. _ i g ___—___. z j=l 11:,- - $1 j=i+1 2:, — yJ' 4;— A ‘S I 9:. J P H a ll ,.. I—i cc 5‘? >4 “’73 V 3 W. 41:15 A ‘3 ‘3 . l | 3:. '1‘: '3. V 3 >0. and it is easy to check that 64.44:.) > 0 and 63.44.) 0. an,- on,- < Therefore, (4.24) and (4.25) follow. [:1 Theorem 4.3.3 The convergence rates of the modified iterations in (4.16) and (4.17) are both quadratic for an ni-fold zero A,- of P(z). Proof It is similar to the proof of Theorem 4.2.2. B 4.4 Single-step methods We call the iterations in (4.3) and (4.4) for simple zeros as well as the modified iterations in (4.16) and (4.17) for multiple zeros the Total-step Methods. In this section we consider their corresponding Single-step Methods which have the same convergence properties, but with faster convergence speed. 51 First consider the case of simple zeros. For polynomial P(z), we define the fol- lowing single-step iterations (denoted by .S'SIMS): (k) (k+1) (k) F(xi ) (II,- = 1‘,- — i— k k 1 k k , (4.26) FINE: ) — 37g + )) ?=i+1($l )- y;- )l P W d“”==yw- w') Men -_ k k 1 k k , H}=‘1(y§ ’ — x5- + ’)n;-;.-..(yE ’— yE ’) where k = O,1,2,..., and i = 1,2,...,n. As in Theorem 4.2.1, one can prove the following monotonic convergence theorem. Theorem 4.4.1 If the initial values {xEo)}?:1 and {y§0)}?:1 satisfy ml“) 3 A. s yE‘” < at") 3 A2 s 219” < < 2:5?) s A. s yEP’. (4.28) then (a) The sequence {x§k)},‘:‘;o generated by (4.26) converges to A,- monotonically. That is, mg“) < 225-” < < 4:“ "——°>° A,, 2': 1,...,n. (4.29) b The sequence ylk) °°= generated by 4.27 converges to A,- monotonically. That i I: 0 is, /\i li—oo Elk) < Ellie—1) 1. <.n<¢”<¢% t=Luqn as» For the convergence speed of the single-step iterations in (4.26) and (4.27), we will use the R-order of convergence concept given by Ortega and Rheinboldt (see [38]). Theorem 4.4.2 Let 0,, > 1 be the unique positive root of qn(0) = a" — or — 1 = 0. Then for the R-order of (SSIMS), we have 03((SSIMS), A.) 2 1 + a... Proof Let elk) = A,- — (ES-k) and 6,“) = ygk) — A,- for i = 1, . . . , n. Then, by (4.26), we have N gym-1) = €(k)_2 D2 52 where 111- n k k k 1(.): k k N, : n(x()_ $(+1)) H($§)_y )) —H(17¢()— , HWP‘M) )J'=i+1 j1= )j=i+1 = [(xE") — A) +E-‘°+”) 1(x.E’°’—A.)—6E’°>)- .13. 1.1.. 2 _ k k —1—I:($z (' )— 1' )fi (33: ) A3) =1 j=i+l i-l n n k 1 k k k = 21 leE+’II) j=l j=i+1 Since A1, . . . , A,, are distinct, by (4.28), 3d and D such that d<|$§k)—$§k+l)lalxlk)- y§'°’| |y§k)- 41E“) lylk)- x§-'°+”| 0 such that E(n+1) < 6(1) N_2 I — 8i D—2 : 1 n s 0159‘) Lzegk+l>+ 2; 5,00]. '=l j=i+l Similary, 302 > 0 such that j: j=i+1 Let C = max{C1,Cg} and fl = (n —1)C; 53 nEk)- —BeEk) and pEk) 256,“), i=1,.. Then, 1 pi-l k 1 k (k 1 k 77E+)S 1775)Zn+)+2 ( n— '=1J=i+1 (k+l) 1 (k) ‘1 (k+l) (k p.- S ——1p.- 217,- + 2p 72— _j: —1 j: —i+1 Since k—mo we may assume there is a constant 17 > 0 such that nEO> [’72] Z [73] Z Z l7n|~ (432) Let 0'1] A =(a Elf”), k=1,2,..., denote the kth power of A. Since A is an irreducible matrix, A" > 0 for k 2 kg (4.33) (see [54], p.41). It can be shown that (4+1) lim ‘3 a = 71, If e > 0 is given, then a(k+1)/a(k)> pA( )— e for k 2 16(6) 2 (Co, 01' aE’.‘+1)>a[p(A A)—e], 1', J'=1,2,...,n, where a- — new?” > 0 Therefore, 05?”) > aE'-°+1)[p(A) - 6] 2 CAI/0(4) - 612. 
and, in general, a(k+r)>a[p(A A)—€]r, i, j=1,2,.”,n, T:1,2,.... (434) Now, combining (4.31) and (4.34) into a single inequality m(k+r) : Ak+rm(0) = (f: a(k+r)) 2 (Hal/AM )-€l'e) 55 where e = (6,), e, = 1, i = 1,2, . . .,n, we obtain m(k+r) 771047+?) S 77 . O and we immediately have 03((351M5), A.) Z n(A)- (4.35) We now consider the characteristic polynomial qn('y) of A: qnh) = (7-1)” — (7— 1) — 1. 56 If we set 0 = 7 — 1, then simply substituting this in the polynomial above yields q,,(0) = 0" — 0 — 1. Since q,,(1) = —1 and q,,(2) > 0 for n 2 2, there is a root 0,, with 1 < 0,, < 2, and by Descartes’ rule of signs, there can be no other positive root of q,,(0). Thus, for the spectral radius p(A) of A we have p(A) : 1 + 0n) and the combination with (4.35) gives OR((SSIMS), A,) 2 1+ 0,, which completes the proof. C] Now consider the multiple~zero case. For the polynomial in (4.14), we define the following single-step iterations (denoted by SSIMM): (k) (k+1) (k) ,.. P (551' ) _ (k) :12, =x,+* -_ 1: k1. k k .=Cni($i); \J now—4+)": LAW—11E)": (4.36) (k) (k+1) (k) ,.. F(yi ) _ (k) 311' : yi _ l -_ k k 1 . k k n. =Dn,(yi )1 [ ;:. 1 be the unique positive root of qm(0) = 0'" — 0 - 1 = 0. Then for the R-order of (SSIMM), we have 03((SSIMM),A,-)21+ 0m. Proof Let o,,=g§, eE")=A,— xEk) and oE’E’=yE’°’— A, for 2', j=1,...,n. Then, by (4.36), we have k .. m k .. 8Ek+1) _— elk) 1 _ j= =ll($(' ) ”\jla” j=i+1(’\j " 17‘ )la” 1 ‘ lc k 1 .. k k .. j: ;11($ ( )_ ZL‘( + ))a,JJ m2i+1(yJ ( )_ $5 ))a,J (k)N3 E“ _ 58 Where i-l m D3 = n(xlk) _ $(Ik+1))a1j H (y(k)_ $97504) j=l ' J j=i+l J and N3 : fiw (4+1) )0...- fi (y (k)_ rEk’W J=1 j=i+l ' (A) m (k) 11% —’\j)a” H (bi—$1 la” J': —1 j=i+1m 1—1 j=1j=i+l( ' (k) m (k) 19H -/\j)a" II (IV—$.- )0"- i=1 It follows from the proof of Theorem 4.4.2 that 303 > 0 such that €(k+1) < g(t) N3 _ D, k ' 1 k 1 m k S C35E)ZeE-+)+26E) '=1 j=i+1 and 304 > 0 such that ask-+1) S 04619:) H (k+1) m (k) 251' + Z 52' ] j=1 j=i+1 Then, by applying the same steps as in Theorem 4.4.2, the assertion of the theorem follows. CI 4.5 Numerical results. Numencal results presented in this section illustrate the monotonic convergence of our new algorithms. In Table 4.1, iterations in (4.3) and (4.4) are applied on the polynomial P(z) = z(z2 — 1)(z2 — 4)(z2 - 9). 
59 k xgk) gilt) zék) ygk) xgk) yék) o 45 45 45 45 45 -05 1 429052734 4.61279296 438720703 4.59228515 4.40771484 —0.58544921 2 415548526 473862000 425311124 4.70800722 4.29167636 -0.69429426 3 406939121 486149374 412835706 483431295 416506581 -0.81893982 4 -3.02104604 495343246 404253799 4.93972918 406027549 -0.93056202 5 -3.00276308 499364358 400607224 499101009 400924659 -0.98903675 6 -3.00006028 499986053 400014563 499978274 4.00023501 499971974 7 400000003 499999992 -2.00000008 499999987 400000015 -0.99999982 8 400000000 400000000 400000000 400000000 400000000 400000000 F396) ygk) xgk) yék) zgk) yék) 3(7k) y(7k) [45 0.5 0.5 1.5 1.5 2.5 2.5 3.5 [441455078 0.41455078 0.58544921 1.40771484 1.59228515 2.38720703 2.61279296 329052734 ]-0.30569628 0.30569628 0.69429426 1.29167636 1.70800722 2.25311124 2.73862000 3.15548526 -0.18113671 0.18113671 0.81893982 116506581 183431295 2.12835706 286149374 3.06939121 996969138 0.06969138 093056202 1.06027549 1.93972918 294253799 295343246 3.02104604 -0.01111398 001111398 098903675 190924659 199101009 290607224 299364358 3.00276308 4.00028864 0.00028864 099971974 190023501 199978274 290014563 299986053 390006028 990000018 090000018 099999981 190000015 199999986 290000008 299999992 390000003 0.00000000 090000000 190000000 190000000 290000000 290000000 390000000 390000000 Table 4.1: Numerical solutions by the new methods on P(z) = .z(z2 —1)(2:2 —4)(z2 — Chapter 5 Solving Symmetric Tridiagonal Eigenvalue Problems 5.1 Introduction The homotopy continuation algorithms for eigenvalue problems were mainly devel- oped by T. Y. Li and his former students (see [15], [36] and [35]). Finding the eigenval- ues of an n x n symmetric tridiagonal matrix A is equivalent to solving det [A -— AI] = 0 which is a polynomial of a single variable A with degree 77. having only real zeros. In this chapter, we present an algorithm for the eigenvalue problem of symmet- ric tridiagonal matrices. Our algorithm is parallel in nature. We first propose the following new concurrent iterations with cubically convergent rate: ' k $041) _ (k) __ ”1PM )) k _ 0 1 2 i — i I (k) (k) m n- 1 — 231°”) P (xi )“ F(xi )ijljflm t J (k) (k'l’l) _ (k) niP(yi ) _ yi _' yi _P' (k) P (k) m n' 2 k—011121°”1 (yi )_ (yi )zj=1,j¢i yin—$77." t J where P(/\) = 39:1(A — Aj)"1', 2;":1 nj = n and 2' = 1, 2, . . . , m. The monotonic con- vergence of these concurrent iterations for real multiple zeros of P(/\) will be given in Section 5.2 with non-overshoot properties for clusters of real zeros of P(/\). Moreover, these iterations without second derivative evaluations are cubically convergent. To 60 61 solve the eigenvalue problems, we first employ the Sturm sequence to detect the mul- tiplicities of the eigenvalues. Then, our new concurrent iterative methods are used to extract the eigenvalues iteratively by using eigenvalues of two smaller matrices as the initial approximations. The numerical results seem remarkable. 5.2 A new method with cubic convergence rate and monotonic convergence Consider a monic polynomial of degree n 2 3, P(/\) = flu — a). (5.1) a 1:1 with simple real zeros A1° Ai, i=1,...,m. (5.10) (b) The sequence {y§k)}g°=o generated by (5.8) converges to A, monotonically. That is, a. A,- (io yik) < y§k_l) < < y§1)< yfo), i: 1, . ..,m. (5.11) 63 Proof Let a?" = A,- — xfi") and 55’“)- — y§k’— A,- for i = 1, . . . , m. We have, by (5.7), E(k+l) = 5(8) ”3' ‘ ' PM"); 11- p(m -Zj=—1,j¢ixl_735_y1lk5 n'. : Egk)+ z m ‘ . n1 _ ”L 71.1 23:111¢‘ (ISM—AJ‘ {AH-3,6“) + I‘M—Xi (k) (k) _ (hi - 131 )n, = 5'. k . . 
(1‘s ) _ Ag) 3:14;,5,’ (TyL—Ikn-Aj — ——Jx(k)’:yl£7) + le' I t J _ (k) ”i — (FT—1 ‘1“ + ”(Z —"—— (5.12) where 59:) = (mm _ A.) if "2' n:- z t 3 $(k)_ $00 (k) 1': 1.J'¢i )‘J— _yj fori=1,2,...,m By (5.9), we know \IIEO) > 0. Thus, 4(0) 0 < 55-1) = a?” - WL— < 55"). i + 774' That is, $50) < SEE-1) < A,. Similarly, by (5.8), we have 6(k+l) = 6m _ "1' 1 3 P,(y(lc)) n_ —ytT j: -1554.- W J = (54") _ n4 1 311.1464 (WV—y, "SA. — MT... ) + t J J = 50c) _ (yfk)— 30774“ k . . (ill ) _Ai) £1,135 (will: " 53%???) + n,- 3 1 J = 6"“) - —"— (5.13) 64 where (poem (you Mi "j ' y(—k—>n —-A,- (k)_ 513(k) fori= 1,2,...,m By (5.9), we know (DEG) > 0. Thus, ¢@ 0 < 651) = 59” . —(0)—'—— < 75$“). Q1: + ”3’ That is, /\.~ < y“) < ylo) Therefore, by mathematical induction, (5.10) and (5.11) can easily be obtained from (5.12) and (5.13). [:1 Theorem 5.2.2 The convergence rates of the modified iterations in ( 5. 7) and ( 5. 8) are both cubic for an ni-fold zeros A,- of P()\). Proof It follows from the proof of the above theorem that 111‘.“ €(k+1) _ 6(k) . 3 i ‘1’?) + n,- where \Ilgk) : (35k) _ A1.) in: nj ”.7 ,- 14¢. aim—IV m(.“ 113(k) m (k) k ()‘j — yj ) = (mp—Ac?) Z ‘ ”J k lc 1: 1:13:44 ($5 ) - MW. (- ) -y§- )) Let em = max,-=1 m(legk)|, |6§’°)|). Then, ..... 59‘“) ~ (e(k))3. Similarly, we have 6(k+1) no (em); 65 Remark 5.2.1 The monotonic convergence and the cubic convergence rates of the iterations in (5.3) and (5.4) can easily be obtained from the above two theorems by taking all n,- = 1. In the next theorem we show that the modified iterations in (5.7) and (5.8) have non-overshoot properties for polynomials with clusters of zeros. Namely, they never skip over n.- zeros of a polynomial. The non-overshoot properties are important when they are applied to find the eigenvalues of a symmetric matrix. Theorem 5.2.3 IfP(/\) = ?:1(/\-/\,-) has a cluster of zeros (All), AEZ), . . . , A§""’};’;, with All) 3 A?) S S A?" < A331 and the initial values {x,};';, and {y,- :21 satisfy 371 S All)5”°_<.)\(1m)591<$2SAgUS'“ S A£"”’Sy2<~- (Aln‘) _ $1.) + Hi I = (Agni) _ 531') (1 _ (”31' ) > 0' ‘1“ + "i ”2' 311345 (fit/(7117 - £15,) + film = (y.- — Al”) (1- ___"i ) (PEI) + n.- Bni(yi) " All) > (312' - A$1)) _ >0. 66 and it is easy to check that: ———6A"‘(xi) > 0 and ———BB"‘(y‘) < 0. an.- 0774i Therefore, (5.15) and (5.16) follow. [I] 5.3 Evaluation of the logarithmic derivative P’ / P of the determinant Let T be a symmetric tridiagonal matrix of the form {01 761 \ 51 a2 52 0 T=[fii_1,ai,fli]= -. -. . (5.17) 0 311—2 an—l [871—1 \ 1811—1 an ) We may assume, without loss of generality, that T is unreduced; that is, 63- 31$ 0, j = 1, . . . ,n — 1. For an unreduced T, the characteristic polynomial P(A) E det(T - AI) (5.18) has only real and simple zeros ([56], p300). In order to use our iterations developed in previous section for finding zeros of P(A), or the eigenvalues of T, it is necessary to evaluate P and P’ efficiently with satisfactory accuracy in the first place. It is well known that the characteristic polynomial P(A) E det(T — AI) of T =- [641,096,] and its derivative with respect to A can be evaluated by three-term re- currences ([56], p423): = 1, = a — A P0 P1 1 (519) P4 = (Oi - ”Pi—1 — 5.2.1/914, i: 2,3, - - '1” = 0, ’ = —1 p6 ”1 (5.20) P1: (ai— A)p:—1 _ Pi—l — fi3—1p2—2) Z: 213) ' - ° a TI. 67 and Po) = p... MA) = p:.. However, these recurrences may suffer from a severe underflow-overflow problem and require constant testing and scaling. 
The following modified recurrence equations avoid the above problem subtly by computing the quotient q(A) = P’ (A) / P (A) directly [36]. After all, only q(A) is really needed in the formulae (5.3) and (5.4), or in (5.7) and (5.8). Let Pi Pl 6i _: J 771' = _— pi—l Pi 51 = 01 - A, 5.2—1 (5.21) 5 ai-A—EZ‘I, i=2,3,...,n. 1 m=m m=5 .2 (5.22) 1 161—1 . 773' = E— (ai — ”771—1 +1 _ gi-l 77i-2 7 Z = 2131' ' ' a n P’(/\) _ P(A) ‘ ""' To prevent the algorithm from breaking down when 5,- = 0 for some 1 g i g ii, an and extra check is provided: 0 If {1 = 0 (i.e., a1 = A), set £1 = 252; _ £42453 . —&4 ’ where e is the machine precision. A determinant evaluation subroutine DETEVL O If€i=0,i>1,set{i (see Figure 5.1) is formulated according to the recurrences (5.21) and (5.22). When 6,, i = 1, . . . ,n are known, the Sturm sequence is available ([40], p47). Thus, as a by-product, DETEVL also evaluates the number of eigenvalues of T which are less than A. This number is denoted by n(A). A backward error analysis in [36] shows that the accuracy of this algorithm at A, can be 55 3 mJaX(|/31| + Iii-.11) + Me. (5.23) 68 5.4 The split-merge process Let A1 0, for all i = 1,2, . . .,n — 1, since in (5.19)—(5.22), fli’s always appear in their square form. The following interlacing property for this rank-one tearing is important to our algorithm. Theorem 5.4.1 Let A1 < A2 < < An and A1 < A2 < < An be eigenvalues ofT and T respectively. Then 313A.sizsA23---sinsan 1 eigenvalues of T, say, /\i+1 < /\i+2 < < Ai-l—r which are relatively close to each other, comparing to the distance from the two starting points, say $69321 < A,“ < 3’91. Some of them may even be numerically indistinguishable. Numerical evidence shows that to reach A,+1 from 2:91 and 31:91, it may take many steps of the new iterations given in (5.3) and (5.4) before showing super-linear convergence. In this situation, the new iterations given in (5.7 ) and (5.8) with n,“ = r may be used to speed up the convergence. 5.6 Stopping criteria and three iterative schemes The following stOpping criterion was suggested by Kahan [30]: |$(k+l) _ 113(k)]2 S ([:E(k)— 12(_k 1)l_ [33 (k+1)_ $0007. (5.26) where 7' is the error tolerance. This criterion is based on the following observations. Let x oo, qk is normally decreasing. Thus [A _ x(k+1)| = g(xucww) _ x(k+1+i)) S i |$(k+2+i) _ $(k+1+i)l 1' i=0 9+1) _ $09] < $)/2 72 if TEST(2:,E"), yEE’)— —' FALSE’ then if cE’c) > A- then 43*“): 4E“. yE"+1’= CE” - p,(cs*>>/P>—12:=i..4 W i 1 else y(k+1)_ _ 91m”: (_k+1)_ _c(k)_ pkg"))/P(cEk))-I:E=1,j¢i W elseS=S—{i} ’ ’ enddo for i ¢ 5 d0 xEk+1)_ xsk), yEk+1’— yEk) enddo k=k+1 enddo forie {1,...,n} do A? = xE"), ,\+ — 11E") enddo In the second modification we avoid using Sturm sequences at any iterations. For the first iteration, once we have detected which semi-interval A,- belongs to, we keep applying (5.3) or (5. 4) for all the iterations until either TEST(:rE-k+1), 2E“) 2’ TRUE’ or TEST(yEk+1’,yE’°))— -’ TRUE’. In the first case we set A; xEk+1) ,A-+ = max{:z:E’°+l)(1 + esign(xEk+1))), $Ek+l) + T}, in the second case Af- — =yEk+1),A;‘ = E” )(1— esign(yEk+l))), yEk+1)— T}. It is easy to show that A,- E [AE‘,A,'-E] if 7 max{y.- and e are small enough. With this modification we have reduced the cost of each iteration to just the computation of one of the two formulae in (5.3) and (5.4). Iterative Scheme 3. 
fori= 1,...,n do E” = (24°) + yE°’)/2 if CEO) > A then L(i)— - 1 ,yE0)= ch) else L(z )— — 0,3:E0) = CEO) enddo ={i:L(i)=1}, Sg={i:L(i)=0}, 1921 while 31 U 52 91$ 0 do 73 for i E 51 do (141) = (k) _ 1 (H1) = (k) y: y: P’EytEk))/PEytEk))—Z;=1,j¢i y :2 , 33; 1133 i J' if TEST(yE"+1), yEk)) =' TRUE’ then .91 = 31 — {i} enddo for i E 32 do (k+1) = (k) _ 1 (k+1) = (k) :13, 33' P’($.Ek)l/P($Sk))-Z:=l,j¢i 33577—3; k ’ y’ y: i 1' if TEST(:1:E"+1), 4:5“) =' TRUE’ then $2 = 52 — {2'} enddo fori¢SIUS2 do yEk-l-l) = g(t), $(k+1) = m(k) enddo k = k + 1 enddo for i E {1,...,n} do if L(i) = 1 then A: = 4E“. A: : max{yE’°)(1 — asign