£9507 LIBRARY Michigan State University This is to certify that the dissertation entitled Local Regularizaticn Methods for Nonlinear Volterra Integral Equations of Hammerstein Type presented by Xiaoyue Luo has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mathematics @atoca KM Major Professor’s Signature 7/lblo‘i Date MSU is an afiinnalive-action, equal-opportunity employer .. _.-.-.—-------o-o-.-n—-----o-u--v-.---o-n-u-.-.-o-o--._.— -.---a--n-.-—¢- PLACE IN RETURN BOX to remove this checkout from your record. TO AVOID FINES return on or before date due. MAY BE RECALLED with earlier due date if requested. DAIEDUE DAIEDUE DAIEDUE 6/07 p:/C|RC/DaleDue.indd-p.1 Local Regularizaticn Methods for Nonlinear Volterra Integral Equations of Hammerstein Type By X z'aoyue Luo A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Department of Mathematics 2007 ABSTRACT Local Regularization Methods for Nonlinear Volterra Integral Equations of Hammerstein Type By Xiaoyue Luo We develop a local regularization theory for the nonlinear Volterra problem of Hammerstein type. Our method retains the causal structure of the original Volterra problem and allows for fast sequential numerical solution. The fundamental differ- ence between our method and the previous existing local regularization method for Hammerstein equations (Lamm and Dai, 2005) is that for our method we do not need to solve a nonlinear equation at every step of a numerical implementation. We only have to solve a nonlinear equation for the first step. We prove the convergence of the regularized solutions to the true solution as noise level in the data shrinks to zero with a certain convergence rate. To my parents iii ACKNOWLEDGMENTS I would like to thank my advisor, Professor Patricia Lamm, for introducing me to this wonderful subject: inverse problems. It was after I took your special topic course that I decided to pursue a doctoral degree. Your selfless guidance, enthusiasm about research and humor during hard times make this long journey enjoyable. I can not thank you more for supporting and mentoring me all these years. My research, in general, and this thesis, in particular, would not have been possible without your tireless guidance, endless supply of patience, and tremendous insight. I owe every single piece of achievements to you. In addition, I would like to thank Professors Chichia Chiu, Tien-Yien Li, Keith Promislow, and Zhengfang Zhou for serving on my committee. I am grateful to have Professor Li, Professor Promislow and Professor Zhou as my teachers during my graduate study. Their immense knowledge and dedication to research and education have impressed me deeply. I would also like to thank Professor Zhou for his support and care of me when he served as graduate director and later during my last year at MSU helping me with my job search. In addition, I would like to thank Professors Baisheng Yan and Michael Frazier for spending time on my comprehensive exam. Special thanks go to Professor Frazier, iv thank you for your support for my job search after you already left MSU. I would also like to thank my good colleague and friend Cara for sharing her ideas and insights with me during my Ph.D. study. Special thank you goes to my friends Ying and Gang for their caring and support- ing all these years. I share every piece of good memory at MSU with them who are like the brother and sister I never had. Finally, I would like to express my gratitude and appreciation to my family: my husband, Feiyu, for his love, support and confidence in me. I share this work with him. My parents who give me unconditional love and support, who make my childhood full of happy memories and who always trust and believe in me. I would never be where I am now without you. I dedicate this work to you: Mom and Dad! TABLE OF CONTENTS LIST OF FIGURES 1 Introduction 2 Hammerstein problem with u-smoothing convolution kernel 2.1 Linear Problems .............................. 2.2 Existing results for the local regularization of nonlinear Hammerstein problems .................................. 2.3 New local regularization theory for Hammerstein equations ...... 3 Hammerstein Problem with nonconvolution kernel 3.1 The regularized Hammerstein equation ................. 3.2 Convergence and well-posedness results ................. 4 Discretization and Numerical Implementation 4.1 u-Smoothing Convolution Kernel 4.2 1-Smoothing Nonconvolution Kernel .................. 4.3 Numerical Results ............................. BIBLIOGRAPHY vi vii 11 18 21 39 39 44 53 53 57 58 80 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 4.10 4.11 LIST OF FIGURES Example 1 (a 1—srnoothing kernel): solution with regularization, 6 = 10%, N = 1000, R = 45. ......................... Example 1, continued: solution with regularization, 6 = 5%, N = 1000, R = 35. .................................. Example 1, continued: solution with regularization, 6 = 1%, N = 1000, R = 20. .................................. Example 2 (a 3—smoothing kernel): solution with regularization, 6 = 5%, N = 60, R = 20. .......................... Example 2, continued, solution with regularization, 6 = 1%, N = 60, R = 11. .................................. Example 2, continued, solution with regularization, 6 = 0.1%, N = 60, R = 7. .................................. Example 2, continued, solution with regularization, 6 = 0%, N = 60 R = 3. .................................. Example 3, (a 3-smoothing kernel): solution with regularization, 6 = 1%, N = 100, R = 16. .......................... Example 3, continued, solution with regularization, 6 = 0.3%, N = 100, R = 12. ............................... Example 3, continued, solution with regularization, 6 = 0%, N = 100, R = 4. .................................. Example 4 (a I-smoothing kernel): solution with regularization, 6 = 0.05%, N = 200, R =11. ........................ vii 61 4.12 4.13 4.14 4.15 4.16 4.17 4.18 4.19 4.20 4.21 4.22 4.23 4.24 4.25 Example 4, continued, solution with regularization, 6 = 0.005%, N = 200, R = 7. ................................ Example 4, continued, solution with regularization, 6 = 0%, N = 200, R = 2. .................................. Example 5 (2—smoothing kernel): solution with regularization, 6 = 0.005%, N = 200, R = 27. ........................ Example 5, continued: solution with regularization, 6 = 0.0005%, N = 200, R = 21. ............................... Example 5, continued, solution with regularization, 6 = 0%, N = 200, R = 3. .................................. Example 6 (3—smoothing kernel): solution with regularization, 6 = 0.001%, N = 200, R = 42. ........................ Example 6 (3—smoothing kernel): solution with regularization, 6 = 0.0001%, N = 200, R = 35. ....................... Example 6 (3-smoothing kernel): solution with regularization, 6 = 0%, N = 200, R = 4. ............................. Example 7 (1-smoothing nonconvolution kernel): solution with regu- larization, 6 = 5%, N = 100, R = 7. .................. Example 7, continued: solution with regularization, 6 = 1%, N = 100, R = 5. .................................. Example 7, continued, solution with regularization, 6 = 0%, N = 100 R = 2. .................................. Example 8 (1-smoothing nonconvolution kernel): solution with regu- larization, 6 = 5%, N = 100, R = 65. .................. Example 8, continued, solution with regularization, 6 = 1%, N = 100, R = 53. .................................. Example 8 continued: solution with regularization, 6 = 0.1%, N = 100, R = 25. .................................. viii 70 72 76 4.26 Example 8, continued, solution with regularization, 6 = 0%, N = 100, R = 2. .................................. ix CHAPTER 1 Introduction Volterra integral equations arise in a great many applications. For example, in pop- ulation dynamics [23] [24], epidemic diffusion, reaction-diffusion in small cells [25], in nuclear reactor kinetics [2] and in general in evolutionary phenomena incorporating memory. Of special interest are Volterra integral equations of Hammerstein type. In many applications, the problem can be written in terms of a Volterra integral equation of Hammerstein type, as for example in chemical absorption kinetics, in epidemic models, and also in situations when Laplace transform techniques are used to reduce systems of ordinary or partial differential equations to Volterra integral equations. In this paper, we will study the solution of a nonlinear Volterra problem of Ham- merstein type of the following form F u = f, (1.1) where F is the nonlinear Volterra operator given by f, Fu(t) = [0 k(t,s)g(u(s)) d5 (1.2) for suitable kernel k, nonlinear function g and f in the Range of F which will be clarified later. Before we get into details of this nonlinear problem, we will first give some brief introduction for the linear counterpart to this problem. Let us consider a linear first-kind Volterra integral equation for (1.1), where F is defined by t Fu(t)=/0 k(t,s)u(s)ds (1.3) with the kernel I: E L2 ((0, T) x (0, T)), where f is in the range of F and our goal is to find ii 6 L2(0, T) or C [0,T ] which solves equation (1.1). However, such problems are generally ill-posed due to the fact that the solutions 116 which are obtained by solving (1.3) using imprecise measurement data f 5 do not depend continuously on data, i.e., very small errors in the measurement data f 6 6 could lead to large deviations in the solution u as compared to the true solution 21. What we usually see for these kinds of ill-posed problems are highly oscillatory solutions using measurement data. This is very troublesome because in practice we never have exact data in hand. Since the available data always contain uncertainty, regularization methods have to be employed to stabilize the problem. A classic and well-known example is the Inverse Heat Conduction Problem (IHCP). The problem can be stated as follows: applying heat on one end of a semi- infinite bar which we call location :1: = O, we measure the temperature f (t) as a function of time t at some location away from the heat source, which for simplicity we call location a: = 1. The problem is to recover the temperature u(t) at the heat source a: = 0, and this problem can be formulated as solving equations (1.1) and ( 1.3) for u with the kernel given by k(t, s) = n(t — s), where 1 —1 4t K“) = Qfit3/2e / ' This problem is a severely ill-posed linear Volterra problem. One well—known regularization theory is that of Tikhonov regularization. The idea of Tikhonov regularization is that, instead of solving for u satisfying F u = f 5, we solve a constrained minimization problem for rig, mjn ”Pu — f6||2 + aIILUIIQ, (1.4) where f 6 is noisy data, a is the regularization parameter and L is a suitable (usually identity or differential) operator. The Tikhonov theory gives conditions under which there is choice of a such that as the noise level 6 —) 0, a(6) —-+ 0, and the corresponding Tikhonov solution u‘; (6) to (1.4) converges to the true solution 17. However, there is a drawback associated with using Tikhonov regularization in solving Volterra problems. Volterra problems have a nice physical structure called causal structure. That means the solution u at any given time t does not affect the data f on the interval [0, t). Therefore in finding u(t), it makes sense to use future data I on the interval [t,T] and it does not make much sense to use all data f on the whole interval [0, T]. Tikhonov regularization however converts a causal problem to a non-causal problem, and this leads to nontrivial increases in costs of implementation. In the mid-1990’s, P. K. Lamm established the local regularization theory which is a generalization of a regularization scheme for the discretized IHCP developed by J. V. Beck in the late 1960’s. While Beck’s method was an approach developed to handle a finite dimensional problem, the local regularization theory can be placed in both finite and infinite dimensional settings. The theory can be applied to a wide class of linear first-kind Volterra problems [4] [5] [6]. Local regularization methods preserve the causal structure of the Volterra problems and therefore they have com- putational advantages over the classical regularization methods. For example, while Tikhonov regularization requires 0( N 3) flops for a discretized problem of dimension N (or 0(N2) if special structure is accounted for), local regularization requires 0(N2) flops (or 0(NlogN) flops in the case of special structure). See Section 2.1 for some background of local regularization methods for linear first kind Volterra problems. We now turn to some background on the regularization of nonlinear problems. Consider solving for u that satisfies equation (1.1), where F : D(F) Q X —) Y is a nonlinear operator between Hilbert spaces X and Y. We assume that (1) F is continuous and (2) F is weakly (sequentially) closed, i.e. for any sequence {un} C D(F) such that an _1 u in Xand Fun —* f in Y, then u 6 D(F) and Fu = f. Also assume equation (1.1) has a solution. Then there exists a u‘-minimum-norm solution u+ for the data f E Y, i.e., Fu+ =f and |]u+ -u*|] =min{||u—u*]]: Fu=f}. (This is by the weak closedness of F, and follows from the attainability assumption that equation (1.1) has an exact solution [17].) If the nonlinear operator F is compact, one can give a sufficient condition for ill-posedness of ( 1.1) which is similar to its compact linear counterpart. Proposition 1.0.1. [17] Let F be a nonlinear compact and weakly closed operator between two Hilbert spaces X and Y, and let D(F) be weakly closed. Moreover, assume that F u+ = y and that there exists an e such that Fu = 3'] has a unique solution for all 37 E R(F) (1 35(1)). If there exists a sequence {an} g D(F) such that + + Un—‘u but un—Hu, then F—l (defined on R(F) fl B€(y)) is not continuous in y. Tikhonov regularization As in the linear case, we can replace problem (1.1) by minimization problem: llFu — f5”? + allu - wit? a min, a e D(F). (1.5) where a > 0, f 6 E Y is an approximation of the exact right-hand side f of (1.1) and u‘ E X, I] f 6 — fl] 3 6. As in the linear case, any solution to (1.5) will be denoted by 6 ua. Tikhonov regularization gives the following convergence rate analysis Theorem 1.0.1. [17] Let D(F) be convex, F continuous and weakly closed. Let f‘5 E Y with I] f — f6” 3 6 and let u+ be an u*-minimum-norm solution. Moreover, let the following conditions hold: (i) F is Fréchct-diflerentiable, (ii) there exists 7 2 0 such that ||F’u+ — F’ull g yllu+ —- all for all u. E D(F) in a sufiiciently large ball around u+, (iii) there exists w E Y satisfying u+ — u“ = (F'u+)"w and (iv) o/lel < I. Then for the choice ofa ~ 6, we obtain Hug — u+]l = 0(\/6). An example of the application of Tikhonov regularization to a particular 1-smoothing convolution nonlinear Volterra Hammerstein problem is given in [17]. The problem is to consider the Hammerstein integral equation F : Him, 1] —) L2[0,1] t Fu(s) :=/() (t — s)u3(s) ds. Since F is continuous, weakly closed, compact and injective [17], Proposition 1.0.1 implies that the problem of solving F u = f is ill-posed. Consider the application of Tikhonov regularization method to this problem. In order to satisfy assumption (iii) about the source condition, u+ and u" have to satisfy quite strict smoothness conditions and particular boundary conditions. For example, u+ and u‘ E H'4 , u§(0) = u3(0), u'sl"(1) = 213(1), u+(1) - u§S(1) = u"(1) —u§3(1) and ”338(1): “333(1) [17]- From the above example, we see that in order to use Tikhonov regularization the- ory on nonlinear Volterra problems of Hammerstein type, strict assumptions on the smoothness of the source conditions and particular boundary conditions are needed in order to achieve the desired convergence rate. Also, as for the linear Volterra prob- lems, another disadvantage of Tikhonov regularization methods are that they destroy the causal nature of the Volterra problems and lead to nontrivial computational costs. Lavrentiev’s regularization We now turn to the second common form of regularization for inverse Volterra prob- lems, i.e. Lavrentiev Regularization. The idea of Lavrentiev Regularization is to solve an equation of the form au+Fu = f. (1.6) Definition 1.0.1. Let f : R” —i R”. We say that f is monotonic if < x-y,f(:v) -f(y) >2 0, Viny- Consider the problem of solving for u that satisfies t t . f0 k(t,s)u(s)ds+/0 F(t,s,u(s))ds =f(t), t6 [0,T]. (1.7) It is proved in [18] that one can adapt the Lavrentiev method to identify u by solving the following equation t t t - au(t)+/O k(t,s)u(s)ds+/O F(t,s,u(s))ds=l/0 e_lli(t_3)f0(s)ds. (1.8) (1 Under suitable assumptions given by the next theorem, this equation is solvable on the interval [0, T] and the solution ugly to (1.8) approaches the true solution a as noise 6 —> 0 in an appropriate sense. See [22] for an introduction of the Lavrentiev method. Theorem 1.0.2. [22] Assume: 1. The vectors y, u belong to R”, k(t,s) : A —> R", F(t,s,u) : A x IR” —-> R” where A:={(t,s):OSsStST}. 2. F is continuous and the partial derivative Ft(t,s, u) exists for a.e. (t, s) E A and for all u E R". 3. For each u, v E R" and a.e. (t,s) E A we have H150, 3, v) - 60.8.11)“ S NU, 8)||v - ull, Ilfi‘tlt, Siu) - PAW, 3, v)” S L(t,8)llv - ull and t t sup / L2(t, 3) ds g L, sup / N2(t, 5) ds = N. tE[O,T] 0 te[0,T] 0 4. For every t E [0,T], the function u —> F(t, t, u) : R” ——> IR" is monotonic. 5. The kernel k(t,s) is continuous for 0 S s S t S T and k(t,t) = I fort E [0, T]. 6. The derivative D1k(t,s) exists a. e. and supt E [0, T] jot Ille(t,s)||2 (13 g C. 7. The true solution ii is piecewise W1’2(0,T). If “[6 — fl] 3 6, where 6 > 0 is a known tolerance, then for the choice ofa = 0(6) such that 6—->0+0‘ 6 equation (1.8) has a unique solution ua and tr?! —> a in L2 ((0, T), R”). Notice that by assumption 5, this theorem can only be applied to 1-smoothing (both convolution and nonconvolution type) Volterra problems of Hammerstein type. In this case, the nonlinear function g in (1.2) has to be monotonic. The advantage of this method over Tikhonov regularization is that it still preservas the causal structure of the Volterra problem. However because the added penalty term on does not take the given operator F into consideration, the approximation is not as good as local regularization theory at least for the nonlinear Hammerstein problem from our numerical results. One reason this is the case is that convergence of ug, to u E C [0, T] in the uniform norm is impossible if u(0) 75 0, unless information about u(0) (which is rarely known accurately) is built into the approximate equation (1.8). This fact tends to lead to bad approximations near t = 0. If (1.8) is solved sequentially (the usual case), this can lead to large errors on the entire interval. Please see Example 1 in Chapter 4 of the thesis and we refer to Figure 2 in [18] for comparison. Since the results in [18] do not give a convergence rate in the case of noisy data, we briefly mention the work in [33] where the Lavrentiev method (1.6) is applied to equation (1.1), in the case of the operator F satisfying assumptions similar to those in Theorem 1.0.1 (for Tikhonov regularization) along with additional monotonicity and hemicontinuity assumptions on F. In this case the rate Hug, — u+|| = 0(61/2) is achieved, so that the rate for Lavrentiev regularization can be seen to be the same as that for Tikhonov regularization under similar smoothness hypotheses on u+. There are other regularization methods for nonlinear problems in the literature, for example, Landweber methods [26], where one may seek a solution vi such that ui+1=ui+(F'qu:)*(f(S — Fug) (1.9) for k = 0, 1, ..., where 11.3 = uO is the initial guess. A modified Landweber- method [29] based on the idea that for the numerical realization of (1.9), the use of a rough approximation Fm to F within the first iteration steps has no influence on the quality of the iterates, as long as the iteration is continued with a sufficiently good approximation to F. This leads to the iteration formula ai+1=ui+(F‘y'.(k)'u.i)"‘(f(S — Fr(k)ul5c)' Other regularization methods include Levenberg-Marquardt methods [28] where the author studied a Levenberg—Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy; conjugate gradient methods [27], where the basic idea is to compute an approximate solution for the linearized problem in each Newton step with the conjugate gradient method as an inner iteration; iteratively regularized Gauss-Newton methods [31] [32]; and other Newton-like methods [30], etc. We refer [21] for extensive discussions of such methods. In recent years, local regularization methods have been extended to some nonlinear Volterra problems, for example, the autoconvolution problem [12]. In 2005, Lamm and Dai studied the nonlinear Volterra Hammerstein problems and their idea is that if one treats g(u(t)) as the solution of (1.1) where F is given by (1.2), then solving for g is nothing more than solving a linear Volterra problem. However, this local regularization method requires one to solve a nonlinear equation at each step of numerical iteration which can be difficult in practice. See Section 2.2 for details. Driven by applications and the need to have a regularization scheme that is easy to implement, we develop a local regularization method that not only preserves the causal structure of the Volterra problems but also gives accurate approximations and is easy to implement in practice. We organize the paper in the following way: in Chapter 2, we first give some background on the local regularization methods for linear problems and nonlinear Hammerstein problems. Then we give our main results on the new local regular- ization theory for nonlinear Volterra integral equations of Hammerstein type with V-smoothing convolution kernels. In Chapter 3, we extend our results to nonlinear Volterra integral equations of Hammerstein type with 1-smoothing nonconvolution kernels. In Chapter 4, we present some numerical results using our regularization theory. 10 CHAPTER 2 Hammerstein problem with V-smoothing convolution kernel We first motivate our work on the local regularization method for the Hammerstein problem by giving some background on the existing theory for the linear Volterra problem. 2. 1 Linear Problems We consider the problem of finding a E C [0, T] solving F u = f (2.1) where F is the Volterra operator of convolution type given by t Fu(t) =/0 k(t — s)u(s) ds, t E [0, T], (2.2) and f is in range of F. A discussion of the existence and uniqueness of solutions of (2.1) may be found in [13] in the linear case. We call k the kernel of the operator F. Throughout we will 11 assume that F satisfies a u-smoothing condition for some V = 1,2,. . ., that is the kernel k satisfies 1: e CV[0, T], 1437(0) = 0, j = 0,1, . . .,u — 2, W ‘1)(0) 7e 0, (2.3) where without loss of generality, we will take k(V — 1)(0) = 1. It is well-known that the degree of ill-posedness of problem (2.1) is characterized by the degree of smoothness of the kernel k and the behavior of k at 0, the larger the value of u, the worse the ill-posedness is. We will assume the desired a of (2.1) satisfies the H'o'lder condition [17(1) - i2(S)! S [Vlt - 8|”. (2-4) for 0 < u S 1, N := N(t‘i) > 0, and t, s in the interval of interest. To motivate the sequential local regularization method for linear Volterra prob- lems, we let R > 0 be a small fixed number and r 6 (0, R] a small parameter. Assume that equation (2.1) holds on an extended interval [0, T + R]. If data is not available past the original interval, then this can always be accomplished by decreasing the size of T slightly. Then it solves t+p f0 k(t +p— s)u(s) ds = f(t +p), t E [0,T], p E [0,T]. Split the integral at t, then change the variable of integration, we have t p /O k(t + p — s)u(s) ds +/0 k(p — s)u(t + 3) ds = f(t + p), t E [0,T], p E [0, 1']. Now we integrate both sides of the equation with respect to a suitable Borel measure 12 77r(P) (which will be clarified later) on [0,r], so we have f0" for k(t + p — s) dn.,~(p)u(s) ds + for [Op k(p — s)u(t + s) as darn) =/0 f(t+p)dnr(/)), tEMT]- (2-5) For simplicity, we define the following notations which we will use throughout this paper: ||'||==||°||Loo(0,T), Il-Ilooi=||-||Loo(0,T+R)a ||-l|r==||‘”L°°(0,r) and M1 := sup |q($)|- x E I Note that 17. still satisfies (2.5) exactly. However, in practice, we only have in hand imprecise measurement data or perturbed data f 6 E C [0, T + R], instead of the true data f E C [0,T + R] , where f‘5 satisfies ||f5 — flloo gt for some 8> 0. (2.6) Since solving for u from equation (2.5) when f 6 is in place of f is an ill—posed problem due to lack of continuous dependence on data, some regularization method needs to be employed. The idea is if we momentarily hold u constant on a small interval [t,t + r], then we can replace u(t + s) by u(t) in the second term of equation (2.5). And r serves as the regularization parameter. Then we obtain the regularization equation t ~- a(r)u(t) +/0 kr(t — s)u(s)ds = ff.)(t), t E [0,T], (2.7) 13 where W) =/0 k(t+p)dnr(p), (2-8) flu) = [0 f5 (1w). (2.9) r p a(r)=/é ‘/0 k(p—s)dsdnr(p). (2.10) Notice that equation (2.7) is a well-posed second kind integral equation in u provided that a(r) 74 0. Sufficient conditions for stability and convergence of solutions u to it include the hypotheses on the measures 717‘ given below: The signed Borel measures TIr(P) on [0, r] satisfy the following conditions: 0 (H1) For i = 0, 1, ..., u, there is some a 6 IR and c,- = 0,-(11) 6 Roy > 0 independent of r, such that T . . [0 pt dnr(p) = rt + “(c.- + a(r)). as r —» o. 0 (H2) The parameters ci, 2' = 0, 1, ..., V, satisfy the condition that all roots of the P01yn0mial [Jr/(A) defined by C __ E (ix—1)! 1! 0! have negative real part. 14 0 (H3) There exists a C 2 0 independent of r such that for MP) d'flr(.0)] S éllhllri‘a. for all h E C [0, r] and all r > 0 sufficiently small. It is worth noting that there are an infinite number of continuous and discrete families {n7},. > 0 of measures which are easily constructed and which satisfy the above assumptions. In what follows we provide two classes of measures satisfying (H 1) — (H3) and we refer to [10] for the proof. The first measure is a continuous measure. Lemma 2.1.1. [10] Let V = 1,2, . .. be arbitrary and let 1,0 6 L1(0, 1) be given such that 1 f0 p”¢(p) dp > 0- Then the ‘density’ 721‘ for r E (0, R], 0 < R _<_ 1, defined by [G gamma: [0 gait-(mp. geolon. where W E L1(0,r) is given by ¢r(p)=¢(p/r), a-e- pE [0,T], satisfies condition (H1) (with cu = fol thMp) dp and o = 1) and condition (H3). Further, for all V = 1, 2, and given arbitrary positive 5, m1, m2, . .. and my, there is a unique polynomial if) of degree V so that the resulting family {qr} satisfies (H1) with cu = E and o = 1, (Hg) with the roots of the polynomial pp in (H2) given by(—m,-), i=1, ..., V and (H3). 15 The second measure is a discrete measure. Lemma 2.1.2. [10] Let V =1, 2, ..., be arbitrary and let fit, T1 6 IR, 1 = 0, 1, ..., L, be fixed so that OSTO 0. (2.12) r = 0 Then the discrete measure 7})- dcfined via L [Grandma = Z @9017"), 9 6 C[0,T], l = 0 satisfies condition (H1) (with cu = 21L: 0 51er and o = 0) and condition (H3). Further, for all V = 1, 2, and given arbitrary positive 5, m1, m2..., and my and for L = V, there is a unique choice of 60, 51 . . . , ,81/ satisfying (2.12) (for each given collection of {T1} satisfying (4.4)) and such that the resulting discrete measure 721‘ satisfies (H1) with cu = E and 0' = 0, (H2) with the roots of the polynomial pu in (H2) given by(-m,~), i=1, 2, ..., V and (H3). Under the conditions on the measure rjr the following lemma shows that a(r) aé 0 for all r > 0 sufficiently small and all V-smoothing k. Therefore the regularization equation (2.7) is always well-posed in these cases so that the solution to (2.7) depends continuously on data f 6. Lemma 2.1.3. [10] Assume 17¢ satisfies (H1) and (H3). Then a(r) = $40 + l/(1+ O(r)), so that a(r) > 0 for all r > 0 sufficiently small. 16 Using the above lemma, it is easy to see that a(r) Z gar" + V > 0 for r > 0 sufficiently small. (2.13) V. Further, under this construction we have from [10] the following theorem. Theorem 2.1.1. [10] Let it denote the solution of (2.1) given “true” data f E C [0, T + R] and assume it satisfies the HBlder condition (2.4) on [0,T + R] with Hélder exponent p 6 (0,1] and R > 0 small. Assume k is V-smoothing and that {gr} is a family of signed Borel measures satisfying hypotheses (H1) — (H3) for all r E (0, R]. Then there is a constant C > 0 ( depending only on the c,- defined in (H1) and independent of r ) such that if IMWa_ 0, so that the choice gives uniform in t E [0,T]. We would like to point out that the above convergence result can be obtained using not only signed Borel measures but also positive Borel measures for V-smoothing 17 Volterra problems with V = 1, 2, 3, 4. There is to date no convergence theory for positive Borel measures with V > 4 and in fact a sufficient condition for convergence is known to fail in these cases. For details, see [3], [4], [6] and [10]. 2.2 Existing results for the local regularization of nonlinear Hammerstein problems While the theory for the local regularization methods of linear Volterra problems is rather complete, the same can not be said for the nonlinear theory. In recent years the local regularization theory has been extended to the nonlinear autoconvolution problem [12] and to the nonlinear Hammerstein problem [11]: t [0 k(t — s)g(s,u(s)) ds = f(t) for t E [0,T], (2.14) where g is a nonlinear function on IR. A discussion of the existence and uniqueness of solutions of (2.14) can be found in [16]-[17]. Based on the idea for the linear problem, we let R > 0 be a small fixed number and assume that equation (2.14) still holds on an extended interval [0,T + R]. We may define the following nonlinear regularized equation t~ ~s a(r)g(t,u(t)) +/0 kr(t — s)g(s,u(s)) ds = f7.(t), t6 [0,T], (2.15) where kr, fig and a(r) are given by (2.8) — (3.4) using a signed measure nr satisfying (1],) — (Hg). In a note in 2005, Lamm and Dai observed that if one lets u(t) = g(t, u(t)), then equation (2.15) is nothing more than equation (2.7) in the new variable u(t), that is l a(r)v(l) +/(; kr(t — s)v(s) ds = fg(t), t E [0, T]. (2.16) 18 By the linear theory, if f 6 E C [0, T + R], then there exists a unique solution vé E C[O, T] of (2.16). But the goal is to find u E C[O, T] which solves (2.15). So the question is how to stably recover u from inverting the function g. For g : [0, T] x IR —* IR continuous with (g1) lim g(t,.x) = +00, lim g(t,x) = —00, t E [0, T], x —> +00 x -—+ —oo (92) (g(t,x) — g(t, y))(x — y) > 0, for all t E [0,T] and x,y E IR with x # y, then there exists a unique ué E C[O, T] such that g(t,u,6.(t)) = vé(t), t E [0, T]. The 6 convergence of ur to it is given by the following theorem: Theorem 2.2.1. [11/ Let n denote the solution of (2.14) given “true” data f E C [0,T+ R] and assume it satisfies (2.4) on [0,T+ R]. Assume k is V-smoothing and that {m} is a family of signed Borel measures satisfying hypotheses (H1) — (Hg) for all r E (0, R]. Assume further that g, gt, g3; : [0,T + R] x IR —» IR are continuous with 9t: 9;; bounded on set [0,T + R] x I, where I is a bounded open interval in IR, such that u(t) E I fort E [0,T + R]. Assume also that g satisfies (gl) — (g2) for t E [0,T+ R] and (93) there exists 51 2: 51(1) > 0 such that (g(t,x)—g(t,y))(x—y) Z Ellx-yl2, for all t E [0,T+ R] and x,y E I. If ”k(t/NI.» < C, for the constant C given in Theorem 2.1.1 above, then if f‘5 E C[0,T + R] satisfies (2.6) then the choice r = 7(5) N 5H 3: V (2.17) 19 gives (uéa) — a(t)| = oat/7%) as 8—» 0, (2.18) fort E [0, T]. Remark 2.2.1. Notice that from the above theorem, we derive the same convergence rate as in the linear case. The assumptions (gl) - g(2) guarantee a unique solution ué E C [0, T] for the regularized equation (2.15). Assumptions on g, 9t: ya; on I make 6 sure that “r converges to the true solution 21. However, notice that this theory given by (2.15) requires inverting the nonlinear function g in order to find the solution ué. In terms of numerical implementation this means the method requires solving a large-scale nonlinear system or numerous nonlinear equations which can be difficult in practice. Therefore our goal is to design a local regularization theory which avoids solving large number of nonlinear equations. That is, we want to derive a regularization equation such that the solution to this equation depends continuously on data and it converges to the true solution it when noise level shrinks to zero. At the same time, we want to be able to solve our regularization equation without solving a nonlinear equation at each step of a numerical iteration. Keeping this goal in mind, we present our local regularization theory in the next section. 20 2.3 New local regularization theory for Hammer- stein equations To motivate the sequential local regularization method for nonlinear Hammerstein problems, we let R > 0 be a small fixed number and assume that t [0 k(t - s)g(u(s)) ds = f(t) a.e. t E [0, T] (2.19) holds on an extended interval [0,T + R]. Assume g E C1(I), where I C IR is a bounded open interval, with g’ bounded on I. The true solution a to (2.19) satisfies (2.4) and 17(6) E I for t E [0,T + R]. Note that these are the same assumptions that required in [11]. We will let T E (0, R] be a small parameter. Then the “true” solution a of (2.19) satisfies t+p f0 k(t — s +p)g(u(s))ds = f(t +p), a.e. tE [0,T], p E [0, r]. Proceeding as in the linear problem, we obtain an approximate equation in u valid for a.e. t E [0, T], such that t [0 as—s>gds+ag> =frg'+ f0 f0 k(P—3)9(fi(s+t))d5d77r(P)—a(7‘)9(u(t-T)) - a(r)g’(u(t — r))u(t) + a(r)g'(u(t — r))u(t — r), a.e. t E [0, T], (2.23) where tab/0 6(t+p)dnr(p). 6=f5—f0. (2.24) 22 Assume g’ satisfies (93') there exists a constant 51 := 51(1) > 0 such that |g'(x)| _>_ 51 > 0 for x E I. Remark 2.3.1. Theorem 2.2.1 is still true under a weaker hypothesis than (93), namely, (g3a) 3C1 > 0 such that |g(t,x) — g(t,y)| 2 cllx — y], fort E [0,T + R] and all x, y E I. This latter hypothesis implies (g3’) if g’ exists 9(x + h) — g(x) h regularization method has to utilize g’ term, it makes sense to assume (93’) in our on I since Z 51 > 0 for [h] sufficiently small. Because our local problem. By (g3’) and Inverse Function Theorem, we derive g‘1 E C1(D), where D := 9(1). Let v(t) = g(u(t)), for t E [0,T + R]. Motivated by equation (2.23), we will seek a solution v, v(t) E D a.e. t E [0, T], of the following equation: t~ _ [DST k1‘(t — s) [v(s) — v(s)] ds = (>+/ [up (s + t) delirlpl — a( W — r) — a(r)g’ (9 (v (t — T)))g 1‘0 (0) + “(09' (KI—1(1)“ — 7») 9—104t — T)), 01‘ t [0 kr(t — s) [v(s) — 6(3)] ds + a( r)[v(t) (t)]: Cr( (v)(t), a.e. t E [0,T], (2.25) 23 where for w E L00 ((0, T), D), Gr(w)(t) = a(T) [1110) - 27(1)] + 5,.(13) + fr jp k(P — 8)17’(3 + 0618 d77r(p) 0 0 — a(T)‘5(t — T) — a(T) [w(t - T) - “W — T)I - a«(T)9' (9—1000 - T))) 940110)) + a(r)g'(g"1(w(t —- r))) g'1(w(t — r)), a.e. t E [0,T]. (2.26) Since 6 E C[0,T + R], then Cr : L00 ((0, T), D) —) LOO(0, T). Define Br : L°°(o, T) —+ L°°(0,T), where t ~ Br(w)(t) 2: f0 kr(t — s)w(s)ds, a.e. t E [0,T], so that we can write (2.25) as: (a(r)I + Br) (v — v)(t) = Cr(v)(t), a.e. t E [0, T]. (2.27) The following lemma is obtained using Theorem 3.1 of [10]. Lemma 2.3.1. [10] The operator (a(r)I + Br) : L°°(0, T) -—> L°°(0, T) is invertible with (a(r)I+Br)-1 e £(L°°(0,T),L°°(O,T)) and, if ||k(V)||oo g c, for the (1 given in Theorem 2.1.1 above, then WMAI+BT )5 )‘1 Hr: (L°°(0, T), L°°(0, r) for r > 0 sufficiently small, where rn is independent of r. 24 Now we are ready to prove the main results. Theorem 2.3.1. Let u denote the solution of(2.19) given “true” data f E C[O, T+R] and let the same assumptions hold as in Theorem 2.2.1 for 11, kernel k and signed Borel measures {gr}. Let g E C1(I) with g’ bounded on I , where I C IR is an open bounded interval, assume g satisfies (g3’). Assume further that (94) there exists a constant N > 0 such that |g'(x) —g’(y)| S N|x — y] for x,yEI. Let R > 0 be sufficiently small and let T E (0, R] be arbitrary. Then there exists a 9 independent of r such that, if f‘5 E C [0,T + R] satisfies (2.24) with 6 S klrt‘ + V, then there is a unique solution v of (2.27) satisfying [[v — 17]] 3 9r”. Further, the mapping f‘5 E {w E C[0,T+ R], [lw — flloo _<_ 6} H v E L°°((0,T), D) is continuous for all r E (0, R]. Before proving Theorem 2.3.1, we need some lemmas. Lemma 2.3.2. If t7(x) = g(fi(x)), x E [0,T + R], then W1?) - T1(y)l S Ilg’lller - ill” for a.e. x, y E [0,T + R] and p defined in (2.4). Proof. We have v(x) - 17(3)) = 90163)) - 90101)) = g’(€(fi,x, y))(fi(x) - 1761)), where §(t‘i,x, y) E I since I is an open interval. Thus [v(x) — v(y)l S ”9'”;le — y]”, for a.e. x, y E [0,T + R]. C] 25 Lemma 2.3.3. Assume that 9 satisfies assumption (93’). Then [SJ—1&7) -9'1(y)| S [:17 - y] for x,y E D. S'IH Proof. As stated earlier, our assumptions on g give 9’1 E C1(D); further, D can be seen to be an Open interval due to the continuity of g and g‘l. For any x, y E D, |9‘1(~T) - 9‘1(y)l = (9")’(€(T, y))(x - y), where 6(3. 31) E D- But 1 g’(9‘“‘(€(:r, y))) 1 51 S |(g")'(€(rv,y))l = since g‘1(€(x,y)) E I. So N ow we are ready to prove the above theorem. Proof of Theorem 2.3.1. Since a(r) E I, so D(t) E D. Consider the ball M := {v E L°°(0, T) : ||v — 17H S 6rt‘} for some number 6 (independent of r) to be determined and [1 defined in (2.4). We claim that any v E M, we have v(t) E D for a.e. t E [0, T] when r > 0 is sufficiently small. Indeed, since 1‘) is continuous, the set R(v) = {v(t), t E [0,T]} is a closed bounded interval [a,b] in D. Since D is open, the interval [a — 9r)”, b + drf‘] C D for r > 0 sufficiently small. Therefore the claim is true. For v E M, C7-(v) E L°O(0,T), so it makes sense to apply the operator 26 (a(r)I + B'r)‘1 on Cr(v). Thus v = (a(r)I + Br)_1 Cr(v) + 17 = Hr(v), where Hr : LOO ((0,T), D) —» L°O(0, T), is given by Hr(v) := (a(r)I + Br)_1C7-(v)+ i3. 6 Our goal now is to show that there is a unique solution v7~ E L00 ((0, T), D) solving the equation: v = Hr(v), so that such a v will uniquely solve (2.27). We will prove by the contraction mapping theorem: so we want to Show that Hr : M —> M and is a contraction. First we show that Hr maps M to M for r > 0 sufficiently small. By Lemma 2.3.1 and for v E M, llHrtv) — an = n (a(r)I + Br)“ are)“ .<. H (a(r)I + BM ”5 (mm, ”(amylase)“ 1+m s a(r) llGr(U)ll' We will add and subtract a(r)r§(t — r), a(r)g’(g‘1(v(t - r)))g‘1(t7(t)) and a(r)g’(g‘1(v(t — r)))g‘1(t’2(t — r)), then regroup on the right hand side of (2.26), 27 we obtain )-—-t67~() )+/0 /0pk(p v()s+t)dsdr)7~(p)—a(r)17(t—r) + a(r) [v(t) — v ()1 — a< (Th/(9 (v(t — r)))g-kvtt» + a(r)g’(g"(v(t — avg-lav» — a(r)lv(t — r) — w — r)t + ag'(g-1>>g-‘(v)+ a(r)g’(g“(v(t — r>>>g-1(v 0 for r > 0 sufficiently small from Lemma 2.1.3. By (H3), we can show that 1 (Ti lat = S C6r0. [0 at +p)dnr(P) Now consider the second term on the right hand side of (2.28): let p(p, t) := f6) k(p — s)(t7(s + t) — 17(t — 7)) ds. By Lemma 2.3.2 p — Ipmvl s [O lk(p — snug'nizvts + r))”ds , - P s llkllrlly ”IN [0 (s + r)“ds S ||k]]rllg'ler2f‘rt‘+1 a.e. t E [0, T] and by assumption of the above theorem, p(-, t) E C [0, r] for any t E [0, T] Further, for any 3 E [0, r] k(s>=(,_1,,,s”‘1+RV_1(s>. where RV—1(3) = k(t?! )8“. O < 5 < 3. Therefore 1 ( ) V - V llkllr s (’1‘, _ 1), + “k V, ”Tr”. So for a.e. t E [0,T], _ 7.V—1 [k(t/)l I _ TV+ |P(p,t)| S |[9'll1N2/t7‘”+ 1W (1 + —| V IT’" 5 “9 ll1N2fl+1 _ 29 for r > 0 sufficiently small. So by assumption (H3) on the measure, we obtain 2 ~ ~ I _ 7.V+u+0' Tl kt) .<_ Cllpllrr" 3 Gig “1N2“ + 1W, for r > 0 sufficiently small. For a.e. t E [0, T], we use the fact that (g'l)’(x) = I for suitable x, to g (9‘1($)) write TQM) = a(r) [(v(t) - v(t))u — g’(g‘1(v(t — r)))(g‘1)'<€(v. .7, MI a r r)) g’(g“(€(vn7,t)))-g’(g“(v(t-T))) 5 ( )9 g’(g‘1(€(v,fi,t))) 6r“ < a(T)—.C.1-N| '1(€(v, v, t)) — .(I"l(v(t — T))l Tu = a(r)€_—-N-_-1-]€(v, 17, t) — v(t — r)[ Cl cl where we have used Lemma 2.3.2 and Lemma 2.3.3. Further, for a.e. t E [0, T], min{v(t), 17(t)} < {(vfl, t) < max{v(t)a 170)}, so [€(v, t“), t) — v(t)] 3 Br“ and €(v,D, t) E D for r > 0 sufficiently small. Therefore [filial-Tat) - v(t " 7)] _<_ [HUM—lat) — g(t)|+ [17(0- i7(t — T)| + Wt — T) - ”(t — T)l 5 0r“ +][g'||1Nr“ + 6r“ =2srlt+||g'||,Nr/‘, a.e. tE [0,T], 30 and thus a 41 _ Time) s a(r)—IQ—Neerr + llg’ller”) C 1 = a(r)%N(26 + (|g'n,N)r2#, a.e. t e [0,r]. (2.32) C 1 Similarly, for r > 0 sufficiently small, (4) _a, _, _,_, _T 9’(g“(€(v.t7..t-T)))-9’(g“(v(t-r))) T. (v— ()(lt ) (r. )) g,(g_,(,(v,,,,_m) 6 “N s a(r) ’12 lav. at — r) — v(t — r)) 01 62N S a(r)—31‘2“, a.e. t E [0,T], (2.33) c1 because [{(v, t“), t — r) — v(t — r)[ g [v(t — r) — v(t — r)[ 5 0r, a.e. t E [0,T]. Finally the last term on the right hand side of (2.28) is Time) = a(r) |g'13“ + a(T) 52 7‘2” 1 1 C‘ + a(r)]lg'IIINr“, for r > 0 sufficiently small. 31 Thus, we have from Lemma 2.1.3, 1+m a(r) llHr(v) - 17H S llGr(v)ll (1 + m) ~l~ — (cu/2V!)r" 1+m V ~ - + 2(# + zlL—C—‘LCHQ'HINTM V 2 6 — 6 N +(1+rn.)3N(20+|]9’]]1N)T2”+(1+m) _2 T?” C C 1 1 + (1 + "T)Ilg'ller“, for all r > 0 sufficiently small. Let 6 = 6(r) satisfy 6 g klrl‘ + V, for some k1 > 0. Then (1+m) .. ~l€1Vl —— < u ((tV/2VI)T‘V 06 - 2(1 + m)C CV r , for all r > 0 sufficiently small. So - ~k1v! U.+2,(1+m)u~ , — , - ,, llHr(v)-v||_<. 2(1+m)C—c—+2 —C—-Cllg ||)N+(1+m)||9ll1N r V V 9 - 62N + [(1+m)_—N(26+|]9’ll1N)+(1+m) _2 r2”. C C 1 1 To have |[Hr(v) — 17]] S 97‘“ for some 0 > 0 and all r > 0 sufficiently small, a sufficient condition is Q ~k ! 1 ~ - - 2(1+m)C—j” +2r+2—( f,m)”CIIg'ii,N+(1+m)ng'n.N < V ,1/ (\D 32 for r > 0 sufficiently small. So let -1, l .. _ _ o < 0 < 2(2(1+ nae—11 + 2” + 3( "Cllg’lliN + (1 + m)ll9'||1N). (2.35) 1 + m) Cu CV Then llHr(1’) — 17]] S 07'“ for r > 0 sufficiently small where 0 is defined by (2.35). Therefore HrzM—2M for r > 0 sufficiently small provided 6 = 6(r) satisfies 6 g klrf‘ + V for all such r. Now we want to show that for any v1,v2 E M = {v E L°O(0,T) : llv — ill 5 Or“), we have llHr(7~’1)— Hr(v2)ll S allvl “ v2”, for 0 g 0: < 1. Since 1 +772. a(r) we note that for r > 0 sufficiently small, we have for a.e. t E [0, T], llHr(v1) - Hr(v2)ll S llGr(v1) - Gr(v2)||, falGrlth) — Gr(v2)(t)l = [M0 - v2(t)l - [v1(t- T) - v2(t - T)I - g'(g'1(v1(t - T)))l9'1(v1(t)) - 9’1(v2(t))l + 9’(s‘1(v2(t - T)))[9'1('01(t - T)) - 9‘1(v2(t - T))l - g‘l('v2(t))l9'(g‘1(v1(t ~ T))) ._. .q'(9"(v2(t *- T)))I — y“(v1(l- r))[g'a-‘(vzu - r))) — g’tq-‘(vtv - T)))) 3 . = Z sl'la), (2.36) i=1 33 87(1)“): [v1(t) - v2(t) -. 0 sufficiently small we have 8%) = ((v1(t)— v2 0 is sufficiently small. Thus H)— is a contraction in the ball M for all r > 0 sufficiently small, provided 6 = 6 (r) satisfies 6 _<_ klr” + V for all such r. Therefore equation (2.27) has a unique solution v76~ E L00 ((0, T), D) and |va - all 3 6r“ where 6 is defined by (2.35). Now we show that this solution v5: depends continuously on the data f 6. Fix r > 0 sufficiently small and let 6 = 6(r) satisfy 6 S klr“ + V. Let fig, f3 E C[0,T + R] satisfy ns—nmsti=t2 Replace 6r(t) in equation (2.28) by 6,.‘1-(t) where 6r,,-(t) is defined as in (2.24) using 6-in fits instead of f 6 respectively for i = 1, 2. Then there exists a unique solution vr 1 ball M which is defined in Theorem 2.3.1 solving v = Hr, ,(v) respectively for i = 1, 2. 36 Further, using arguments similar to those used to prove HT is a contraction, llv§,1— vigil = NH. 1(1)? 1) — H. 26:22)“ 1+m <——— (T) ——uc.~ 1(1),. 1)— 0.3622)” 1+m 0"("lll1/7: 1_ v7: 2]] + a(r) for (fie + p) — rte + p)) a(r)] 2(1 + mot/inf? — rig)... s a(r)llv3,1 — v.22“ + CyTV , so 6 ’7") 1 T) 2 _ 1 - a(r) cer where a(r) E (0, 1) for this fixed r. Thus continuous dependence of solutions on data is obtained for equation (2.25). This completes the proof. E] Remark 2.3.2. The only new assumption we need for our theorem is assumption (94) on g’ which is not surprising since our theory use 9’ explicitly so we expect to have some assumptions on g’. Also our assumption (93’) is in fact weaker than assumption (g3a) (which could have been used in place of (93) in [11]) in the case when 9’ exists. Using (g3’) alone guarantees existence of a unique solution ué E L°°(0,T) which solves ur(t)— - g‘l(vr(t)) a. e. t E [0, T] where v5(t t) E D for a.e. t E [0,T]. Corollary 2.3.1. Assume '17., f, and g still satisfy the assumptions given in Theorem 2.3.1. Fork = 1, 2,. ,let f6k E C[O, T+R] satisfy (2. 24) with 16k > 0 where 6k1—> 0 as k —-> co and let rk- — rk(6k) > 0 be selected satisfying ()1ng < Tk— < d26k—-F_ for some constants d1, (12 > 0 and 6k -—> 0 as k —> 00. Then for k sufficiently large, 6 equation (2.22) has a unique solution ué’]; = u k(g ) E LO°((0,T), I) satisfying T k k a ..+ k—angcai‘ ” k as k ——> 00 for some C independent ofk and 6k. (2.40) 37 . 6 Further, the mapping fék E {w}C E C[0,T+ R],llwk — flloo 5 6k} )—> url’: E LOO((0,T), I) is continuous for all k sufi‘iciently large. Remark 2.3.3. The rate of convergence in (2.40) is in fact the optimal rate for local regularization of linear V-smoothing problems under the assumption of 12 H6lder continuous with Holder exponent n E (0, 1]. Proof of Corollary 2. 3. 1. By Theorem 2.3.1, for each fixed k sufficiently large, equa- tion (2. 27) has a unique solution vffiE EL°°((0, T). D) satisfying llvrk — vll < Brk. Therefore, we can define date) was )) and we obtain that uéflt) E I for a.e. t E [0, T]. Therefore, viro— 17(()-—-l lg 1.,, ))— g(u(t))] = lg (anti. 1-. t))(u‘l’;(t) — a(t))| 251 urk(t ()-a(t)l a.e. tE[0,T]. (2.41) So ur’;(t)——u(t)lgiliré:(t)—v(t) _<_£(rk)f‘:=C(rk)l”, a.e. tel0,T], ~ 6 where C = :—. The above is true for any k sufficiently large. Therefore the Cl approximate solution all" k converges to the true solution it with order (rkW in LOO-norm as k —> 00. Continuous dependence of all: on f 6k follows from continuous dependence of vll: on f 6" and estimates like (2.41). C] 38 CHAPTER 3 Hammerstein Problem with nonconvolution kernel 3.1 The regularized Hammerstein equation We study the following nonlinear Volterra problem: F u = f, (3.1) where F is the nonlinear operator given by t Fu(t) =/0 k(t,s)g(u(s)) ds a.e. t E [0,T], (3.2) where f E Range(F) Q L°O(0,T) for u E L°O(0,T) suitably defined. Here k(t, s) is called the nonconvolution kernel. We will assume the kernel k is a 1-smoothing kernel, that is kECl([0,T+R]x[0,T+RI), and k(t,t)7éO for tE[0,T+R]. 39 Without loss of generality, we assume k(t,t) = 1. For a large class of kernels k, the solution of (3.1) is a ill-posed problem due to the fact that the solution of (3.1) does not depend on data in a continuous way. It is for this reason that some kind of regularization of (3.1) must occur. In order to motivate our method, we will make the same type of assumptions that we have for the convolution problems in Chapter 2 for the nonlinear function g and the true solutions 11. That is, we will let g E 01(1). Assume the true solution 1’1 E C ([0, T + R], I ) of (3.1) satisfies Holder inequality (2.4) and u(t) E I for t E [0,T + R] . We will let r E (0, R] be a small parameter. Then using the same idea as in Chapter 2 for the nonlinear problem: we extend the integral slightly into the future, split the integral and do a change of variable to the second integral, we obtain t 9 f0 k(t + p, s)g(u(s)) ds +/0 k(t + p, t + s)g(u(t + 3)) ds = f(t + p) a.e. t E [0,T]. Then integrate with respect to a signed Borel measure {17,} which satisfies (H 1)-— (H 3), and change the order of integration to the first integral, we then obtain for a.e. te [0,T], [0 [0k tt+p.)p)gsdnkt ((su ))ds+[0 [k k(+p1t+s)9(u(t+s))dsdnr(p)=fr(t). where frIt) =fo f(t + p) d77r(P ) If we approximate k(t + p, t + s)g(u(t + 3)) by k(t, t)g(u(t)) = g(u(t)) in the second integral above, we then have an approximating equation t r f0/0k(t+018)d9r(p )9 (u (8))ds+a(r )9 (1 1a)): fr(t1) a.e. relax] where a( =[0 ptlnr (p) and f7. is defined by (2.9). If we linearize g(u(t)) at u(t —r), 40 we then obtain our regularization equation [[111111, )dnr(99u)((8))ds+a(r)l9(u(t-T)) g’(u (t —- r))(u(t) —- u( (t— 7' ))]= fr( t,) a.e. tE [0,T]. (3.3) From (H 1) we have T 1 1 1 11(1) =[0 pdnr(p )= + “(Cl + 0(1)) 2 Eclr +0 > o, (3.4) for r > 0 sufficiently small. The true solution 11 satisfies for a.e. t E [0, T], [0 [0 k(t+p,s)dnr(99)(u(8))ds + [0 [0 ktt+p.t+s)gtatt+s>>dsdn1tp)=f1tt> (3.5) Because the assumption on k limits the convergence theory to mildly ill-posed problems of solving for (3.1), we make the following remark. Remark 3.1.1. We note that this 1-smoothing assumption on k is standardly found in the theoretical convergence arguments for methods which preserve the Volterra nature of the original problem. The hypotheses of several well-known methods which preserve causality are discussed in [7/ and [8]. Local regularization theory has been extended to the linear nonconvolution problem using the assumption of k 1-smoothing [7]. In 2000, Lamm and Scofield observed that the theoretical assumption k(t, t) aé 0 does not appear to be needed in numerical method for the local regularization method they present for the linear problem. Numerical examples for k not satisfying the assumption k(t, t) = 0 may be found in [4]. Thus the 1-smoothing assumption in nonconvolution problems is more a theoretical limitation than a practical one. Here we extend the existing 41 theory for 1-smoothing nonconvolution linear problems to l-smoothing nonconvolution Hammerstein problems. Subtract (3.5) from (3.3) and regroup the terms by adding and subtracting, we obtain for a.e. t E [0,T], t r [0 /0 W + p’ 3) “WW [991(0) - 9(I‘I(S))I d8 + a(T)[9(II(t)) - g(a(1))] — —’a( ”QM )) ”HM (’l +/0 fopk (t+p1 t+ 9)9(I‘ 1(1+ 9))d8dnr(p) - a(T)9(fi(t - T)) - a(T)[9(U(t - T)) - 9(fi(t - T))l - a(T)9'(U(t - r))u(t) + a(T)9’(u(t - r))u(t - T) (3-6) where 8,.(1) is defined by (2.24) By (3.4), we know that a(r) > 0 for r > 0 sufficiently small. So for fixed r > 0 sufficiently small, we can divide a(r) on both sides of equation (3.6) to obtain for a.e. te [0,T], 1) [ [kt 111,1, dnktp [g(u -9(9( s))lds+l9(u(t))-9(9(t))l [g(u 1))—g(11()))+_(1__)5,.(1)) 337, [0 [0 k(t+p1t+s)9(9(t+s))dsdnr(p)l-9(I1(t-r)) —(g(11(1-— T)) - 9(1‘I(t - T))) - 9 ’(11(1 - T))u ( .1)+ 9 ’(11(1 - T))II(t - T) (3-7) Assume 9 satisfies (q3’) and let 17(1):— —— g(u(l )) for t E [0, T + R]. By (93’) and Inverse Function Theorem, we derive g‘1 E C1(D), where D := 9(1). Motivated by (3.7), for fixed r > 0 sufficiently small, we will seek a solution 1), v(t ) E D a. e. 42 t E [0, T] of the following equation: (31 + 002 — 1) = 11(1), (3.8) where Br : L°°(0, T) —1 L°°(0,T), (3.9) defined by Br(v)(t) := 71—) [0t [0 ktt + 1018) d91(p)v(s) ds. (3.10) If 6 E LOO(0,T + R), then F7- : L°°((0,T), D) -+ LOO(0,T) is defined by Fr(v)(t) := (v(t) — 1(1)) + 325319) 1 T P _ > + a—(f)/(‘) f0 k(t + p, t + s)v(t + 8) ds dmlp) — v(t —- r) — (v(t — 7') -— 17(t — r)) - 9' (9"1(v(t - T))) 9'1(v(t)) + 9’ (9‘1(v(t - T))) 9‘1(v(t - T)) 5,.(1) + for f6" (k(t + p, 1 + s)e(1+ s) — 17(1- 1)) 111 1111101) — a(r) a(r) + [v(t) — Wt) — 9' (9—104t — T))) (9—1040) - 9-107(0)” - ['v(t - T) - 9(t - T) - 9’ (9‘1(v(t - T))) (9"('v(t - T)) - 9"(T'(t - T)))] - 9’ (971(110— T))) (9'1(I7(t))- 9_1(17(t- T)))1 (3-11) for a.e. t E (0,T). 43 3.2 Convergence and well-posedness results Before we present our main results, we will study the properties of the operator Br + I first. Lemma 3.2.1. For any r > 0 sufficiently small, let Br be given by (3.9)-(3.10). If k E C1([0,T+ R] x [0,T+Rl), then the operator Br+I is invertible with (Br+I)—1 E $(LOO(0,T),L°O(0,T)) and there exists a constant C independent of r, such that ”(Br + Il—lllg (LOO(01 T), L°°(0,T)) S C for all r > 0 sufficiently small. Proof. For any r > 0 sufficiently small, by Taylor expansion ft)" k(t + 191 S)d9r(10) a(r) = ltilk(t18) + D1k(€(t1p)18)pld9r(p) 9(7) = W .1t1,1) 1 f6" 019925)).911119) = 5%,—S)“ + Kr“, S), where 6,. 1: fl = f(lpd’flrlp) f6 6911(9) [6 d77r(p) and 6101-9) ;= Id D1k(€(:(l:‘))13)PdTIr(P). Consider the equation (B.+1)(w)(1) f(1), a.e. tElO,Tl. (3.12) This is a second kind integral equation in w. If f E LOO(0, T), then 44 there exists a unique w E LOO(0, T) which solves equation (3.12), i.e. (Br + I)”1 : LOO(0, T) —+ L°O(0, T) [13]. This is true for any r > 0 sufficiently small. By definition [16" D1k(€(t1p)1s)pdnr(p)l ll-Y7‘(t,8)l 1' [It pdrirlp) 9191111111 +0 < 29101911 — r1+ ”(c1 + 0(r)) — C1 for r > 0 sufficiently small and t E [0, T + R]. Therefore, 2C'lllelloo llRTlloo S Cl From the proof of Lemma 4.1 of [9] we have - - 26' D k llwll S “lefllexp(||01)1||00 + 2%) = Cllf—IL . ._ 267 D k where C := 2exp (lllelloo + 2M2) independent of r. Since Cl llwll = |l(B1~ + I)‘1flls Cllfll, we obtain _1 e C] If 6 E 1.100(0, T + R), then by Lemma 3.2.1, for r > 0 sufficiently small, equation 45 (3.8) is equivalent to (11 — 11) = (31 + I)_1Fr(v), (3.13) 01' v(t) = Hr(v)(t), a.e. t E [0,T], (3.14) where Hr : LOO((0,T), D) —+ LOO(0,T), is defined by Hr(v) := (Br + I)-1Fr(v) +17. (3.15) N 0w we present our main results. Theorem 3.2.1. Let 1‘1 denote the solution of (3.2) given “true” data f E C [0,T+ R] and let the same assumptions hold as in Theorem 2.2.1 for 11 and signed Borel measure {gr}. Let g satisfy the some assumptions as in Theorem 2.3.1. Assume k is 1- smoothing, i.e. k E C1([0,T+ R] X [0,T+ R]) and k(t,t) = 1. Let R > 0 be sufliciently small and let r E (O, R] be arbitrary. Then there exists a 0 independent of 1 such that, if f5 e C[0,T + R] satisfies (2.24) with 8 g 1111!1 + 1, for 11 the Hélder exponent on 1'1, then there is a unique solution v of (3.8) satisfying llv — 17]] S 0r”. Further, the mapping f‘5 E {w E C[0,T+ R], [[111 — flloo S 6} 1—1 v E L°°((O,T), D) is continuous for all r > O sufficiently small. Proof. We will use the same type of arguments like in Theorem 2.3.1. That is, we will first define a ball M := {v e L°°(0. T) : llv - 9H S 914‘}, for some 0 independent of r and p E (0, 1] defined by (2.4), then use the Contraction Mapping Theorem to prove our result. 46 Since u(t) E I , so v(t) E D. By previous discussion in Chapter 2, we know D is an open interval and for any v E M, we have v(t) E D for a.e. t E [0,T] when r > 0 is sufficiently small. We will show that there exists a unique solution v solving the equation ’U 2 117(1)), so that such a U will uniquely solve equation (3.8). First we will show that Hr maps M into M. For v E M, by Lemma 3.2.1, Mme) — an = “(Br + I)"1Fr(v>n S ”(Br + 1)_1llg(LOO(0, T), LOO“), T))“Fr(vlll S éllFrWHI- For 1‘ > O sufficiently small, by equation (3.11), we have for a.e. t E [0, T], 5 (i) wow: .2: PT (t). i =1 where (1) ._ WI P. (t .— a(r) . (2) ._ [f5 [6’ (k(t + p,t + s)17(t+ s) —- v(t — T)) dsdn,~(p)l Pr (t) .— , a(r) . P5300) := [v(t) - 17(0- 9' (9"1(v(t - T))) (9-1040) - 9'1('l7(t))) , PW!) == IUU - T) - W - T) - 9’ (9—104! - T))) (9’1('U(’- - T)) -!1'1(27(t- T)))Ia 105%) := Ig' (g-‘(ve — 7») (9'1(17(t>)— g-W — r)))l, 47 and where we have used the fact that a(r) > 0 for r > 0 sufficiently small. By (H3), we have (1) If; 6(t +p)d77r(p) 55.0 265 PT (I) S 1 -<— 1 = §C1T1+o §c1r1+0 617‘ for r > 0 sufficiently small. If 6 g klr“ + 1, then 2Ck1 P£1)(t)SM1r” where M1:= c 1 (3.16) Now consider the integrand of P,£2)(t). We have for t E [0, T], s, p E [0, r], |k(t + p, t + s)u(t + s) — v(t — 7)] S |k(t + p, t + s)u(t + s) — k(t + p,t + s)v(t — T)I + |k(t + p, t + s)u(t — r) — k(t + p, t)v(t — T)I + [k(t + p, i)v(t — r) -— k(t, t)o(t — r)| S ||k||oo||g’||1N2“r“ +ll02kllooT|l9H1 + ||D1k||oorllgl|1 = “7127’” + M31“, (3-17) where Mg := 2fl||klloo||g’||1N, 11713:: ||D2klloo||gl|1 + ||D1k||oo|lg||1, and where we have used Lemma 2.3.2. Therefore by (H3), for r > 0 sufficiently small, "MIL 1V1 -0 261F411 A7 _C r1 + o 01 2 1 26337 201?! where A12 := 2, M3 := 3. C1 C1 Notice that 13793)“), R,(.4)(t) and P,(.5)(t) are exactly the same as T£3)(t), T54)(t) 48 and T76) (1) respectively defined by (2.29) — (2.31), under the same assumptions on g .. 1 . and it, except for a factor of a(r), i.e. P£Z)(t) = —T,gz)(t) for i = 3, 4, 5. Therefore a(r) by similar arguments, we derive for r > 0 sufficiently small and a.e. t E [0,T], 0 _ 14%): 3N(29 +Ilg’II1N)r2“, (3.19) 61 2 144%,) < 6:12”, (3.20) C 1 14%) s Ilg’ller“ = My“, (321) and where M4 := “Q’IIIN. Therefore by (3.16)-(3.21), we have ”117(1) — an s C[(Ml + big + M4)r# + My + o(r’”‘)]. For r > O sufficiently small, to have |]H7~(v) — 27]] S 6r“ for some 0 > 0, a sufficient condition is A C(M1+M2+M3+M4) < (\DIQD So let 9 := 26(1111 + 1112 + M3 + M4), then we have |]Hr(v) -’l7|] S 01'” for r > 0 sufficiently small. Therefore Hr : M —> M. Now we want to Show for any v1, 122 E M = {v E LOO(0, T) : ]|v -— 27]] S Orff}, we have ||Hr(v1) — Hr(v2)|] S 0]le — v2” for O S a < 1 and r > O sufficiently small. Since ||H,.(v1) _ 111.09)” g (2* Fr(vl) — Fr(v2)|], using similar computations as in 49 Chapter 2 to derive (2.36), we have 3 (i) Fr(v1)(t) — Fr(v2)(t) = 2 3 Sr (t). (3.22) i=1 where 87(1) are defined by (2.37), (2.38) and (2.39) for i = 1, 2, 3 respectively. There- fore , “1’1 -v2]] )1 I ‘ ,u IlFr(v1) — ”(12)” _<_ —E2——N(3‘9T + ”9 “INT ) 1 N u 1 , ‘ u +Ilv1- toll-55261" + 3(26 +||9|l1NlNT uvl — v2“ 1 Ci Nrf‘ I _ = ”'01 — ell-306 + 2]]9H1N) Cl = fi(7‘)llv1 - v2”, N741 , _ where fl(r) := _2 (76 + 2|]g ]]1N). C1 80 we have IIHT(v1) —- Hr(v2))|| s OIIFr(v1)— Fr O is sufficiently small. Thus equation (3.8) has a unique solution vé E L00 ((0, T), D) in ball M for r > O sufficiently small. For the proof of continuous dependence on the data, by the same type of arguments 50 that we have used in the proof of Theorem 2.3.1. Let U? i denote the solution of (3.8) associated with data f? and Hf;S — flloo S 6 , i = 1, 2. For fixed r > 0 sufficiently small, we obtain ((1)21 — vigil = “Hr, 10$, 1) — 11.3322)“ S CllFr, 1(1)? 1) — Fr,2 0 sufficiently small. Therefore, continuous dependence of solutions on data is obtained for equation (3.8) for r > 0 sufficiently small. El Corollary 3.2.1. Assume all the assumptions hold as in Theorem 3.2.1. Then for 1 1 rk = rk(dk) > 0 selected satisfying dldz‘q‘. S rk S dgdzr" for some constants d1, d2 > 0 and for 5k —+ 0 as k -> 00, equation (3.3) has a unique solution (if: = 145.20%) E L°O((0,T), I) satisfying 6): Elli—V as k —> 0 for some constant 5 independent ofk and 5k. Further, the mapping Mk 6 {wk E C[0,T+ R], “wk — fHOO S 5k} ,_, u‘lk 7,. e L°°((0,T).I) 51 is continuous for all k sufficiently large. The proof of the above corollary is similar to the proof for Corollary 2.3.1. 52 CHAPTER 4 Discretization and Numerical Implementation 4.1 u-Smoothing Convolution Kernel We first consider the implementation for our regularized equation with V-smoothing convolution kernel. Recall the regularized equation is t [0 Mt — s)g(u(s)) ds + a(r)g' t'l', that is t N N - lim / k7-(t—s)g Z cjxj(s) ds+a(r)g Z cjxj(t) = lim f2“), tat? 0 '=1 '=1 t——>t,’ or [$1 ~ ~ gel) [0 W1 -— s) as + a(r)g 0, aft) = 0 cos2t = 0, —1 + c0321. cos2t < 0, for t E [0,10]. And we choose our nonlinear function g to be g(u) = u + u3. Below are three pictures corresponding to three relative errors. See Figures 4.1—4.3. See [18] for a comparison of local regularization to Lavrentiev regularization 0n the same example. 60 /} I ___,_,,4 —- — T ..-1‘ _‘i r*' on. ‘ H v... .— __.‘ I \ ___ \ A» Figure 4.1. Example 1 (a 1-smoothing kernel): solution with regularization, 6 = 10%, N = 1000, R = 45. 2.5 *- 1.5" 0.5 ~ -0.5 r -1.5- -2 _ ‘25 r If ...," f’ - _.-——'-— - .- ,/ \ “a. 4!. 1..” I \ x b’." “._ Q ’ — . _, ..-- - _ 4 IA“, ’4 ..‘_ r—. .«——-I.‘ “-1 / ,7— .f” Figure 4.2. Example 1, continued: solution with regularization, 6 = 5%, N = 1000, R235. 61 2.5 - 1.5 ' \~. \ \. / x 0.5 ~ -0.5 _ - rr-—-—-_. R 9....” u—f‘. ".2. H-1 5'..— -1 ,. -1.5 ~ \. -’ \ -2 _ u -2.5 *- “\ _ V‘L- ._.__——. _- Figure 4.3. Example 1, continued: solution with regularization, 6 = 1%, N = 1000, R=20. Example 2. In this example, we consider a 3-smoothing kernel k(t) = 0.5t2, with the true solution 11 = 8(t — .4)2 + 1, t 6 [0,1], and g(u) = u3. Compare to the same example handled by solving a nonlinear equation for every 2', i = 1, ..., N, in [11]. See Figures 4.4—4.7. 62 4.5 * K \\ 3.5 * .\\\ 2.5 ~ 0’ Figure 4.4. Example 2 (a 3—smoothing kernel): solution with regularization, 6 = 5%, N=60,R=20. 4.5 ’- 3.5 P 2.5 " /’/V 0.4 0.6 0.8 1 Figure 4.5. Example 2, continued, solution with regularization, 6 = 1%, N R = 11. 63 60, 4.5 r 3.5 ' 4/ 2.5 ~ / 1.5" V\\\ 4/ 0.5 * Figure 4.6. Example 2, continued, solution with regularization, 6 = 0.1%, N = 60, R = 7. 64 4.5 - 3.5 - ." Figure 4.7. Example 2, continued, solution with regularization, 6 = 0%, N = 60, R = 3. Example 3. In this example, we still consider the kernel k(t) = 0.5t2, and function g(u) = u3 with discontinuous true solution. 0.9 1 . ——-—(0.3)1/2t2 1f ost<0.3, a(t)= t+1.2 if 0.3gz.<0.6, ' 15 81 __ _ 'f . /;‘"r‘*: f' ‘_\ \ 1.5 “ ’ 7/ \ \ 1 ,’ \ V “ ‘.\ 1 _ ,1 ‘-‘ \ //’/ J -\ (“A 2 ”// ‘1 \ \ K‘hk 0.5 - ,z/ \ J/fi \\ o . '05 " 0 0.2 0.4 0.6 O 8 1 Figure 4.9. Example 3, continued, solution with regularization, (5 = 0.3%, N = 100, R=12. 66 2.5 *- 2 . , . 4"’ /~\\ 1.5 ~ ’7'er \‘\ If: \ i “ 1 1' \\ r ‘ It” \ / \\ 0.5 ' /’/ \‘ f V. .1." ‘ f 0 b -0.5 0 0 2 0.4 0 6 0 8 1 Figure 4.10. Example 3, continued, solution with regularization, 6 = 0%, N = 100, R = 4. Example 4. We consider a 1-smoothing kernel k(t) = 1 and the same nonlinear function g(u) = u3 as in the above example. The true solution is a periodic function u(t) = sin(2t) + 2, for t 6 [0,10]. Remark 4.3.2. Notice that this true solution it is similar to the true solution a as in Example 1 with the same kernel k(t) = 1 fort 6 [0,10]. However, this true solution is hard to recover and it is due to the fact that the nonlinear function g in Example I guarantees that |g’(u(t))| 2 1 > 0 no matter what u(t) is fort 6 [0,10], while in Example 4 if u gets close to the t-axis due to measurement error, then g’ gets close to 0. Therefore, in Example 4, really small errors are needed in order to keep 9’ bounded away from 0. See Figures 4. 1 1—4. 13. 67 3.5 > 3 r .. If r’fw‘ / \ f \ \ ‘ 2|“ I I I \ ‘ I L“ l ‘ \ / I ‘ I 2 5 7 , l ‘2‘ I ‘ l I \ l V l_! \\ 7 , \\ I t i r. c 2 '- ‘ 1 'v. I", u l‘ r‘ "a. .J m. . x v' :. \‘- I i. l" l - l . l ‘, , ‘ . 1 J; \ t , 1.5" \\ I" \‘J'J, l I \ '1’ I \l g" \ L . i’ I ,' I m/ ‘ l "g, \ 7 \ / D- ‘ M .‘ v 1“ 0.5 - 0 2 4 6 8 10 Figure 4.11. Example 4 (a 1-smoothing kernel): solution with regularization, 6 = 0.05%, N = 200, R = 11. 3.5- 3 b l/ \ 4-\ '- ‘ , 3 I I \\ If \ [I \ l, l- " \~ \. ('1 \ l 2.5 , , / , .\ ,’ \ / \ / i l 2' l / \ l 1 1 \{ \‘i I]; \‘ f , \‘x 5 ~ a 15* “,x. ./ “x. .i ‘6 I \\ 4 \\ . f \ \i ,2!” 1 _ §\/ Ct], ‘61.], 0.5- 0 2 4 6 8 10 Figure 4.12. Example 4, continued, solution with regularization, 6 = 0.005%, N = 200, R = 7. 68 3.5 r If \i l \ ’1 ls / 2.5-,‘ ' / . r \, 1.5- i \ ‘3 ' \‘x. / ‘- I, \\ / 0.5 - Figure 4.13. Example 4, continued, solution with regularization, 6 = 0%, N = 200, R = 2. Example 5. We consider a 2-smoothing kernel k(t) = t and the same nonlinear function g(u) = u3 as in the above example. The true solution is a(t) = sin(2t) + 2, for t E [0, 10]. See Figures 4.14—4.16. 69 3.5 r 3 P I, \ I , \ I a \ I t ’ “\\ I \ I \ A" I: . ’4 i ,-I"‘- ‘l/V \\ .' 2‘5 bi” \\ I ,1, " ‘x I I; \ 'J‘\ .l/ \ / ‘ K I .J \ i l . I \ I ‘ ‘. \ v\ ,J ‘ \ I' l \ U ’ j l i 2 i- \ \1 / \ ‘ _ ,5, ’ \ \ Ill \ /’ ‘ I/l h‘J \ l I " 1 \ N ' - I- \/ I l \ '.-' a ,’ 1.5 p \ I ‘ I \ ‘ ‘.\ Ill \\ I \ I \ ‘V/ " \ \ I 1 1 r \ l \ I .. ii. 1.5 u 0.5 - l 0 2 4 6 8 10 Figure 4.14. Example 5 (2-smoothing kernel): solution with regularization, 6 = 0.005%, N = 200, R = 27. 3.5 - 3 " / \ r \ / r- \ I I ,2 \ ’ , -2. V, V I / \. I / \\ l’ 2.5 t , ‘1. // I, \ I/ ._‘ , \ \ / ‘ I‘ I . ,1 i." [I \ 1 ll \ ' \ ) I \ . \ I/ , I 2 >- \ l \ \ I ‘ \ ‘ \. \ i \ ‘l‘ \ \ j \ 1\ n 1.5 '- \ ‘2': \ 2" " \ ‘3 ’/ x \ . \ ‘2 I \ \ I \ ‘1 4 \ K .1 \ K \ | ll / l 1 r- \ \ 4&5! a 0.5 - 0 2 4 6 8 10 Figure 4.15. Example 5, continued: solution with regularization, 6 = 0.0005%, N = 200, R = 21. 70 3.5 - 2.5 - f \ [I \ \ Figure 4.16. Example 5, continued, solution with regularization, 6 = 0%, N = 200, R = 3. Example 6. We consider a 3-smoothing kernel k(t) = 0.5t2 and the same nonlinear function g(u) = u3 as in the above example. The true solution is a(t) = sin(2t) + 2, for t 6 [0,10]. See Figures 4.17—4.19. 71 3.5 - Figure 4.17. Example 6 (3-smoothing kernel): solution with regularization, 6 = 0.001%, N = 200, R = 42. 3.5 - 2.5"; \\\ I x \ ~\ I Figure 4.18. Example 6 (3—smoothing kernel): solution with regularization, 6 = 0.0001%, N = 200, R = 35. 72 3.5 r 2.5 ~ ' l 2 ‘ . I’ 2". 1.5 - \. f \ (I, ll 1" 0.5 - Figure 4.19. Example 6 (3—smoothing kernel): solution with regularization, 6 = 0%, N = 200, R = 4. Example 7. We consider the 1-smoothing nonconvolution kernel k(t, s) = is + 1, and g(u) = u3 with true solution 1/0.151, 0 g t < 0.15, -—10/3t+1.5, 0.15 g t < 0.3, 5(1) = 0.5, 0.3 g t < 0.5, 7.5t — 3.25, 0.5 g t < 0.7, -—20(t — 0.8), 0.7 g t g 1. See Figures 4.20—4.22. 73 2.5 - 1.5 . / ~\., Figure 4.20. Example 7 (l-smoothing nonconvolution kernel): solution with regular- ization, 6 = 5%, N = 100, R = 7. 2.5 - Figure 4.21. Example 7, continued: solution with regularization, 6 = 1%, N = 100, R = 5. 74 2.5 r / (”5, I \‘x 1.5 b I, v“ .' l I ‘. I l“, 1 fi\ I K I I l\ \ I ‘l o 5 ,I — W-J g [I ‘. 4 1‘ 0 ~’ -O.5 Figure 4.22. Example 7, continued, solution with regularization, 6 = 0%, N = 100, R = 2. Example 8. We consider the same 1-smoothing nonconvolution kernel k(t,s) = ts + 1, as in the above example with continuous true solution a(t) = —3t + 5, for t 6 [0,1], and g(u) = e“. See Figures 4.23—4.26. 75 5.5 F 4.5 r 35 " \ 2.5 . Figure 4.23. Example 8 (l-smoothing nonconvolution kernel): solution with regular- ization, 6 = 5%, N = 100, R = 65. Figure 4.24. R = 53. 5.5 . 4.5 "fi“"*—‘“‘~\\‘§\.-:N\\ 2.5 ~ 1.5" 0.4 Example 8, continued, solution 76 315i ‘ Q 0.6 0.8 1 with regularization, 6 = 1%, N = 100, 5.5 ~ 4.5 " -kl~“'\ \ rs. 3.5 \ \‘\_\ \ 2.5 - “\ Figure 4.25. Example 8 continued: solution with regularization, 6 = 0.1%, N = 100, R = 25. 77 5.5 - 4.5 r \‘x 3.5 ~ \\ 2.5 ~ \\. Figure 4.26. Example 8, continued, solution with regularization, 6 = 0%, N = 100, R = 2. 78 BIBLIOGRAPHY 79 BIBLIOGRAPHY [1] J.V. Beck, B. Blackwell and CR. St. Clair, Inverse Heat Conduction, Wiley- Interscience, New York, 1985. [2] R. Miller, Nonlinear Volterra integral equations, W. A. Benjamin, Menlo Park, CA, 1971. [3] W. Ring and J. Prix, Sequential predictor-corrector regularization methods and their limitations, Inverse Problems 16 (2000) 619-634. [4] P. K. Lamm, Approximation of ill-posed Volterra problems via predictor- corrector regularization methods, SIAM J. Appl. Math., 195 (1995) 469—494. [5] P. K. Lamm, Future-sequential regularization methods for ill-posed Volterra equations: applications to the inverse heat conduction problem, J. Math. Anal. Appl, 56 (1996) 524-541. [6] P. K. Lamm, Regularized inversion of finitely smoothing Volterra operators: predictor-corrector regularization methods, Inverse Problems 13 (1997) 375-402. [7] P. K. Lamm and T. L. Scofield, Sequential predictor-coreector methods for the variable regularization of Volterra problems, Inverse Problems 16 (2000) 373- 399. [8] P. K. Lamm, A survey of regularization methods for first-kind Volterra equations, Surveys on Solution Methods for Inverse Problems, ed. by D. Colton, et al, (2000) 53—82. [9] P. K. Lamm and T. L. Scofield, Local regularization methods for the stabilization of linear ill-posed equations of Volterra type, Numerical Functional Analysis and Optimization 22 (2001) 913-940. [10] P. K. Lamm, Full convergence of sequential local regularization methods for Volterra inverse problems, Inverse Problems 21 (2005) 785-803. 80 [11] P. K. Lamm, Z. Dai, On local regularization methods for linear Volterra equa- tions and nonlinear equations of Hammerstein type, Inverse Problems 21 (2005) 1773-1790. [12] Z. Dai and P. K. Lamm , Local regularization for the nonlinear inverse autocon- volution problem, SIAM J. Num. Analy., Submitted 2006. [13] Gripenberg C, London S O and Staflens O 1990, Volterra Integral and Functional Equations (Cambridge: Cambridge University Press) [14] H. ZEGEYE, Iterative solution of nonlinear equations of Hammerstein type, Journal of Inequalities in Pure and Applied Mathematics 4 Issue 5, Article 92, (2003). [15] J. M. Bownds, On numerically solving nonlinear Volterra integral equations with fewer computations, SIAM J. Numer. Anal. 13 No.5 (1976) [16] K. Deimling, Nonlinear Volterra integral equations of the first kind, Nonlinear Analysis: Theory, Methods, and Applications 25 (1995) 951-957. [17] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands (1996) [18] F. Perri and 1. Pandolfi, Input identification to a class of nonlinear input-output causal systems, Comput. Math. Appl. 51 (2006) 1773-1788. [19] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston (1984). [20] C. Corduneanu, Integral equations and applications, Cambridge University Press, Cambridge, 1991. [21] H. W. Engl and P. Kiigler, Nonlinear inverse problems: theoretical aspects and some industrial applications, Multidisciplinary Methods for Analysis, Optimiza- tion and Control of Complex Systems, V. Capasso, J. Periaux, eds., Springer- Velag Berlin Heidelberg, New York, Series Mathematics in Industry, (2005), 3-48 [22] M. M. Lavrentiev, V. G. Romanov and S. P. Shishatiskii, Ill-posed problems of mathematical physics and analysis, Translations of Mathematical Monographs 64, Amercan Mathematical Society, Providence, RI, (1986). [23] G. Gripenberg, Periodic solutions of an epidemic model, Journal of Mathematical Biology, 10 No. 3, (1980). 81 [24] F. Brauer and C. Castillo-Chavez, Mathematical Models in Population Biology and Epidemiology, Springer. [25] J. A. Dixon, A nonlinear weakly singular Volterra integral equation arising from a reaction-diffusion study in a small cell J. Comput. Appl. Math, 18 (1987) 289-305. [26] M. Hanke, A. Neubauer and O. Scherzer, A convergence analysis of the Landwe- ber iteration for nonliknear ill-posed problems, Numer. Math, 72 (1995) 21—37 [27] M. Hanke, Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems, Numer. Funct. Anal. Optim, 18 No. 9—10, (1997) 971-993. [28] M. Hanke, A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems, Inverse Problems, 13 No. 1, (1997) 79- 95. [29] R. Ramlau, A modified Landweber-method for inverse problems Numer. Funct. Anal. Optim, 20 No. 1-2, (1999) 79-98 [30] B. Kaltenbacher, Some N ewton-type methods for the regularization of nonlinear ill-posed problems, Inverse Problems, 13 No. 3, (1997) 729-753. [31] Bakushinskii, The problem of the convergence of the iteratively regularized Gauss-Newton method, Comput. Math. Math. Phys, 32 (1992) 1353-1359. [32] Bakushinskii, On convergence rates for the Iteratively regularized Gauss-Newton method ,IMA J. Numer. Anal, 17 No. 3 (1997) 421-436. [33] F. Liu and M. Z. Nashed, Convergence of regularized solutions of nonlinear ill-posed problems with monotone operators, Partial Differential Equations and Applications, ed. by P. Marcellini et al, Dekker, NY, (1996) 353-361. 82 IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII lIr]]I]]I]]]I]]I]]I]I]]I]]]u