This is to certify that the dissertation entitled

A DISCREPANCY PRINCIPLE FOR PARAMETER SELECTION IN LOCAL REGULARIZATION OF LINEAR VOLTERRA INVERSE PROBLEMS

presented by Cara Dylyn Brooks has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mathematics.

Major Professor's Signature          Date

A DISCREPANCY PRINCIPLE FOR PARAMETER SELECTION IN LOCAL REGULARIZATION OF LINEAR VOLTERRA INVERSE PROBLEMS

By

Cara Dylyn Brooks

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mathematics

2007

ABSTRACT

A DISCREPANCY PRINCIPLE FOR PARAMETER SELECTION IN LOCAL REGULARIZATION OF LINEAR VOLTERRA INVERSE PROBLEMS

By

Cara Dylyn Brooks

We consider the problem of solving a linear, first-kind Volterra convolution equation with finitely smoothing kernel. Since the problem is ill-posed, we approximate its solution using local regularization.
In [22], sufficient conditions were established for local regularization of the problem under which regularized approximations based on exact data in C[0,T] were shown to converge uniformly to the problem's true solution. An a priori strategy was also provided for choosing the regularization parameter, for which uniform convergence of approximations made with perturbed data in C[0,T] was also guaranteed. However, until now, no a posteriori regularization parameter selection criterion existed that could be paired with local regularization with convergence of the resulting method proved. We supply this missing piece by defining a new discrepancy principle for selecting the constant-valued regularization parameter based on measured data and the known level of noise. We establish sufficient conditions for the local regularization scheme, based on those in [22], so that when paired with our discrepancy principle, we are able to prove uniform convergence of approximations made with perturbed data in C[0,1] to the true solution in C[0,1] as the noise level shrinks to zero.

We also extend the theory of local regularization to address the case when the linear Volterra convolution operator with finitely smoothing kernel is defined on the space L^p[0,1], 1 < p < ∞. We amend our conditions slightly and prove them sufficient for L^p-convergence of approximations based on exact data in L^p[0,1], and we provide an a priori rule for selecting the regularization parameter given perturbed data in L^p[0,1]. We redefine our principle and again establish sufficient conditions on the local regularization scheme so that when paired with the principle, approximations based on perturbed data in L^p[0,1] converge to the true solution in L^p[0,1] as the noise level shrinks to zero. For both the C[0,1] and L^p[0,1] cases, we provide a rate of convergence. Numerical examples are provided to illustrate the method's effectiveness.
Our principle is found to be a natural complement to the existing theory in C[0,1] as well as its extension to L^p[0,1], 1 < p < ∞. This is an initial, yet fundamental, step in the development of a posteriori principles for use with local regularization in solving linear and eventually nonlinear Volterra equations.

To G.M. and G.P.

ACKNOWLEDGMENTS

I first want to thank my advisor, the most amazing person I have ever known, Patti Lamm. Without you, this would not have been possible. Since my first day at MSU, you have always inspired me to learn more, to work harder, and to become stronger mathematically. Your kindness convinced me to keep going and your tireless generosity helped me to finish. Thank you for seeing potential in me, taking me as your student, being patient with my development, and investing in me your valuable time and resources. From you, I have gained so much insight and understanding about mathematics, and to you, I am forever indebted.

I thank my mom, dad, grandma, and grandpa for encouraging me to pursue my dream. Thank you for your patience and believing that I could do this, and for financially supporting me all of these years. I also thank my sister and friends who loved and cared for me during the ups and downs of what seemed like my never-ending educational pursuit.

I thank my fiancé, best friend, and math buddy, Alberto, who has had nothing but a sincere desire for me to succeed. You taught me how to study math and helped me to learn analysis. You were my Apollo. These last four years, you constantly challenged my understanding and gently pushed me to work harder. I would not have come this far without you.

I also thank my committee members Drs. C. Chiu, K. Promislow, Z. leOIl, and B. Yan, and my teachers: Drs. Gian Mario Besana, David Folk, and Clifford Weil.

TABLE OF CONTENTS

LIST OF TABLES .... viii
LIST OF FIGURES .... ix
INTRODUCTION .... 1
1 BACKGROUND
1.1 The Linear ν-smoothing Volterra Problem ....
5
1.2 Regularization and A Priori Parameter Selection ....
1.3 A Posteriori Parameter Selection .... 13

2 THE THEORY OF LOCAL REGULARIZATION IN C[0,1] .... 18
2.1 Local Regularization in C[0,1] .... 23
2.1.1 The Approximating Equation .... 23
2.1.2 Properties of m- .... 25
2.1.3 Properties of XT .... 34
2.1.4 Uniform Convergence with A Priori Parameter Selection .... 50
2.2 A Discrepancy Principle for Local Regularization Given f^δ ∈ C[0,1] .... 56
2.2.1 Preliminaries .... 56
2.2.2 Definition and Properties .... 61
2.2.3 Uniform Convergence .... 75

3 EXTENSIONS OF THE THEORY OF LOCAL REGULARIZATION .... 84
3.1 Extensions to L^p[0,1], 1 < p < ∞ .... 86
3.1.1 The Approximating Equation .... 86
3.1.2 Properties of n,- .... 87
3.1.3 L^p-Convergence with A Priori Parameter Selection .... 99
3.2 A Discrepancy Principle for Local Regularization Given f^δ ∈ L^p[0,1], 1 < p < ∞ .... 106
3.2.1 Preliminaries .... 106
3.2.2 Definition and Properties .... 111
3.2.3 L^p-Convergence .... 122

4 Discretization and Numerical Results .... 132
4.1 One-smoothing Problem, Continuous Measure .... 132
4.2 Four-smoothing Problem, Lebesgue Measure .... 138
4.3 Four-smoothing Problem, Continuous Measure .... 143
4.4 Two-smoothing Problem, Discrete Measure .... 148

BIBLIOGRAPHY .... 153

LIST OF TABLES

4.1 Example 4.1 Error Analysis .... 134
4.2 Example 4.2 Error Analysis .... 139
4.3 Example 4.3 Error Analysis I .... 144
4.4 Example 4.3 Error Analysis II .... 144
4.5 Example 4.4 Error Analysis ....
149

LIST OF FIGURES

4.1 One-Smoothing Problem with Continuous Measure given in Example 4.1 with 5% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.35 ....
4.2 One-Smoothing Problem with Continuous Measure given in Example 4.1 with 2.5% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.25 ....
4.3 One-Smoothing Problem with Continuous Measure given in Example 4.1 with 1.25% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.183 ....
4.4 One-Smoothing Problem given in Example 4.1 with 1.25% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and the Solution with No Regularization ....
4.5 Four-smoothing Problem given in Examples 4.2 and 4.3 with 0.1% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and the Solution with No Regularization ....
4.6 Four-smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.1% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.69 ....
4.7 Four-smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.05% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.58 ....
4.8 Four-smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.025% Relative Error in the Data. Plots of ū(t) = 1 + 3t|sin(10t) − sin(t)| and u_{r(δ)}^δ with predicted value of r(δ) = 0.51 ....
Plots of 17(t) = 1 + 3t Isin(10t) — sin(t)] and trim) with predicted value of r(6) = 0.69 ............................ 146 4.10 Four-smoothing Problem with Continuous Measure given in Example 4.3 with 0.05% Relative Error in the Data. Plots of 17(t) = 1 + 3t Isin(10t) — sin(t)I and trim) with predicted value of r(6) = 0.58 ............................ 147 4.11 Four-smoothing Problem with Continuous Measure given in Example 4.3 with 0.025% Relative Error in the Data. Plots of 27(t) = 1 + 3t Isin(10t) — sin(t)] and “$05) with predicted value of r(6) = 0.51 ........................... 148 4.12 Two-smoothing Problem with Discrete l\v’Ieasure given in Example 4.4 with 1% Relative Error in the Data. Plots of the step function 11 and 113(6) with predicted value of r(6) = 0.0233 .............. 150 4.13 Two—smoothing Problem with Discrete Measure given in Example 4.4 with 0.5% Relative Error in the Data. Plots of the step function 21 and ”3(6) with predicted value of r(6) = 0.015 ............... 151 4.14 Two-smoothing Problem with Discrete Measure given in Example 4.4 with 0.25% Relative Error in the Data. Plots of the. step function 21 and ufw) with predicted value of r(6) = 0.010 ............ 152 Introduction Over the last few decades, a variety of applications emerged that require solving inverse problems and thus accurately approximating their solutions is of great interest to the scientific community. For example, the need to solve an inverse problem arises in areas such as tomography, image reconstruction, and remote sensing. The difficulty lies in that. these problems are frequently ill-posed, either due to non-existence or non- uniqueness of a solution, or due to a lack of continuous dependence of a solution on data. Issues of existence and uniqueness can often be handled, though the problem’s ill-poscdness, due to the failure of solutions to depend continuously on data, is much more problematic. 
As is the case in practice, we are given measured data that always has an element of error associated with it, thought of as a slight perturbation of the true data or a version corrupted by noise. The problem's lack of stability means that a solution obtained using noisy data can have arbitrarily large error.

Let A : X → X be a compact linear operator from a Banach space X into itself. We may consider an inverse problem as the abstract operator equation

    Au = f,

which we would like to solve for u ∈ dom(A), where f ∈ X is the data and the operator A represents a known relationship between the variables derived from the physics of the problem.

In this thesis, we consider specifically the problem of solving a linear first-kind Volterra integral equation with finitely smoothing kernel. We take A : X → X to be a linear Volterra convolution operator from a Banach space X into itself defined as

    Au(t) := ∫_0^t k(t − s) u(s) ds,   a.e. t ∈ [0,1],   (1)

where the kernel k ∈ C^ν[0,1] is ν-smoothing, ν ≥ 1, meaning

    k^(ℓ)(0) = 0,  ℓ = 0, 1, ..., ν − 2,  and  k^(ν−1)(0) ≠ 0.

We would like to solve

    Au(t) := ∫_0^t k(t − s) u(s) ds = f(t),   a.e. t ∈ [0,1],   (2)

for u ∈ dom(A) given f ∈ X.

The need to solve a linear first-kind Volterra equation with finitely smoothing kernel arises in various applications. For example, determining the ν-th derivative of a given function f can be expressed as solving equation (2) with ν-smoothing kernel k(t) = t^(ν−1)/(ν−1)!. Determining the propagation rate for a simple population problem from measurements of total population can be viewed as a first-kind Volterra problem with one-smoothing kernel [44]. In [4], the author describes the problem of determining the density of a chain, given information about its motion as it slides down a known curved surface. This problem is modeled as a first-kind Volterra equation with kernel determined by the shape of the surface, and in the case where the surface is a cycloid, the kernel is two-smoothing.
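Before turning to the theory, the instability at issue can be made concrete in the simplest case ν = 1 with k ≡ 1, where A is the integration operator and inverting A amounts to differentiating the data. The sketch below (all function choices, grid sizes, and noise levels are illustrative assumptions, not taken from the examples in this thesis) shows that a perturbation of size δ in the data produces an error in the naively recovered solution that is orders of magnitude larger than δ:

```python
import numpy as np

# Ill-posedness of the one-smoothing problem (nu = 1, k = 1): A is the
# integration operator, so "inverting" A is differentiation.  A small data
# perturbation yields a large error in the recovered solution.
rng = np.random.default_rng(0)
N = 1000
h = 1.0 / N
t = np.linspace(h, 1.0, N)

u_true = np.sin(5 * t)                         # illustrative "true" solution
f = np.cumsum(u_true) * h                      # forward map f = Au (rectangle rule)

delta = 1e-3
f_noisy = f + delta * rng.standard_normal(N)   # measured data, error of size ~delta

u_naive = np.diff(f_noisy, prepend=0.0) / h    # naive inversion: differentiate the data

noise_amplification = np.abs(u_naive - u_true).max() / delta
print(noise_amplification)                     # far larger than 1
```

Refining the grid only worsens the amplification: the factor above scales like 1/h, which is exactly the lack of continuous dependence on data that regularization is designed to control.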
We will show that the ν-smoothing problem, a generalization of the derivative problem mentioned above, is ill-posed due to a lack of continuous dependence of solutions on data. It is worthwhile to note that the degree of ill-posedness of the problem depends on ν. Thus as ν increases, the ill-posedness of the problem becomes more like that of the very ill-posed inverse heat conduction problem, a first-kind Volterra problem with infinitely smoothing kernel.

In the 1960s, J. V. Beck developed a method for approximating the solution to the discretized inverse heat conduction problem that was used successfully in practice for years. In [23], Lamm established its convergence and stability, as well as generalizing the method to the continuous case. This work resulted in the development of a class of regularizations for solving (2) called sequential local regularization, or just local regularization (motivation for the name to follow), for which Beck's method was a special case. We refer the reader to [23], [25], [22], [27], [28], and [26] for developments of the theory to date. With local regularization of (2), stable solutions of a particular parameter-dependent second-kind Volterra equation are taken to be approximations to the problem's true solution.

Lamm extended the theory of local regularization further by providing sufficient conditions to prove convergence in C[0,1] for the linear finitely smoothing problem in the noise-free case, and provided an a priori rule for selecting the regularization parameter given noisy data for which convergence in C[0,1] was guaranteed. A rate of convergence was also provided.

The theory for the linear finitely smoothing problem in C[0,1] was, however, without an a posteriori parameter selection strategy. The drawback of a priori strategies is that they only provide an asymptotic result, specifying how to select the regularization parameter in the limiting case as the noise level goes to zero.
In practice, one typically has a single value of δ, relatively small, and would like a rule to select a single value of the regularization parameter, something a posteriori criteria would provide. The work we present here supplies this missing piece. We also extend the results in [22] to include a convergence theory for the case when the underlying space is L^p[0,1], 1 < p < ∞.

We begin Chapter 1 with a brief introduction to the theory of regularization and a short survey of a posteriori parameter selection criteria that lead to convergent methods for solving (2), both those used with the classical Tikhonov regularization and those used with the non-classical Lavrentiev regularization. In Chapter 2, we outline the existing theory of local regularization for the linear Volterra ν-smoothing problem when the underlying space is C[0,1]. We then define a discrepancy principle and prove uniform convergence in C[0,1] for the regularization method. In Chapter 3, we extend the theory of local regularization to the linear Volterra ν-smoothing problem when the underlying space is L^p[0,1], 1 < p < ∞. We prove L^p-convergence of approximations in the noise-free case, as well as redefine our discrepancy principle and prove L^p-convergence for the regularization method. In Chapter 4, numerical examples are presented to illustrate the success of the method in practice.

CHAPTER 1

BACKGROUND

1.1 The Linear ν-smoothing Volterra Problem

We now take A : X → X to be a linear Volterra convolution operator with ν-smoothing kernel k ∈ C^ν[0,1], where X is a Banach space. We refer to solving

    Au(t) := ∫_0^t k(t − s) u(s) ds = f(t),   a.e. t ∈ [0,1],   (1.1)

for u ∈ dom(A) given f ∈ X as the ν-smoothing problem or the finitely smoothing problem.

If f ∈ R(A), then a solution to our problem exists. If N(A) = {0}, then A^{-1} exists and the solution is unique. However, if dim R(A) = ∞, then R(A) is not closed, since A is a compact linear operator [15].
It follows from the closed graph theorem that A^{-1} fails to be continuous, and so the problem is ill-posed due to the lack of continuous dependence of solutions on data. We will examine the nullspace and range space for A in the two cases of interest here, namely X = C[0,1] (Chapter 2) and X = L^p[0,1], 1 < p < ∞ (Chapter 3).

Note that in the case when X is a Hilbert space (for example L^2[0,1]), if the data f ∉ R(A) but is instead contained in the dense subspace R(A) + R(A)^⊥ ⊂ X, we are content to solve the problem in the least squares sense. More precisely, we solve

    min_u ‖Au − f‖_X

and let u⁺ denote the (unique) solution of minimal norm (MNLS). We define A⁺ : R(A) + R(A)^⊥ → N(A)^⊥ as the mapping of f ∈ R(A) + R(A)^⊥ to u⁺ ∈ N(A)^⊥. Then u⁺ = A⁺f, where the unbounded linear operator A⁺ is called the generalized inverse of A.

In an effort to mimic the properties of Hilbert spaces, the concepts of orthogonality and orthogonal subspaces can be defined for the Banach spaces L^p[0,1], 1 < p < ∞, p ≠ 2 [11], or one may consider so-called semi-inner product spaces [12]. However, exploring the possibility of extending the least squares solution concept to finding a best approximation to u ∈ dom(A) for which u uniquely solves min_u ‖Au − f‖_{L^p[0,1]} for 1 < p < ∞, p ≠ 2, is beyond the scope of our current study. It is worth noting that the extension of the theory of regularization of linear ill-posed problems from Hilbert spaces to Banach spaces is of relatively new interest to the inverse problems community, and we refer the reader to [8], [35], and [41] for recent developments.

1.2 Regularization and A Priori Parameter Selection

In practice, one commonly does not have access to exact data, but rather a measurement of the exact data, which always has an element of error associated with it. The ill-posedness of the problem due to a lack of continuous dependence of solutions on data now becomes an issue.
As previously mentioned, we think of the given measurement data, denoted f^δ, as being a slight perturbation of the "true" data f, or a version of the true data corrupted by noise. And so for a given level of error δ, we assume that the given "noisy" data f^δ satisfies ‖f − f^δ‖_X ≤ δ. We are then faced with approximating the "true" solution u of Au = f given f^δ. Notice that even if we are given f^δ ∈ R(A), the ill-posedness of the problem (due to A^{-1} being unbounded) means that A^{-1}f^δ could be a very poor approximation of u even if the noise level is small. Moreover, there is no guarantee that solutions A^{-1}f^δ converge to A^{-1}f in X as the noise level δ shrinks to zero. A similar argument can be made regarding the unbounded generalized inverse. It follows that A⁺f^δ need not converge to u⁺ in X as the noise level shrinks to zero.

To handle this issue, we implement a regularization method. The idea is to construct parameter-dependent approximations that depend continuously on the noisy data f^δ in such a way that the approximations converge to the true solution as the noise level shrinks to zero, or, for a given level of noise, so that the error made in approximating the solution is small.

Definition 1.1 Let {T_α}_{α>0} be a family of continuous operators from X into itself such that for any u ∈ X,

    lim_{α→0} ‖T_α Au − u‖_X = 0.   (1.2)

We say that T_α, α > 0, is a regularization operator for A^{-1}. (We may define similarly a regularization operator for A⁺.)

A regularization method consists of a family of regularization operators {T_α}_{α>0} accompanied by criteria, which we denote as d_α, for choosing the regularization parameter α.

Definition 1.2 A regularization method ({T_α}_{α>0}, d_α) is said to be convergent if for any u ∈ X, and given data f^δ ∈ X such that ‖f − f^δ‖_X ≤ δ, with f := Au, the regularization parameter α = α(δ), selected via the criteria d_α, satisfies

    lim_{δ→0} α(δ) = 0  and  lim_{δ→0} ‖T_{α(δ)} f^δ − u‖_X = 0.
(1.3)

(We note that generalizations of the above are used in the case of least squares problems when X is Hilbert.)

For α > 0, let T_α be a regularization operator and consider the bound on the error made in approximating the solution,

    ‖T_α f^δ − u‖_X ≤ ‖T_α f^δ − T_α f‖_X + ‖T_α f − u‖_X
                    ≤ ‖T_α‖ ‖f^δ − f‖_X + ‖T_α f − u‖_X
                    ≤ ‖T_α‖ δ + ‖T_α f − u‖_X.   (1.4)

The second term on the right-hand side of (1.4) represents the error due to regularization and tends to zero as α → 0. The first term on the right-hand side of (1.4) represents the error due to regularization accompanied by noise in the data, and as α → 0, ‖T_α‖ tends to infinity in the case of A^{-1} unbounded. Thus regularization parameter selection strategies are devised in an effort to minimize this upper bound, or to guarantee that this upper bound shrinks to zero as the noise level goes to zero. Note that the value of α dictates the amount of regularization; in classical methods, a choice of α too small tends to lead to highly oscillatory approximations, whereas if α is chosen too large, approximations tend to be overly flat.

Tikhonov Regularization

Classical regularization methods, such as Tikhonov regularization, have been fully developed in the context of underlying Hilbert spaces. Since our purpose is only to briefly introduce regularization methods commonly used, we will assume for the moment that X is a Hilbert space. Then classical regularization methods involve regularization operators of the form

    T_α = g_α(A*A)A*,   α > 0,

where A* denotes the Hilbert adjoint of A and, for each α > 0, g_α : [0, ‖A‖²] → R is a continuous function defined on the spectrum of the self-adjoint compact operator A*A. If g_α has the properties

    i) g_α(t) → 1/t as α → 0 for each t > 0,
    ii) |t g_α(t)| is uniformly bounded for each α > 0 and t > 0,

then for each α > 0, the approximation

    T_α f = g_α(A*A)A* f   (1.5)

depends continuously on data and ‖T_α f − u‖_X → 0 as α → 0.
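The two competing terms in (1.4) can be observed numerically. The sketch below discretizes the one-smoothing Volterra operator and uses the particular filter g_α(t) = 1/(α + t) as an example of the construction (1.5); the grid size, test solution, and noise level are illustrative assumptions, not data from this thesis. A moderate α beats both a very small α (noise-dominated error) and a very large α (regularization-dominated error):

```python
import numpy as np

# Bias/noise trade-off in (1.4): total error versus alpha for the filter
# g_alpha(t) = 1/(alpha + t), i.e. u_alpha = (alpha*I + A^T A)^{-1} A^T f_delta.
rng = np.random.default_rng(5)
N = 200
h = 1.0 / N
A = h * np.tril(np.ones((N, N)))        # rectangle-rule one-smoothing operator
u_true = np.sin(4 * np.linspace(h, 1.0, N))
f_noisy = A @ u_true + 1e-3 * rng.standard_normal(N)

def total_error(alpha):
    u = np.linalg.solve(alpha * np.eye(N) + A.T @ A, A.T @ f_noisy)
    return np.linalg.norm(u - u_true)

errs = [total_error(a) for a in (1e-10, 1e-4, 1e1)]
print(errs)   # the middle alpha gives the smallest error
```

For the tiny α the propagated noise term ‖T_α‖δ dominates; for the huge α the approximation is overly flat and the regularization error ‖T_α f − u‖ dominates, exactly as described above.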
The function g_α(t) = 1/(α + t) corresponds to Tikhonov regularization. With this choice of g_α, for each α > 0, the approximation T_α f = g_α(A*A)A* f is the unique minimizer of

    J_α(u) := ‖Au − f‖² + α ‖u‖².

Equivalently, considering the Volterra operator defined in (1.1),

    T_α f = g_α(A*A)A* f = (αI + A*A)^{-1} A* f,

which solves the second-kind Fredholm integral equation

    α u(t) + ∫_t^1 k(τ − t) ∫_0^τ k(τ − s) u(s) ds dτ = ∫_t^1 k(s − t) f(s) ds,   a.e. t ∈ [0,1].   (1.6)

It can be shown that for the Tikhonov regularization operator, ‖T_α‖ ≤ 1/(2√α) [15]. In the case of noisy data f^δ, referring back to (1.4) leads to the error bound

    ‖T_α f^δ − u‖ ≤ δ/(2√α) + ‖T_α f − u‖.

If we use the a priori parameter selection criteria

    (P1) Choose α = α(δ) such that δ²/α → 0 as δ → 0,

then (T_α = g_α(A*A)A*, P1) is a convergent regularization method. Further, if the true solution satisfies an additional smoothness assumption, we can obtain a rate of convergence. For instance, a common source condition is to assume u ∈ R((A*A)^μ) for 0 < μ ≤ 1, and so u = (A*A)^μ v for some v ∈ X. Then choosing α = Kδ^{2/(2μ+1)} (for example, as in [15]), it follows that

    ‖T_α f^δ − u‖ = O(δ^{2μ/(2μ+1)}).

In fact, with Tikhonov regularization the fastest rate of convergence that can be obtained for non-degenerate kernels in (1.1) is O(δ^{2/3}). For detailed proofs, see [7], [15], or [21].

Lavrentiev Regularization

Lavrentiev (or simplified) regularization involves regularization operators of the form

    T_α = (αI + A)^{-1},   α > 0.

Then for the Volterra operator defined in (1.1), we see that T_α f = (αI + A)^{-1} f solves the second-kind Volterra integral equation

    α u(t) + ∫_0^t k(t − s) u(s) ds = f(t),   a.e. t ∈ [0,1].   (1.7)

When X is Hilbert and A is monotone, it was shown that for the Lavrentiev regularization operator, ‖T_α‖ ≤ 1/α [42]. With noisy data f^δ, referring back to (1.4) leads to the error bound

    ‖T_α f^δ − u‖ ≤ δ/α + ‖T_α f − u‖.
If we use the a priori parameter selection criteria

    (P2) Choose α = α(δ) such that δ/α → 0 as δ → 0,

then if ū(0) = 0 (and possibly higher-order derivatives of ū are zero at t = 0, depending on ν), it can be shown for the one-smoothing problem that (T_α = (αI + A)^{-1}, P2) is a convergent regularization method [24]. If the true solution satisfies the additional smoothness assumption u ∈ R(A^μ) for 0 < μ ≤ 1, and so u = A^μ v for some v ∈ X, we can obtain a rate of convergence. Choosing α = Kδ^{1/(μ+1)}, as shown in [43], it follows that

    ‖T_α f^δ − u‖ = O(δ^{μ/(μ+1)}).

1.3 A Posteriori Parameter Selection

Recall that a regularization method consists of a means of constructing parameter-dependent stable approximations of u accompanied by a rule for selecting the regularization parameter. And so, to say that the method is convergent, the rule for choosing α must satisfy

    i) α = α(δ) → 0 as δ → 0,
    ii) ‖T_{α(δ)} f^δ − u‖_X → 0 as δ → 0,

where T_α, α > 0, is any regularization operator.

There are many drawbacks of a priori strategies. First of all, to make the most proper choice of the regularization parameter requires knowledge of the smoothness parameter μ and of ‖v‖ appearing in each of the rate estimates in the previous section [43]. They also only provide an asymptotic result, specifying how to select the regularization parameter given a sequence of δ's approaching zero. However, one usually does not know the level of smoothness μ of the unknown true solution, or ‖v‖. Furthermore, one typically has a single value of δ and would like a rule to select a single value of the regularization parameter. As we will find, a posteriori parameter selection criteria are much more useful in practice since they give direction on how to select α for a given value of δ and do not rely heavily on knowledge of μ or ‖v‖.
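As a concrete instance of the a priori approach, the following sketch applies Lavrentiev regularization (1.7) with the one-smoothing kernel k ≡ 1, discretized by a rectangle rule so that (αI + A) becomes a lower-triangular system. The test solution (chosen with ū(0) = 0, as the convergence result above requires), the noise level, and the particular choice α = δ^{1/2} (one admissible instance of (P2)) are all illustrative assumptions:

```python
import numpy as np

# Lavrentiev regularization (1.7) for the one-smoothing kernel k = 1:
# solve alpha*u(t) + int_0^t u(s) ds = f_delta(t) on a uniform grid.
rng = np.random.default_rng(1)
N = 400
h = 1.0 / N
t = np.linspace(h, 1.0, N)

u_true = t + t**2                     # satisfies u(0) = 0
f = np.cumsum(u_true) * h             # exact data f = Au (rectangle rule)
delta = 1e-3
f_noisy = f + delta * rng.standard_normal(N)

alpha = delta**0.5                    # one admissible choice under (P2)
M = alpha * np.eye(N) + h * np.tril(np.ones((N, N)))   # lower-triangular system
u_reg = np.linalg.solve(M, f_noisy)

err_reg = np.abs(u_reg - u_true).max()
err_naive = np.abs(np.diff(f_noisy, prepend=0.0) / h - u_true).max()
print(err_reg, err_naive)             # regularized error is far smaller
```

Repeating the experiment with δ halved (and α = δ^{1/2} updated accordingly) shrinks the regularized error, consistent with the convergence statement for (P2), while the unregularized error does not improve.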
It must be noted first that convergence cannot be guaranteed for any strategy with which the regularization parameter is chosen independent of the noise level (for a proof, see [1]). Therefore we will focus on theoretically sound parameter selection strategies that depend on the noise level δ and, in some cases, the noisy data f^δ. Heuristic parameter choice rules, those not relying on the noise level, are still used in practice with much success. The L-curve is an example of one such rule, and a counterexample to its convergence can be found in [19]. More examples of heuristic parameter selection strategies may be found in [7] and [20].

A number of a posteriori parameter selection strategies, a few of which we list below, were developed originally for use with classical regularizations of linear problems. Paired with a suitable regularization, each leads to a convergent method when A is linear; see [7], [15], [16], and [34]. When A is non-linear, slight modifications of these strategies were made in [9] and [43] to obtain convergent methods.

Most a posteriori parameter selection strategies require the selection of the regularization parameter "online," i.e. the user computes an approximate solution T_α f^δ at various values of α and selects the value of α for which T_α f^δ satisfies a set of criteria. Morozov's Discrepancy Principle is the most popular, with which one chooses the regularization parameter so that the norm of the discrepancy, ‖A(T_α f^δ) − f^δ‖, is of the order of the noise in the data. The heuristic motivation of the principle is that the method should not produce results more accurate than the level of error in the given data [7].

With this idea in mind, we present the following strategies, which are based upon defining a functional d to represent
the "discrepancy" resulting from the approximation, and then choosing the regularization parameter α > 0 so that d is of the same order as the noise in the data on which the approximation was based. Assume that for α > 0, T_α is a regularization operator, and let u_α^δ := T_α f^δ. Let τ ∈ (1,2) be fixed and let d : (0,∞) → (0,∞) be defined according to one of:

    (d1) Morozov's Discrepancy Principle: d(α) = ‖Au_α^δ − f^δ‖
    (d2) Arcangeli's Method: d(α) = √α ‖Au_α^δ − f^δ‖
    (d3) Rule of Raus: d(α) = ‖(I − AT_α)^{1/(2p_0)} (Au_α^δ − f^δ)‖,  p_0 > 0
    (d4) Modified discrepancy principle: d(α) = α^q ‖Au_α^δ − f^δ‖,  q > 0.

The rule for each is to choose the regularization parameter α > 0 so that d(α) = τδ^s, where s = 1 in (d1), (d2), (d3) and s > 0 in (d4). For a detailed development of each rule, see [9], [16], [30], [31], and [34].

Although Morozov's principle is by far the most well-known, it is not best suited for use with all regularization operators. For instance, with Tikhonov regularization, the rate of convergence obtained with this principle under standard source conditions is at best O(δ^{1/2}) [7]. Bakushinski also proved that Lavrentiev regularization with Morozov's discrepancy principle is not a convergent method [1]. An example of this failure of approximations to converge can be found in [16]. Convergent methods can be obtained, however, using the other principles listed. Tautenhahn proved convergence of Lavrentiev regularization using the Rule of Raus
Convergence of a modified discrepancy principle with Lavrentiev regularization was shown in I16] and I31], as well as values of s and q determined that lead to optimal rates under various source conditions in I9] and [32]. Regularization Methods for Volterra Problems In general, classical regularization methods are deemed unsuitable for Volterra prob- lems I22], I23], I29] since as was previously discussed in the Hilbert. space setting, they require use of the adjoint A* of the operator A in (1.1). In making rise of the adjoint, the regularized equation one solves is no longer Volterra I29] thus one loses the causal nature of the original Volterra problem. This was seen when Tikhonov regularization was applied to the V—smoothing problem leading to the Fredholm equation in (1.6). This loss is also evident when the regularized equation is discretized, for instance by collocation with piecewise constants. The numerical discretization leads to solving a system D1: = b where D is a full matrix which leads to numerical methods requiring 0 (N3) floating point. orwrations where D is N x N. Whereas with non-classical methods, such as Lavrentiev regularization and local regularization (to be defined), 16 the regularized equation is Volterra, (second-kind in the examples mentioned), and thus the causal nature of the problem remains intact. The same discretization leads again to solving a system D2: = b instead with a lower triangular matrix D (in the convolution problem, D is Toeplitz). These methods are faster and more efficient, of order 0 (N2) floating point operations, since the resulting system can be solved sequentially. 17 CHAPTER 2 THE THEORY OF LOCAL REGULARIZATION IN CIO,1] In this Chapter, we begin our study focusing on the case when A : CID, 1] —> CID, 1] is a linear Volterra convolution operator of the form t Au(t) 2/ Ar(t — s)u(s)ds, for all t E [0,1], (2.1) 0 and the kernel k'E CVID, 1] is u-smoothing, z/ 2 1. 
Recall that k^{(i)}(0) = 0 for i = 0, 1, …, ν − 2 and k^{(ν−1)}(0) ≠ 0, and without loss of generality, assume that k^{(ν−1)}(0) = 1. Our goal is to solve the ν-smoothing problem

Au(t) = ∫_0^t k(t − s) u(s) ds = f(t),  for all t ∈ [0,1],   (2.2)

for u ∈ C[0,1] given f ∈ C[0,1]. Note that the range of A is given by

R(A) = C_{ν,0} := {g ∈ C^ν[0,1] | g(0) = g′(0) = ⋯ = g^{(ν−1)}(0) = 0},

which we verify as follows. For any u ∈ C[0,1], we may write Au(t) = k ∗ u(t) for all t ∈ [0,1]. Since k ∈ C[0,1], then k ∗ u ∈ C^1[0,1] and k ∗ u(0) = 0 (see Theorem 3.5 of [13]). We may repeatedly differentiate with respect to t to obtain, for all t ∈ [0,1],

(d^ℓ/dt^ℓ)[Au(t)] = k^{(ℓ)} ∗ u(t) ∈ C^1[0,1]  with  k^{(ℓ)} ∗ u(0) = 0,

using that k^{(ℓ)} ∈ C[0,1] and k^{(ℓ)}(0) = 0 for ℓ = 1, …, ν − 1. Differentiating the ν-th time, we obtain

(d^ν/dt^ν)[Au(t)] = u(t) + k^{(ν)} ∗ u(t) ∈ C[0,1],  for all t ∈ [0,1].

Thus Au ∈ C^ν[0,1] and

Au(0) = (d/dt)[Au(t)] |_{t=0} = ⋯ = (d^{ν−1}/dt^{ν−1})[Au(t)] |_{t=0} = 0.

This proves that R(A) ⊆ C_{ν,0}.

On the other hand, if f ∈ C_{ν,0}, then consider the second kind equation

u(t) + k^{(ν)} ∗ u(t) = f^{(ν)}(t),  for all t ∈ [0,1].

There exists a solution u ∈ C[0,1] to this equation since f^{(ν)} ∈ C[0,1] and k ∈ C^ν[0,1] (see [3] or [13]). Now consider the initial value problem

y^{(ν)}(t) = f^{(ν)}(t),  y(0) = y′(0) = ⋯ = y^{(ν−1)}(0) = 0,   (2.3)

where equality is understood to be pointwise. If y(t) = ∫_0^t k(t − s) u(s) ds for all t ∈ [0,1], then it is easy to see that y satisfies the initial value problem (2.3). However, y = f also solves (2.3), so by uniqueness of solutions, it follows that

∫_0^t k(t − s) u(s) ds = f(t),  for all t ∈ [0,1].

Therefore given f ∈ C_{ν,0}, we have found u ∈ C[0,1] for which Au = f. This proves C_{ν,0} ⊆ R(A). Therefore R(A) = C_{ν,0}.

One can also verify that the solution to the ν-smoothing problem defined in (2.2) is unique. Suppose that

∫_0^t k(t − s) u(s) ds = 0,  for all t ∈ [0,1].

As before, we may differentiate ν times.
Using that k is ν-smoothing, we obtain the second-kind Volterra equation

u(t) + k^{(ν)} ∗ u(t) = 0,  for all t ∈ [0,1],

which has a unique solution that may be expressed using the variation of constants formula, [3] or [13] (see also equation (2.20) below). With this, we conclude that u(t) = 0 for all t ∈ [0,1] and N(A) = {0}.

Since k ∈ C^ν[0,1], it follows that A is compact (Theorem 2.5 in [13]). We showed that A is injective with dim R(A) = ∞; however, R(A) is not closed in C[0,1] and so A^{−1} fails to be continuous. Our problem is ill-posed due to a lack of continuous dependence of solutions on data.

In this chapter, we add to the theory of local regularization for solving the linear ν-smoothing problem by introducing an a posteriori parameter selection strategy. We begin by deriving the second kind Volterra equation associated with local regularization for the underlying data space C[0,1]. We outline the sufficient conditions developed in [22] under which uniform convergence of regularized approximations to the true solution was achieved for exact data f ∈ C[0,1] and for perturbed data f^δ ∈ C[0,1] for appropriate a priori parameter choices. While doing so, we normalize the approximating equation and alter one of the conditions therein slightly, defining the conditions here to be [A0] and [A1]. We make explicit a sufficient condition on the relationship between the kernel k and the length of the interval (0, R] for which the approximating equation is well-posed and the resolvent of the approximating equation is uniformly bounded for all r ∈ (0, R]. We restrict the choice of regularization parameter to an interval (0, R] over which this condition is satisfied so that regularized approximations are uniformly bounded on (0, R]. This result sheds light on the condition in [22] and [36] that so long as k and R are sufficiently small, approximations are bounded uniformly.
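The lack of continuous dependence noted above is easy to observe numerically. The following illustrative computation (grid, frequencies, and noise size are all hypothetical choices) discretizes the model 1-smoothing problem with k ≡ 1 and inverts the collocation system naively: a data perturbation of size δ is amplified roughly in proportion to its frequency, since inversion here amounts to numerical differentiation.

```python
import numpy as np

n = 400
h = 1.0 / n
t = h * (np.arange(n) + 1)
# Left-rectangle collocation of the model problem Au(t) = \int_0^t u(s) ds:
# D is lower triangular with every nonzero entry equal to h.
D = h * np.tril(np.ones((n, n)))

u_true = np.sin(2 * t)
f = D @ u_true                         # exact discrete data

delta = 1e-3
f_delta = f + delta * np.cos(150 * t)  # high-frequency perturbation, size <= delta

u_clean = np.linalg.solve(D, f)        # inversion of clean data: recovers u_true
u_noisy = np.linalg.solve(D, f_delta)  # naive inversion of noisy data

data_err = np.max(np.abs(f_delta - f))      # about delta
sol_err = np.max(np.abs(u_noisy - u_true))  # amplified by divided differencing
```

Refining the grid only worsens the amplification, mirroring the unboundedness of A^{−1}; this is precisely the behavior that regularization, and the local regularization developed below, is designed to control.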
Once applying the normalized local regularization scheme satisfying [A0] and [A1] to the ν-smoothing problem in equation (2.2), we define the additional assumption [A2] in preparation for introducing our a posteriori parameter selection strategy. Our strategy consists of a newly defined discrepancy principle to assist a user in selecting a constant value of the regularization parameter r. We establish that conditions [A0]–[A2] are sufficient to conclude that local regularization paired with our discrepancy principle is a convergent method. We do so by showing that approximations, constructed from noisy data f^δ ∈ C[0,1] in a local regularization scheme satisfying [A0]–[A2] and using the new discrepancy principle to select the regularization parameter, converge uniformly to the true solution as the noise level shrinks to zero. We also provide a rate of convergence under a suitable source condition. One will find that the discrepancy principle we introduce is a natural complement to the theory established in [22].

2.1 Local Regularization in C[0,1]

Let ū denote the “true” solution given “exact” data f, and f^δ the “noisy” version of f. Our first assumption concerns the availability of the data.

[A0] Let 0 < R ≪ 1 be such that ū ∈ C[0,1+R] and f ∈ R(A) ⊆ C[0,1+R], so that Aū(t) = f(t) for all t in the interval [0,1+R]. Given δ > 0, the data f^δ(t) is available for all t ∈ [0,1+R] and f^δ ∈ C[0,1+R] satisfies

‖f − f^δ‖_{C[0,1+R]} ≤ δ.

If additional data is unavailable, we make do with approximating ū on the slightly smaller interval [0, 1 − R].

2.1.1 The Approximating Equation

Proceeding as in [22], it follows from [A0] that ū satisfies

∫_0^{t+ρ} k(t + ρ − s) ū(s) ds = f(t + ρ),

for all t ∈ [0,1] and ρ ∈ [0,r], for any r ∈ (0,R]. Splitting the integral and using a change of variables, we obtain

∫_0^ρ k(ρ − s) ū(t + s) ds + ∫_0^t k(t + ρ − s) ū(s) ds = f(t + ρ),   (2.4)

for all t ∈ [0,1], ρ ∈ [0,r], and for any r ∈ (0,R].
For each r ∈ (0,R], consider the space of all bounded linear functionals on C[0,r]. Recall that the continuous dual space of C[0,r] can be identified with the space of all regular Borel measures on ℝ. Then for any Ω_r ∈ [C[0,r]]*, there exists a signed measure η_r defined on the σ-algebra of Borel sets of ℝ [37] such that

Ω_r(g) = ∫_0^r g(ρ) dη_r(ρ),

for any g ∈ C[0,r]. Applying a functional Ω_r, we integrate both sides of equation (2.4) with respect to η_r and obtain

∫_0^r ∫_0^ρ k(ρ − s) ū(t + s) ds dη_r(ρ) + ∫_0^r ∫_0^t k(t + ρ − s) ū(s) ds dη_r(ρ) = ∫_0^r f(t + ρ) dη_r(ρ),   (2.5)

an equation that ū still satisfies for each 0 < r ≤ R and all t ∈ [0,1].

Beck's original idea in [2] for solving the discretized inverse heat conduction problem involved stabilizing the problem by assuming the solution to be constant locally. Extending this idea to the abstract setting formulated above, i.e. taking u constant on the small local interval [t, t + ρ], ρ ∈ [0,r], leads to consideration of the second-kind Volterra equation

u(t) ∫_0^r ∫_0^ρ k(ρ − s) ds dη_r(ρ) + ∫_0^t ( ∫_0^r k(t + ρ − s) dη_r(ρ) ) u(s) ds = ∫_0^r f(t + ρ) dη_r(ρ),   (2.6)

for t ∈ [0,1].

2.1.2 Properties of η_r

In [22], hypotheses (H1)–(H3) were given indicating how to select a family of measures η_r, r ∈ (0,R], in the approximating equation (2.6). We state them here as assumption [A1], with any changes made to (H1)–(H3) mentioned in the remark that follows.

[A1] The measure η_r is chosen to satisfy the following hypotheses.

(H1) For i = 0, 1, …, ν, there is some σ ∈ ℝ and c_i = c_i(ν) ∈ ℝ independent of r such that

∫_0^r ρ^i dη_r(ρ) = c_i r^{i+σ},

with c_ν ≠ 0. Without loss of generality, we may assume that η_r is scaled so that c_ν = ν!.

(H2) The parameters c_i, i = 0, 1, …, ν, satisfy the condition that the roots of the polynomial p_ν(λ), defined by

p_ν(λ) = (c_ν / ν!) λ^ν + (c_{ν−1} / (ν−1)!) λ^{ν−1} + ⋯ + (c_1 / 1!) λ + c_0,

have negative real part.

(H3)
There exists C > 0 independent of r such that I f ’4de (p) 0 _<_ “hum, r} Crag for each h E CID, r] and any 0 < r g R. Remark 2.1 In [22], a generalization of (H1) was given which allowed for a larger class of gr ’3. It allowed for Ci to be replaced by ci+O(r) as r -> D in (H1). In fact, the theory we develop in Chapters 2 and 3 carries over to this more general case without much conceptual difiiculty; however, the increase in some technical detail needed to consider this more general case did not seem to be worth the slight additional benefit, especially since the measures 727‘ used in practice do not require this ()(r) term. We recall two classes of measures (constructed in Lemma 2.2 and 2.3 of [22]) that satisfy assumption [Al]. Lemma 2.1 [22/ 1. Let 1/ = 1,2,---, be arbitrary and let w E L1I0,1] be such that -1 / pI/w(p)dp = M. Then for r E (0, R], the measure 777‘ defined by 0 / gtpldnr(p)= / 9(p)i:rr(p)dp. QECIOWI. 0 0 where wr E L1[0,r] is given by il’ripl = LI (é) -. (“3' p E [0’ TI’ satisfies condition (HI) (with (51/ = fol put/)(phlp and o = I) and condition (H3) {with C = ”an Further. for all I/ = 1,2, ' - ~ , and given arbitrary L1I0,1] ‘) positive m1. m2. - ~ , and 7711/, there is a unique monic polynomial u of degree l/ so that the resulting family {m} satisfies ([1]) with cu = V! and o = 1, {H2} with the roots of the polynomial pp in (H2) given by (—mz-),i = 1, . -- ,z/ and (H3). . Let 1/ =1,2,--- , be arbitrary and let 55,71 E Rt: 0,1,---L, be fixed so that 0ST0 0, then pl/(A) = H (A + mi) taking into account the i = 1 scaling of 77¢ by assigning cu 2 V1. Further, with Ci E Ki = 0, - - - ,V, then CO: H m >0 (2.10) i=1 It follows that 7/,” = r000 > 0, (2.11) for all r > 0. Then define for any u E C[O, 1] and v E C[0, 1 + R], (tr := ID [pk — 8) )ds (1,770) )0)? (2.12) "tr _fci‘l t+p )d77r(Pl "tr t .’1'r'tl.(/.) :=/ krrr(t — s) 'u,(s)(ls, (2.14) 0 . 
(t-l—p) (tr) /) fr“) 3: [of A, 1( ), (2.15) [T v(t+ s (lsdi ) Drv(t):= [0 [OH 7 l I' (I), (2.16) ’r 28 for all t E [0, 1] and each r E (0, ft]. With this notation and recalling that it, satisfies equation (2.5), then equations (2.5) and (2.6) may be written equivalently as D7~u(t) + A7~u(t) = f7~(t), for all t E [0,1], (2.17) and aru,(t) + Ara—(t) = f7~(t), for all t E [0,1], (2.18) respectively. Remark 2.2 Equation (2.18) is a normalized version of the regularized equation con- sidered in (22/. However, this is an important difference and will later allow for our discrepancy functional to be appropriately normalized. Provided or 75 0, equation (2.18) is well-posed and there exists a unique solution u7~ E C[0, 1] that. depends continuously on f E C[0, 1] [13]. For each r E (0, R] for which or # 0, (arI + ATV—1 is a bounded linear operator on C[0, 1] ['13], so that we may represent the solution as W = (art + A7~)_1,/1,~. (2.19) We may also express ur using the variation of constants formula in [3], f7 (5') (tr -t 71.7““) = ila—E‘Q — /0 XT‘U — 8) ds, (2.20) 29 for all t E [0, 1] and r E (0, H] for which or 7f 0. For each such r > 0, there exists a unique function X)» E C[O, 1] [13], called the resolvent kernel, that satisfies the integral equation of the resolvent kernel (corresponding to equation (2.18)) [4] given by t Xr(t) +/[‘) er(S)dS 2 ha), (2.21) ar Gr for all t E [0,1]. Therefore to guarantee well-posedness of equation (2.18) for all r E (0, R], for some 0 < R g R, we must ensure that a7~ does not vanish on the interval (0, R]. The following lemma gives a condition under which this is true. Based on assumptions [A0] and [A1], we obtain the following results that will be used frequently later sections. To simplify notation, define Lemma'2.2 Assume [AU] and (A 1/ are satisfied. I. Let a7» be as defined in (2.12). IfR and k satisfy 1/ + 1)! R. HM”) < (——.—_ 2.22 C10, R] _ CK- ( ) for some K, > 1 and 0 < R g R. then — 1 f. 
1 K «rl/ S ar ‘_<_ I + ~rV, (2.23) m0 MD for all r E (0, R]. 30 2. Let h E (.7[0, 1 + H] and define 7. ht d ~ hT‘U) :2 f0 ( +p) Mp), 7r for all t E [0,1]. Then rh—TO [[hr -— hllC[0,1] = 0. (2.24) 3. Let Ar be as defined in (2.14) for r E (0, R]. Then lim [[Ar — 4]] = 0, (2.25) where H“ is the operator norm on B(C[O,1]). Proof: 1. Consider the general u-smoothing kernel which may be expressed in terms of its Taylor expansion about 0. Then I: E CV [0, 1 + ii] is of the form tl/_1 W) = (I, _ 1), + 142mg. tl/ 31 for some Ct E [0, t] and t E [0,1 + R]. By [Al], for any 7‘ E (0, R] f0 f6) k( p—Ts)ds)dnr(p) (17‘: ( f0 (féo k(s)ds)dn7(p) = 7i [0(1) p VV_—11)d)d77r(P)+ f0 ( (”runs-85d.) am] 9] u+o) )2 V:/()(.Apkal(Cs)::-;ds)dnr(fll]- If R and k satisfy R ”All V < M C]0. B] On ‘ for some K: > 1 and 0 < R _<_ R, then since r llk(V)]]C[0,r] is a continuous, monotonically non—decreasing function of r that has value zero at r = 0, it follows that for all r E (0, R], 1 Tlll‘lu C[0 7-] < Rlll‘l) V C]O,R] ‘ (143,211 32 and so TV 1 (V) rV+1~ a > ______. ~ _— . Ctr - CO T000 lk ||C[O,T](U+1)l0 T " (V) > _7: _Crllk C[0,r] — CO (I/+1)l z/ a 7+1) CO H. 5—1 1/ = .7" K60 Similarly, a7. < i __1__lk(V) fléwa _ c0 T080 C[0,7‘](1/+1)l ” «M. < i a] W Co + (u+ 1)! I/\ as OI: A p-a + xiv—- \_/ 2. For all t E [0,1], (H1) and (H3) imply |h,.,~(t)—h.(t)| = f0 hltl‘vfldmp) —h(t) Z 1'5" (W + p) — ha» dmp) raco 3 0|th + .) — h(t)|lC]0,r] 2' C sup |h(t + p) —— h(t)|. p E [0.7“] 33 Since h is uniformly continuous on [0, 1 + R] and by properties of suprema, lim ||h7~(t) — h(t)||C 0 1 S 6‘ lim sup sup |h(t + p) — h(t)| = 0. 7—”) [’1 "‘—’0te]0,1]pe]0,r] 3. Let a: E C]O, 1]. Recalling the notation in (2.13) and (2.14), for any 7‘ E (0, R], um~ — All = sup “(Ar — Ammo, 1] ”an“, 1] = 1 t = sup sup / (kr(t — s) — k(t — 3)) :z:(s)ds ”1]qu 1] =1 t E l0,1l 0 S llk'r - kllC]0$1] - From the previous result, it follows that l' A‘—-A <1' k" —k =0. 
T1310” 7 |l_T1_H)10 r HC]0,1] 2.1.3 Properties of 2% We now turn to the establishment of an estimate on the size of ] (arI + Ar)_1]]. This information can be obtained by examining the size of H267” . Ring and Prix established in [36] that for any V—smoothing k and posit-zine mea— sures 777‘ satisfying versions of (H1) and (H2) as given in [25], there exists a con- 34 stant f) = D(u,c0,c1,--- ,cy) independent of r, such that if [lk(u)[[L1[O 1] g A then the norm of the resolvent was bounded uniformly, i.e. there exists 0 < M 1W(k,z/, c0,c1,--- ,CV) for which “X7” 3 1W for all r sufficiently small. A L1]0, 1] detailed proof of this result can be found in Lemma 1 of [36]. By introducing the use of signed measures 777" satisfying the versions of (H1) and (H2) in addition to the condition (H3) as given in [22], Lamm later proved the existence of a constant C = C(U, c0,c1, - -- ,Cy) independent of r, such that if ”’6‘” ll L°°]0, 1 + R] is still bounded uniformly for all r E (0, R]. g (j for R > 0 sufficiently small, then the resolvent “(15'7” I 1[0 1] In Lemma 2.2, we proved that a sufficient condition for ar # 0 for all 'r E (0. R] is that (z/ +1)! R “(“01)an R] 3 Ch; , (2.27) for some K. > 1 and O < R S R. Therefore we have existence of a unique solution to equation (2.18) for all r E (0, R]. In the two lemmas that follow, we take the analysis of ”A?” in [36] and [22] further by clearly illustrating the dependence of the Mo, 1] length of the interval (0, R] on which HXTI] is bounded for all r E (O, R] on Up, 1] the size of R [[k(V)[[C[O R] . In the first of two lemmas below, we establish a condition on the size of If that determines the length of the interval (0, R] to which we restrict our choice of r in relation to the selection of the roots of the polynomial pp in (H2) of [A1]. In the second lemma, we conclude that the resolvent [[A3]] is uniformly bounded for L1]0. 1] all r e (0.1%]. provided ”M satisfies a particular bound. 
The proofs of C]0.1+ R] 35 these two lemmas are quite technical and serve as an update of the proof found in Lemma 1 of [36] with modifications found in [22]. Therefore many of the arguments follow closely or are identical to the proofs in the references mentioned. Lemma 2.3 Assume [A1] holds and that for some 0 < R _<_ R, R and k satisfy (11+ 1)! )!!0]0,R]S (in ’ (228) Rule for some K. 2 R where R > 1 is sufficiently large. Then the eigenvalues of Ar := A + MT have negative real part for all r E (0, R]. where 0 0 1 0 A := , (_(70/0! —(-1/1! . —(¢V_1/(1/—1)!) ] 0 0 0 0 l 0 0 0 0 M7. = , K _m0,r ‘mlm —mz/ — 1,r / and 36 forj=0,1,---,V—1. Further, for the matrix 00 T Br 5/0 (exp(A7~t)) exp(A7‘)dt. there exist positive constants L, K, and 8' so that [:r] > 2L(:1: TBrx)% 1 T .. '2' (2.29) [8710] 1 sufficiently large, if n 2 R1, then the eigenvalues of A + Mr have negative real part. It follows that if R and k satisfy (2.28) with r: 2 R1, then for Ar = A + Mr, the matrix 00 T Br 5/0 (exp(Art)) exp(A7~)dt is well-defined, symmetric, positive definite and such that AIBT +137~A7~ = —1 for all r E [0, H] [3]. Further, since Br ——> BO (as r ”130/ 0) as r —+ 0, there ”C [0. r] —) exists a R > R11 sufficiently large so that if R and k satisfy (2.28) with K. 2 Ft, then there exist positive constants L, K, and S so that (2.29) holds for all x 6 RV and any 7- 6 (0.1.21 [36]. :1 Lemma 2.4 Assume the hypotheses of Lemma 2.3 hold. Then there exist constants C > O and M > 0, independent of r (but dependent on k,1/,c0,c1, - ~ ,cy), such that 38 llk(y)llC[O,1+ R] S C’ then we have lerll 1, < A L1[0,1] - for all r E (O, R], where Xr is the resolvent defined in equation (2.21). Proof: Let Xr denote the resolvent defined in equation (2.21) . Then for "r E (O, R] 1 and t E [0, —J, Xr satisfies 7‘ rt k 't— k ‘t Mun/0 vawsws: "“ h (LT 07' and so making a change of variables, we obtain t ,. . _ ,. Xr(rt) +/ TMX7‘(TS)(IS = “(71), 0 (LT (L 1‘ for all t 6 [0,1] . Define 7' AW) ;= MAN). 
Then t , , . . AA’T(t)+/O rw£~(s)ds=rkr(7t), (2.30) 39 we see that 7 for all t E [0,1] . In order to bound lerHL1[O,1]’ -l 7. d /0 IX (t)! t X 1 1 . t = / — Xr (—) dt .0 r r UT. 0 £‘ . I 7 L1[O,l/r]' and so it suffices to show that '21} 1 S M for all. 7‘ E (O,R] under the L [0,1/7‘] conditions of the lemma. Proceeding as in [22] and [36], and since k E CWO, 1], we may differentiate equation (2.30) j = 1,--- ,1/ times to obtain . j — 1 5 +1 . . 3 7' (T A —1-—[ Wm = — Z kl howl] la) (Ir 5 = 0 .t j +1 . , .j +1 . — / T kfflhrn — s))2€r(s)ds +7 kffllm, (2.31) . 0 ar Gr 1 for all t E [0, —] . 'We focus on the V-th differentiation r U V-1.,.1<'+1 AU_ _; X5 )(t) = — Z a kl‘homl 1 ”(a z: 0 T 't T” +1 (V) A TV +1 (V) -/0 a k'r (r(t—s))X7v(s)ds+ (1 hr (rt), (2.32) for all t E [0,1] . T 40 Recalling the definition of kr in (2.13) and or in (2.12) , using the Taylor expansion of k at 0 and [A1], we have thm ='i- kwhmmto> 7r_0 _ 1 rpV—1—€ 7",.” (UV—g, —-5;:A~ijjjmmHWhnA(J)Ku—1—Da:jmwn@fl P r u—E _ i CV—l—KTV—l-K—l-U V) fl.__7. foreach€=0,l,-~,z/-1and Then dividing we obtain 7.6 + 1,140 (0) : CV—1—€ (L7. (1/ — l — C)! —€ 1 8+1/ p” (C(V ———d ‘ CIT/YT )() 0 independent of r E (O, R] and define the Lyapunov functional 1 _ t 1/7‘ Vr(t,17(')) = ($T(t)37‘113(t))2 + K/() ft HDHC — 8)” dCCtSWSa (2-34) h 1 I I C for suitable :1: : [0, —] ~—+ RV , where ”H 15 the matrix norm on R” X V induced from 7‘ the Euclidean norm H on IR”. Let Z () denote the 1/ x u matrix whose columns zj(-), j = 13 - - ~ V, are solutions of the homogeneous part. of (2.33) (taking 97 :— 0) such that Z (0) = I. Following the arguments in [30'], we differentiate VT along zJ-(-) to obtain goat. zJ-(m) _ 1/r S —(L-K/t HElC-UHCK) As in [36], our goal is choose If > 0 independent of r E (07 R] such that (13. t zj(t)l — (I? — Ix’)/O ||Dr(t - s)“ 23(8) 1 _ for t E [0. —] and some L > 0 independent of 7‘ E (O. R]. r If this can be done, then exactly as in [36] we may integrate to obtain ds. 
3.)(8) 43 Then from (2.29), we have % zJ-(t)l < (zf(t)Brz]-(t))% S Vrftazjltl) t g V7~(O,zj(0)) — L/O 253(5) d3 = (zf(O)Brzj(0))% — L/Ot zj(s)| d3 g 21—L zj(0)|—L/Ot(zj(s)lds 1 for all t E [0, ,7] . Exactly as in [36], I 1 z- < —=, H JllL1[0,1/’r] — 2LL and Z(t) satisfies 1/+1 — . < ' — t . (12(0))- 2L Sam SL ), 1 for t E [0, —] and so 7" r u+1 lllll < ——=—. Ll(0,1/7-] — 2LL 44 (2.35) (2.36) Bounding g7~(l,) we have 7.1/ +1 V (gran ——- kl )(rt) (17' -;_ 1 7' S 7‘_M0 -— / kMW + many-(Ml N *1’71‘ 0 _ Eco -— (1,)” < ———C L. _ — Tit—1 C[0.1+R] = ~ (30/) _ 2.37 m" H C(0,1+ R] ( ) for some constant 7h > 0 independent of r E (0.12]. Further, from (2.31) 71+ 1140(0) r2.) (0) (W0) = It —— 0 = , (LT (17 and for€=0,~o .1/—— 1, . f a (u—i—o! ’V—l-flc where _ (n+1)! 1 Cll-l—fl ~ _ < < A] 1 . my_1_€17~ _ H—l (1/—€)!+(V—1—€)l(u+l)! _ (“<00 for all r E (0, R] using the bound in the proof of Lemma 2.3. Therefore |.a(0)| s don), for some 0 S (10(R) < 00. Evaluating (2.31) at I, = 0, we have . j-1 g 1 , . , ' 1 . 565%)? Z T + ki‘)(0)2259‘1‘f)<0>+”+ 4%), (2.38) (1.7" (17' I? = 0 and we may argue by induction that with 0 g dJ-(R) < oo, forj = 0, - -- ,1/ — 1, and all 1‘ E (O,R]. Therefore VENOM S (1(3) < 00: (2-39) for some (1(R) Z 0, independent of r E (0, R]. Using the variation of constants formula in [3] for the non—homogeneous equation, we have t 727-(t) = Z(t)R7-(0) +/0 Zl,'(t —— s)gv,‘(s)(ls, 'I l for t E [0, f] . From (2.35), (2.36), (2.37), and (2.39), we have 1/ |R7~(t)l < J211Sexp(—.S'I_Jt)d(Fc)+ 52%,,” Haul) C[0,1+R]’ 46 7 for all t E [0,1] and r E (0, R]. Therefore, l3" L1[0,1/r] ‘ ”RTHLWOJ/Tl =“zonam+()%t-”W“”5nmani S 2:15)...) /,1 (>-—llk‘")||cm,1+m s ”2:18dg%+2257fi0 = M 0 independent of r E (0, R] such that 1(waamns-Zeml dt 1 ' _ - for t E [0, —] and some L > 0. Returning to (2.35), if we take K > K > O, we have 1“ that (WM-(1.») -1/r g — (L — R/t IIDMC - t)“ CK) Szlgk ZjU)‘ 1 _ _ for t E [0, :l . And so we need K > 0 and L > 0 so that I _ Ur _ L-Kfl' mac—unrat. 
can 47 for all t E [0,1] . Consider 7‘ l/T 1/1‘7,1/+1 I/ / IIDMC — t)” dc / k5 >0.“ — t))d< f, f. ds S Ctr .1/ +1 1— rt = 7 1/ 1279/)(5) d3 GT 7' 0 RC 1 1— 7/ T - 0 — / We + P)d77r(/0) K *177‘ 0 0 :ij0 V < _ - 1011’” 11C[O,1+ R] ,7; .. . _ R—ICC’ 1 for all t E [0, —] , then returning to (2.40), we have that r .17“ —_, ~A L—R’/ ”Dag—mm; > L—R_"‘loo .t n— : L—I—LK'IC'C‘, which means we need R > 0 and L > 0 so that or so that H. _- Ix’<1. L ‘ 1 ' or equivalently, for any C satisfying ~ L — L ‘ — 1 C < - - K CF: This proves existence of a R e (K, L _. L .— 1) , C ‘R for which (2.40) is satisfied for all r E (0, R]. [:1 Corollary 2.1 Assume that R. and k are such that Lemmas 2.3 and 2.4 are satisfied. Define 11] = inf {/1 I ”XTHLllfl, 1] S n for all rin(0, Bl}. (2.41) For each r E (0, R], _ 1 W l|(a7~1+A7~) 1” S +1 , (2.42) (1r where ||-|| t3 the operator norm on 3 (C(0, 1]). 49 Proof: Representing ur using (2.20), we have for each 1' E (0, R] [13], Mariam 1] = fl — r * E ’ “7‘ 07‘ C[0, 1] fr fr S -— + lerll _— ar C[0,1] LllOa ll ar C[0,1] S (1+M) I: . “7‘ C[0, 1] Therefore representing ur using (2.19), 1 + M (ar[ ‘1' A7)-1f7‘ 010,1] 2 HuquQ 1] S ar HfTHC[0,1]7 and thus _ 1+ M (as... I): , Ctr for all r E (0, R]. [:1 2.1.4 Uniform Convergence with A Priori Parameter Selec- tion In Theorem 3.1 of [22], under condition [A0] and the original (H1)-(H3) in condition [A1] described in the remark, uniform convergence of solutions of (2.6) to u was proved given exact data f E C[0,1+ R]. Replacing f by f6 in equation (2.6), an a priort rule was given to guarantee uniform convergence for the case when the given data f 0 E C [0, 1 + R] contained noise. For purposes of obtaining a rate of convergence, it was assumed that '17: is uniformly Holder continuous with power (1 E (0, 1] and Holder constant Lg, i.e. for any x, y E [0, 1 + R], was) — My)! 
3 La Ix — yla- (2.43) We state the results of Theorem 3.1 of [22] using the hypotheses [A0] and [A1], as well as the conditions on R and k as given here. We instead consider solutions of the normalized approximating equation and give a proof that will be referenced in sections that follow. Theorem 2.1 [212/ Assume that [A0] and [A1] hold and k satisfies Lemma 2.4. I. Let u,» denote the solution of equation (2.18) forr E (0, R], for R > 0 sufficiently small. Then [[ur — fil[C[0, 1] —> 0 as 'I‘ —> 0. Moreover, if the true solution 2] satisfies (2.43), then [[ur — i7.|]C[0,1] = O (ra) as r —+ 0. 2. Let if; denote the solution to equation (2.18) with fr replaced by ffS for r E (0, R], then (5 _ a < 2 — — '— C[0,1] - C174! + ”W “HOW, ll’ for some Cl 2 0. so that a choice of r(6) satisfying 51 i) 7(6) —> 0 as 6 ——> 0, and a) 5 [r(6)]_V —» o as 5 —+ 0, CTLSUT‘BS (1(5) _ 17110“), 1] —->0 as 6—+0. | u If ii satisfies (2.43), then , c5 — (1'7“- _ '(l 6 (:1 7.7 + (3,271), C[0,1] S for some C1, C2 2 0, and so for any K > 0, the choice r = 7(6) = Kill/(a + V) gives [[ugwrz-L 20(50/(“+V>) as 5—)0. C[0.1] Proof: Let [H] = “HC[0 1]. 1. Let ur denote the solution to equation (2.18) for r E (0, R]. We bound the error 52 due to regularization using (2.17), (2.18), (2.19), and (2.23) to obtain HUT-fill = [[(ar1+Ar)—1(fr-Arfi—arfi)” = [[(arl + AT)_1(D,~u — awn)“ (1 + M) a [[Dr'll — a747,” (1+ 1W) KCO - rva— 1) ((137.11 — art-1|). (2.44) Recall the respective definitions of ar and Dr in (2.12) and (2.16). Then using the Taylor expansion of k in (2.26) , we have 16‘ 10” up — s) (m- + s) — no) dsdm-(p) llDr’l—t — art—Ll] = 7r 3 (7 .l-(- — s)ds sup [[(7(t + ) — u(t)|[ v 7. f0 C[0,r]t 6 [0,1] (’[0’ l s or llkllqofl sup nae + -) — mum.) ' t 6 [0,1] ' ‘ ——’”V _1 ll ( ) < )1) g 1r sup ut+-—ui , (V — 1)! t 6 [0,1] C10, 7] (M _ C[0,r] _ _ + Cr sup ”u(t + -) -— u(t)|[ V! t 6 [0,1] C[0,r] — 2 g .*V——— sup ”u(t + -) — u(t)|[ _, (2.45) (1/ —1)!,JE [0,1] Cl03rl for all r E (0. 
R], for R > 0 sufficiently small. Substituting into (2.44), we have (1 + Ill) H170 (WV “m. _ 7-,,” S 11%;: _ ) . (V _ 1)! t 681(0) 1] ”270 + .) —1'1(f)[lC[0,,~] = DI sup “u(t + -) — u(t)||C[0 7.], (2.46) t. 6 [0,1] ’ 53 for DI > 0 constant. Since 77, E C[0, 1 + R] it follows that lim [[ur - 17]] S Dl lim sup ||a(t+ ) —1‘1(t)[[C 0 = 0. r—>0 T—*0tE[0,1] [’7‘] If the true solution "(“1 satisfies (2.43) , it follows that sup Mac + .1 — 21(01qu r, s sup [IL/u(‘lallcm, a tE[0,1] tE[0,1] S Lara. Returning to (2.46), we have [[217 — 21]] < DlLflTa = Cgra, for C2 > 0 constant. Therefore [[ur — 17]] = O (ra) —+ 0 as r —> 0. 2. Let ué denote the solution to equation (2.18) with fr replaced by j? for r E (0, R]. we bound the error due to regularization and noise in the data using 54 (2.19), Corollary 2.1, and (2.23) to obtain ué — u7~ [[(arl‘l' A7‘)—1(f76_ fr) (1er) [[ffi‘fr (1+M)I-:.c0 rV(I~c—1) rV’ |/\ I/\ 56, (2.47) for CI > 0 constant. This establishes the bound on the total error for 17. E C[0, 1] to be u?» — 17]] S [[ué — Ur +1170!“ — all 5 _ S 0177? + [ll/r — 11]]. (2.48) If 1] satisfies (2.43) , then it? — 17. < C11 + CgTOZ. —. TV 1 and so for the choice r = r(6) = chl/(a + V) for some K > 0, we have [1%) ‘ ”ll 3 01mg)? + C2 Wig : (A,.1(,.(K(,1/(a + u))—V + (,2 (mu/(a + 12)) 0' = (71604“ u) 1 (baa/(a +1»), 55 for Cl, (:‘2 > 0 constants, and so I u;,_a]=o(wMa+u) s 5.0 2.2 A DiScrepancy Principle for Local Regulariza- tion Given f5 E C[0, 1] 2.2. 1 Preliminaries For the purpose of defining our discrepancy principle, we make our final assumption that the choice of measures 177. satisfies the following continuity property on (0, R). [A2] The measure 77¢ is chosen so that for any 9 E C[O, R], -—>0 as h—+0, -r+h r /0 9(pld77r+h(p)- f0 gamma) where the convergence is uniform in bounded equicontinuous sets of g E C [0, R]. Our assumption implies the following. Lemma 2.5 Assume [A 0/ - [A2] are satisfied. For any 9 E C[0. 
1 + R], define gr“) :___ lo 90 :me-(pl’ 56 for allt E [0, 1]. Then the mapping r 1—+ gar is continuous in CIO, 1], for all r E (0, ..R) Proof: Let [H] = II-IICIO 1] and r E (0, R) be fixed. Let h be such that r + h E (0, R). By (2.11), we have —1— - . Then 7r _ r000 + h 16 g(- + pldnr + Mp) _ f6 9(- + pldmlp) 7r + h 77“ gr+h_g”" + h It at + on + M _ 16‘ g(- + mare) (r + h)0c0 TOGO f6 + h 9(' + Pldnr + h(/)) - f6 9(- + p)dnr(p) (T + h)0c0 |/\ that + p)dm~(p) _ 169(- + p)dnr(p) (r + h)0c0 raco 1 _ Since ”tr > 0 for all r > 0, the mapping r H — is continuous on (0, R). Then fi/7_ lim 1619(- + pldnvlp) _ for 9(- + p)d'l7r(p) .___ 0 —> 0 (r + h)0c0 raco ’ and 7‘ + h d r d lim [0 g(- + p) .7), + W?) - lo 9(- + n) We) h —-> 0 (T + h)0(:0 1 . = hm sup 7*0c0 h ——> 0t E [0111 r+h r I) go. + an, ,1 he) — [0 W + new) Define L(h) = sup t 6 [0,1] ) r + h r [A ar+nmi+ho)—I;nr+mmte) and let 6 > 0. By properties of suprema, there exists to E [0. 1] for which L(h) < + balm r + h r [0 900 + elder + Mp) -/0 900 + p)dnr(p) From the continuity property of 717‘ in [A2] and the uniform continuity of g on [0, 1+R], :0, lim r + h r t+ (l —/ t+ dr- h _> 0 f0 9( p) or + Mn) 0 g( [2) MP) uniformly for t E [0,1], and so in particular, there exists 6(6) > 0 such that for all IMp>2o-gHc:(o,1I+hHg<ICIOJIl =0, where the rate is uniform in bounded equicontinuous sets of g in ([0. RI. . Recall the discrete measure defined in part 2 of Lemma 2.1. Namely, for 7/ = 1, 2,- -- , arbitrary, let ,BII, TI E R, t” = 0. 1. - - - L, be fixed so that 0_<_r00 €L lim 0: ISII Ig( (r+h)Tg)—9(T7£1I 06:0 |/\ h——> 3 lim sup gr(( +h) )7' (rr 6 72—2054) ng l) flcllggold :02 where the rate is uniform in equicontinuous sets of g in ( ’[0, R]. 2.2.2 Definition and Properties We are now ready to define our discrepancy principle for selecting the regularization parameter in local regularization on C [0, 1]. 
We restate our assumptions, and modify [A0] to be:

[A0] Let 0 < R ≪ 1 be such that ū ∈ C[0,1+R] and f ∈ R(A) ⊆ C[0,1+R], so that Aū(t) = f(t) for all t in the interval [0,1+R]. Assume that ‖f‖_{C[0,1+R]} ≠ 0. Given δ > 0, the data f^δ(t) is available for all t ∈ [0,1+R] and f^δ ∈ C[0,1+R] satisfies

‖f − f^δ‖_{C[0,1+R]} ≤ δ  and  ‖f^δ‖_{L^2[0,1]} > (τ + 1) δ,

with τ ∈ (1,2) fixed for all δ. If additional data is unavailable, we make do with approximating ū on the slightly smaller interval [0, 1 − R].

[A1] The measure η_r satisfies hypotheses (H1)–(H3) for all r ∈ (0,R].

[A2] The measure η_r is chosen so that for any g ∈ C[0,R],

∫_0^{r+h} g(ρ) dη_{r+h}(ρ) − ∫_0^r g(ρ) dη_r(ρ) → 0,  as h → 0,

where the convergence is uniform in bounded equicontinuous sets of g ∈ C[0,R].

Remark 2.3 A specification of the form ‖f^δ‖_{L^2[0,1]} > (τ + 1) δ is a classical assumption when working with a posteriori parameter selection rules (see [7], [17], and [43]). The specification ‖f‖_{C[0,1+R]} ≠ 0 simply implies that our true data is sufficiently “rich”.

Henceforth we will assume that R and k satisfy the conditions of Lemmas 2.3 and 2.4.

Definition 2.1 Discrepancy Principle for Local Regularization in C[0,1]
Assume that the conditions [A0]–[A2] are satisfied. Let d : (0,R) → [0,∞) be the discrepancy functional defined by

d(r) := a_r^{1+m} ‖A_r u_r^δ − f_r^δ‖_{L^2[0,1]},   (2.49)

for m ∈ (0,1] fixed. Choose the regularization parameter r so that

a_r^{1+m} ‖A_r u_r^δ − f_r^δ‖_{L^2[0,1]} = τ δ.   (2.50)

By Lemma 2.2, we observe that A_r → A and f_r^δ → f^δ in B(C[0,1]) and C[0,1] respectively as r → 0 (and therefore also in B(L^2[0,1]) and L^2[0,1] respectively). Also, by Lemma 2.2 we have that a_r ∼ r^ν. Thus for R small, our rule leads one to select the parameter r = r(δ) such that

r^{ν(1+m)} ‖A u_r^δ − f^δ‖_{L^2[0,1]} ∼ a_r^{1+m} ‖A_r u_r^δ − f_r^δ‖_{L^2[0,1]} = τ δ,

which corresponds roughly to the class of modified discrepancy principles defined in Chapter 1.
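Before establishing existence of a parameter satisfying the principle, a numerical sketch may help fix ideas. The computation below is illustrative only: it takes the simplest 1-smoothing instance, k ≡ 1 with η_r equal to Lebesgue measure on [0,r] (so that σ = 1, c_0 = 1, γ_r = r, a_r = r/2, and k_r ≡ 1), solves the regularized second-kind equation sequentially, and selects r by scanning for the smallest grid value whose discrepancy d from (2.49) reaches τδ. The kernel, data, noise, grid, and the choices m = 1/2, τ = 3/2 are hypothetical.

```python
import numpy as np

# Model 1-smoothing problem: k = 1, Au(t) = \int_0^t u(s) ds, with
# eta_r = Lebesgue measure on [0, r], so a_r = r/2 and k_r = 1.
R = 0.1
h = 1.0e-3
n1 = 1000                                        # index of t = 1
t = h * np.arange(int(round((1 + R) / h)) + 1)   # grid on [0, 1 + R]

u_true = np.cos(2 * t)
f = 0.5 * np.sin(2 * t)                  # exact data, f = A u_true
delta = 1e-4
f_delta = f + delta * np.cos(200 * t)    # sup-norm noise bounded by delta

def local_reg(r_steps):
    """Solve a_r u(t) + \int_0^t u(s) ds = f_r(t) on [0,1], sequentially."""
    r = r_steps * h
    a_r = r / 2.0
    # f_r(t) = (1/r) \int_0^r f_delta(t + rho) d(rho): a moving average
    fr = np.convolve(f_delta, np.ones(r_steps) / r_steps, mode="valid")[:n1 + 1]
    u = np.zeros(n1 + 1)
    acc = 0.0                            # running left-rectangle value of \int_0^t u
    for j in range(n1 + 1):
        u[j] = (fr[j] - acc) / a_r       # causal, one unknown at a time
        acc += h * u[j]
    return u, a_r

# Discrepancy principle: smallest grid r with d(r) >= tau * delta.
# For this scheme A_r u - f_r = -a_r u, so the residual is a_r * u.
m, tau = 0.5, 1.5
r_sel, u_sel = None, None
for r_steps in range(5, int(round(R / h)) + 1, 5):
    u, a_r = local_reg(r_steps)
    d = a_r ** (1 + m) * np.sqrt(h * np.sum((a_r * u) ** 2))
    if d >= tau * delta:
        r_sel, u_sel = r_steps * h, u
        break
```

The identity A_r u_r^δ − f_r^δ = −a_r u_r^δ used for the residual follows directly from equation (2.18), so evaluating d costs no more than solving the regularized equation itself; the grid scan stands in for solving d(r) = τδ exactly.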
we will show existence of an r E (0, R) for which d(r) = r6 once establishing properties of d. To do so, we use the bounds on Hay“ in following lemma. LQIO, 1] Lemma 2.7 Assume [A 0] and [A 1/ are satisfied. For any 7‘ E (0, R], HfTHI.2[0,1I a... + CIIA‘IICIO, 1 + R] fr (1. r (2.51) < < fl" AWN/3210.1141+ ” LQIo,1I’ where u7~ is the solution. of (2.18). 63 Proof: Let [II] = II-IILQIO 1] . Representing 7772 using (2.19), for any 7' E (0, R], llfrll llfarI + Arlurll S ((17 + IIATII) HWII Let :1: E LQIO, 1]. Recalling the notation in (2.13), “Ar” = SUP IIArIII llrll = 1 = sup II/ k7~(- — s):r:(s)ds Hill =1 0 S sup llkrll III” III“ = 1 : 16kt +p)dn2~(p) ”Yr S Cllk'lle[o,1+RI- Therefore Ilfrll s (a2-+ (0,00) defined as in (2.49) has the following properties: i) The mapping r I—> d(r) is continuous on (0, R). ii) lim d(r) = 0. r ——> 0 iii) There exists an R = R(f, R), 6 = 6(f,k, R), and 73 = 725(f,k, R), such that if > 7'36, then lim ~ d(r) > T6. 6E(0,6) or llféllLQI THR 0,1] llféllL'ZIo, 1] 6 sufiiciently large, there exists Therefore, for 6 sufficiently small or r E (0, R) such that d(r) = 76. Proof: Let [II] = ”'“LQIO 1]. i) Fix r E (0, R) and let. h be such that r + h. E (0, R). From the definition of ur, we have d(r) = 2%+ m 22? 1 65 where Lemma 2.2 gives ”r > 0 for all r E (0, R]. Thus (ar+m—dan== fiifllu$+Aqutm d 1+ m 6 6 1+ m 1+ m ' s ar+h u,.+,,_e,. + ar+h _e, IIIegII. Since r is fixed, we may use (2.51) and obtain ff ar a1+m me+h)— d(r S+41Hufi+h-uu7 _I_ a}: 71-— at+mI(1+M) Since ,0 I—> fgk(s)ds is continuous on (0, r) for all r E (0, R], it follows from [A2] that r I——> or is continuous on (0, R). Therefore 01+m al+m (r+h—Ir —)0 as Ila—)0, and thus lim0 [d(r + h) —— d(r )I < a1+m limsupIqu:+ h— u(5 (2.52) —> h —> From the representation of ii? in (2.20), we have 6 6 ‘ IIu6 u6II: f__r+h _f7___ h*fr+h+xr*_r(i “r+h_ “r+h a7“ 71+. ar+h (‘7 66 Define for all t E [0, 1], M, (”3+ 11(1) _ 19(1) — “r+h 0'7" ~ Xh(t) 3= XI + hft) - X170) (511(1):: kr + ha) _ krff') a7. 
Using the above notation and adding and subtracting X_{r+h} * (f_r^δ/a_r), we have

  ||u_{r+h}^δ − u_r^δ|| ≤ ||f̃_h|| + || X̃_h * (f_{r+h}^δ/a_{r+h}) || + || X_r * f̃_h ||
   ≤ ||f̃_h|| + || X̃_h ||_{L¹[0,1]} ||f_{r+h}^δ||/a_{r+h} + || X_r ||_{L¹[0,1]} ||f̃_h||
   ≤ (1+M) ||f̃_h|| + || X̃_h ||_{L¹[0,1]} ||f_{r+h}^δ||/a_{r+h}.

Using the fact that r ↦ a_r is continuous on (0,R) and Lemma 2.5, we have

  ||f̃_h|| ≤ (1/a_{r+h}) || f_{r+h}^δ − f_r^δ || + | 1/a_{r+h} − 1/a_r | ||f_r^δ|| → 0  as h → 0,

while ||f_{r+h}^δ||/a_{r+h} remains bounded as h → 0. Therefore

  limsup_{h→0} ||u_{r+h}^δ − u_r^δ|| ≤ ( ||f_r^δ||/a_r ) limsup_{h→0} || X̃_h ||_{L¹[0,1]},

and, by (2.52),

  lim_{h→0} |d(r+h) − d(r)| ≤ a_r^{1+m} ( ||f_r^δ||/a_r ) limsup_{h→0} || X̃_h ||_{L¹[0,1]}.

It remains to prove that limsup_{h→0} || X̃_h ||_{L¹[0,1]} = 0. Using equation (2.21), we have

  X_{r+h}(t) − X_r(t) + (k_{r+h}/a_{r+h}) * X_{r+h}(t) − (k_r/a_r) * X_r(t) = k_{r+h}(t)/a_{r+h} − k_r(t)/a_r

for all t ∈ [0,1]. Adding and subtracting (k_{r+h}/a_{r+h}) * X_r, we obtain

  X̃_h(t) + (k_{r+h}/a_{r+h}) * X̃_h(t) + k̃_h * X_r(t) = k̃_h(t),

and so, by Gronwall's inequality, it follows from [13] that

  || X̃_h ||_{L¹[0,1]} ≤ ( || k̃_h * X_r ||_{C[0,1]} + || k̃_h ||_{C[0,1]} ) exp( || k_{r+h}/a_{r+h} ||_{C[0,1]} )
   ≤ || k̃_h ||_{C[0,1]} ( 1 + || X_r ||_{L¹[0,1]} ) exp( || k_{r+h}/a_{r+h} ||_{C[0,1]} ).

Since the maps r ↦ k_r and r ↦ a_r are continuous (Lemma 2.5 and [A2]) and a_r > 0, it follows that

  || k̃_h ||_{C[0,1]} ≤ (1/a_{r+h}) || k_{r+h} − k_r ||_{C[0,1]} + | 1/a_{r+h} − 1/a_r | || k_r ||_{C[0,1]} → 0  as h → 0,

and

  exp( || k_{r+h}/a_{r+h} ||_{C[0,1]} ) → exp( || k_r/a_r ||_{C[0,1]} )  as h → 0.

Therefore limsup_{h→0} || X̃_h ||_{L¹[0,1]} = 0, and so lim_{h→0} |d(r+h) − d(r)| = 0, proving that r ↦ d(r) is continuous on (0,R).

ii) Using that a_r > 0, (H3) to bound ||f_r^δ||, and the upper bounds on a_r in Lemma 2.2 and on ||u_r^δ|| in Lemma 2.7,

  d(r) = a_r^m || A_r u_r^δ − f_r^δ || = a_r^{1+m} ||u_r^δ||  (2.53)
   ≤ a_r^{1+m} (1+M) ||f_r^δ|| / a_r = a_r^m (1+M) ||f_r^δ||
   ≤ r^{νm} ((κ+1)/c_0)^m (1+M) C̃ ||f^δ||_{C[0,1+R]}.

Since m ∈ (0,1], it follows that lim_{r→0} d(r) = 0.
iii) Define

  R̃ := min{ R, ||f||_{L²[0,1]} / ( 2 C̃ ||f'||_{C[0,1+R]} ) }.  (2.54)

Then for all r ∈ (0,R̃],

  |f_r(t) − f(t)| = | (1/γ_r) ∫_0^r [f(t+ρ) − f(t)] dη_r(ρ) | ≤ C̃ ||f'||_{C[0,1+R]} r,  (2.55)

so that

  ||f_r − f|| ≤ C̃ ||f'||_{C[0,1+R]} r ≤ ||f||/2 ≤ ½ [ ||f − f^δ|| + ||f^δ|| ] ≤ ½ [ δ + ||f^δ|| ].

For all r ∈ (0,R̃], define

  B(r) := a_r^{1+m} / ( a_r + C̃ ||k||_{C[0,1+R]} ).  (2.56)

Then, using the lower bound on ||u_r^δ|| in Lemma 2.7 and (2.55), we have for all r ∈ (0,R̃],

  d(r) = a_r^{1+m} ||u_r^δ|| ≥ B(r) ||f_r^δ||
   ≥ B(r) [ ||f^δ|| − ||f_r^δ − f_r|| − ||f − f^δ|| − ||f_r − f|| ]
   ≥ B(r) [ ||f^δ|| − C̃δ − δ − ½( δ + ||f^δ|| ) ]
   = B(r) [ ||f^δ||/2 − ( C̃ + 3/2 ) δ ].  (2.57)

For all r ∈ (0,R̃], define

  F(r) := B(r) / ( 2 [ τ + B(r)( C̃ + 3/2 ) ] ),  (2.58)

and define

  δ̄ := ||f|| F(R̃) / ( 1 + F(R̃) )  (2.59)  and  γ̄ := 1/F(R̃).  (2.60)

By [A0], for δ ∈ (0,δ̄), the inequalities

  ||f|| ≤ ||f^δ|| + ||f − f^δ|| ≤ ||f^δ|| + δ  and  (τ+1)δ < ||f^δ|| ≤ ||f|| + δ

imply ||f^δ|| ≥ ||f|| − δ > 0, and hence

  δ/||f^δ|| ≤ δ/( ||f|| − δ ) < δ̄/( ||f|| − δ̄ ) = F(R̃).  (2.61)

Equivalently,

  ||f^δ|| > δ/F(R̃) = γ̄δ.  (2.62)

Alternatively, if δ ∉ (0,δ̄), the assumption that ||f^δ||_{L²[0,1]} > γ̄δ implies that (2.62) still holds. Then substituting (2.56) and (2.62) into the lower bound on d in (2.57) and taking the limit as r approaches R̃, we have that

  lim_{r→R̃} d(r) ≥ B(R̃) [ ||f^δ||/2 − ( C̃ + 3/2 )δ ]
   > ( B(R̃)/2 ) [ 1/F(R̃) − ( 2C̃ + 3 ) ] δ
   = ( B(R̃)/2 ) [ 2( τ + B(R̃)( C̃ + 3/2 ) )/B(R̃) − ( 2C̃ + 3 ) ] δ
   = [ τ + B(R̃)( C̃ + 3/2 ) − B(R̃)( C̃ + 3/2 ) ] δ
   = τδ. □

We observe that the choice r(δ) of the regularization parameter given by the discrepancy principle in (2.50) is bounded away from zero by r*(δ) > 0, where r*(δ) is defined in the next lemma.

Lemma 2.9 Let δ ∈ (0,δ̄) or ||f^δ|| > γ̄δ, where δ̄, γ̄ > 0 are as given in Lemma 2.8. Let r = r(δ) be defined by

  r(δ) = min{ r ∈ (0,R̃) | d(r) = τδ }.  (2.63)

There exists an r* = r*(δ) > 0 such that r(δ) ≥ r* > 0, where r* ∈ (0,R̃) is given by

  r* := ( τδ/c̄ )^{1/(νm)},

with

  c̄ := ((κ+1)/c_0)^m (1+M) C̃ ||f^δ||_{C[0,1+R]} > 0.

Proof: We first observe that the set {r ∈ (0,R̃) | d(r) = τδ} is compact and thus has a minimum value r(δ). Note that

  d(r) ≤ a_r^m (1+M) ||f_r^δ|| ≤ ((κ+1) r^ν / c_0)^m (1+M) C̃ ||f^δ||_{C[0,1+R]} = c̄ r^{νm},

by Lemma 2.2 and Corollary 2.1. Since r ↦ c̄ r^{νm}
is a continuous, strictly increasing function that bounds d from above for all r ∈ (0,R̃), we have

  lim_{r→R̃} c̄ r^{νm} ≥ lim_{r→R̃} d(r) > τδ.

Therefore there exists a unique r*(δ) ∈ (0,R̃] for which c̄ (r*)^{νm} = τδ, and so for r(δ) ∈ (0,R̃) for which d(r(δ)) = τδ, we have necessarily that r(δ) ≥ r*(δ) > 0. Further,

  r*(δ) = ( τ/c̄ )^{1/(νm)} δ^{1/(νm)} = ( τ c_0^m / ( (κ+1)^m (1+M) C̃ ||f^δ||_{C[0,1+R]} ) )^{1/(νm)} δ^{1/(νm)}. □

2.2.3 Uniform Convergence

We now make more definite the choice of the regularization parameter r given by our discrepancy principle.

Definition 2.2 (Discrepancy Principle for Local Regularization) Let d : (0,R̃) → [0,∞) be the discrepancy functional defined by

  d(r) := a_r^m || A_r u_r^δ − f_r^δ ||_{L²[0,1]},  (2.64)

for m ∈ (0,1] fixed. Choose the regularization parameter r = r(δ) to be the smallest r ∈ (0,R̃) so that

  a_r^m || A_r u_r^δ − f_r^δ ||_{L²[0,1]} = τδ.  (2.65)

Remark 2.4 Any r ∈ (0,R̃) satisfying (2.65) would be acceptable.

We now prove that local regularization with the discrepancy principle defined via equation (2.65) is a convergent regularization method for f^δ ∈ C[0,1]. For purposes of obtaining a rate of convergence, we make the additional smoothness assumption that ū is uniformly Hölder continuous with exponent α ∈ (0,1] and Hölder constant L_α. Recall the definitions of R̃, B(r), and F(r) in (2.54), (2.56), and (2.58), respectively.

Theorem 2.2 Assume that [A0]–[A2] hold and let δ̄, γ̄ > 0 be given as in Lemma 2.8. For δ ∈ (0,δ̄) or ||f^δ|| > γ̄δ, let u_r^δ denote the solution to equation (2.18) with f_r replaced by f_r^δ. Then, for r(δ) selected according to the discrepancy principle in Definition 2.2, we have:

1. r(δ) → 0 as δ → 0.

2. ||u_{r(δ)}^δ − ū||_{C[0,1]} → 0 as δ → 0.

3. If ū satisfies condition (2.43), then

  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{νm/(ν(1+m))} ) + O( δ^{α/(ν(1+m))} )  as δ → 0,

and so the rate of convergence is determined by min{α, νm}. If ω = min{α, νm}, then

  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{ω/(ν(1+m))} )  as δ → 0.

Moreover, if the choice of m is such that m = α/ν, then
  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{α/(α+ν)} )  as δ → 0,

which is identical to the rate of convergence obtained in Theorem 2.1 using the a priori rule to select the parameter r = r(δ).

Proof: 1. Let {δ_n}_{n≥1} be a positive sequence for which δ_n → 0 as n → ∞, with δ_n ∈ (0,δ̄) or ||f^{δ_n}|| > γ̄δ_n for each n, and ||f − f^{δ_n}||_{C[0,1+R]} ≤ δ_n for each n. Let {r_n}_{n≥1} be the corresponding sequence of regularization parameter values selected according to the discrepancy principle for local regularization given in Definition 2.2, namely, for each n,

  r_n = r(δ_n) = min{ r ∈ (0,R̃) | d(r) = τδ_n }.  (2.66)

Using the lower bound on d in (2.57), we have

  τδ_n = d(r_n) ≥ B(r_n) [ ||f^{δ_n}||/2 − ( C̃ + 3/2 )δ_n ] ≥ B(r_n) [ ( ||f|| − δ_n )/2 − ( C̃ + 3/2 )δ_n ] = B(r_n) [ ||f||/2 − ( C̃ + 2 )δ_n ],

and so

  0 = lim_{n→∞} τδ_n ≥ limsup_{n→∞} B(r_n) ||f||/2 ≥ 0.

We now claim that lim_{n→∞} r_n = 0. By (2.61), ||f|| > 0, so the above can only be true if lim_{n→∞} B(r_n) = 0. Recalling the definition of B(r_n) in (2.56), it must be that lim_{n→∞} a_{r_n} = 0. Using the lower bound on a_r in Lemma 2.2, we have

  0 = lim_{n→∞} a_{r_n} ≥ ((κ−1)/c_0) lim_{n→∞} r_n^ν ≥ 0.

Therefore lim_{n→∞} r_n = 0.

2. Let r be chosen according to the discrepancy principle for local regularization given in Definition 2.2, i.e., r = r(δ) = min{ r ∈ (0,R̃) | d(r) = τδ }. Let ||·|| = ||·||_{L²[0,1]}. Using (2.53), our choice of r = r(δ) is such that

  a_{r(δ)}^{1+m} ||u_{r(δ)}^δ|| = τδ,

and so, representing u_{r(δ)}^δ as in (2.19), we obtain by substituting into (2.47),

  ||u_{r(δ)}^δ − u_{r(δ)}|| ≤ (1+M) C̃ δ / a_{r(δ)} = (1+M) C̃ a_{r(δ)}^m ||u_{r(δ)}^δ|| / τ ≤ C_1 [r(δ)]^{νm} ||u_{r(δ)}^δ||,

for C_1 > 0 constant. Using Theorem 2.1 and the first part of the theorem, where we proved that lim_{δ→0} r(δ) = 0, we have

  ||u_{r(δ)}^δ|| ≤ ||u_{r(δ)}^δ − u_{r(δ)}|| + ||u_{r(δ)} − ū|| + ||ū|| ≤ C_1 [r(δ)]^{νm} ||u_{r(δ)}^δ|| + 2||ū||,

for δ > 0 sufficiently small. Since C_1 [r(δ)]^{νm} ≤ 1/3 for such δ, it follows that

  limsup_{δ→0} ||u_{r(δ)}^δ|| ≤ 3||ū||.
(2.67)

Substituting the principle into (2.48), we have

  ||u_{r(δ)}^δ − ū||_{C[0,1]} ≤ ||u_{r(δ)}^δ − u_{r(δ)}|| + ||u_{r(δ)} − ū|| ≤ C_1 [r(δ)]^{νm} ||u_{r(δ)}^δ|| + ||u_{r(δ)} − ū||.

Then by (2.67) and part 1 of the theorem, it follows that

  lim_{δ→0} ||u_{r(δ)}^δ − ū||_{C[0,1]} ≤ lim_{δ→0} ( C_1 [r(δ)]^{νm} ||u_{r(δ)}^δ|| + ||u_{r(δ)} − ū|| ) = 0,

proving that u_{r(δ)}^δ converges uniformly to ū on [0,1] as δ → 0.

3. Returning to (2.48), we have

  ||u_{r(δ)}^δ − ū||_{C[0,1]} ≤ C̄_1 [r(δ)]^α + C̄_2 δ/[r(δ)]^ν.

To obtain a rate of convergence, it remains to bound [r(δ)]^α and δ/[r(δ)]^ν in terms of δ using our rule. First we bound [r(δ)]^α in terms of δ. Using the upper bound on a_r in Lemma 2.2 to bound B(r) in (2.56), we have for all r ∈ (0,R̃],

  B(r) ≤ ((κ+1)/c_0)^{1+m} r^{ν(1+m)} / ( C̃ ||k||_{C[0,1+R]} ) =: E_0 r^{ν(1+m)},

and so it follows from the lower bound on d in (2.57) that

  τδ = d(r(δ)) ≥ B(r(δ)) [ ||f^δ||/2 − ( C̃ + 3/2 )δ ] ≥ B(r(δ)) [ ||f||/2 − ( C̃ + 2 )δ ].  (2.68)

Thus, since B(r)( C̃ + 2 ) ≤ E_0 R̃^{ν(1+m)}( C̃ + 2 ) =: E_1 for all r ∈ (0,R̃],

  ( τ + E_1 ) δ ≥ B(r(δ)) ||f||/2,  (2.69)

for the constant E_1 > 0. Now using both bounds on a_r in Lemma 2.2 to bound B(r) in (2.56), we obtain for all r ∈ (0,R̃),

  B(r) ≥ ((κ−1)/c_0)^{1+m} r^{ν(1+m)} / ( (κ+1)R̃^ν/c_0 + C̃ ||k||_{C[0,1+R]} ) =: E_2 r^{ν(1+m)},  (2.70)

for the constant E_2 > 0. Combining (2.69) and (2.70), we have

  ( 2(τ + E_1) / ( E_2 ||f|| ) ) δ ≥ [r(δ)]^{ν(1+m)},

and so, raising both sides to the power α/(ν(1+m)),

  [r(δ)]^α ≤ E δ^{α/(ν(1+m))},  (2.71)

for E > 0 constant. Next we obtain the bound for δ/[r(δ)]^ν in terms of δ. Bounding d above using the inequality in (2.53), Lemma 2.2, and [A0], we have

  τδ = a_{r(δ)}^{1+m} ||u_{r(δ)}^δ|| ≤ a_{r(δ)}^{1+m} [ ||u_{r(δ)}^δ − u_{r(δ)}|| + ||u_{r(δ)} − ū|| + ||ū|| ]
   ≤ a_{r(δ)}^m (1+M) C̃ δ + ( (κ+1)[r(δ)]^ν/c_0 )^{1+m} ( C_2 [r(δ)]^α + ||ū|| )
   ≤ G_1 [r(δ)]^{νm} δ + G_2 [r(δ)]^{ν(1+m)},

for G_1 > 0 and G_2 > 0 constants. Since τ − G_1 [r(δ)]^{νm} ≥ τ − G_1 R̃^{νm} > 0 for r(δ) ∈ (0,R̃], for R̃ sufficiently small, we have that

  ( ( τ − G_1 R̃^{νm} ) / G_2 ) δ ≤ [r(δ)]^{ν(1+m)},

for δ > 0 sufficiently small so that r(δ) ∈ (0,R̃).
Then

  δ / [r(δ)]^ν ≤ G̃ δ^{νm/(ν(1+m))},  (2.72)

for G̃ > 0 constant and δ > 0 sufficiently small. Substituting (2.71) and (2.72) into (2.48), we have

  ||u_{r(δ)}^δ − ū||_{C[0,1]} ≤ C̄_1 [r(δ)]^α + C̄_2 δ/[r(δ)]^ν ≤ C̄_1 E δ^{α/(ν(1+m))} + C̄_2 G̃ δ^{νm/(ν(1+m))},

and so

  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{α/(ν(1+m))} ) + O( δ^{νm/(ν(1+m))} )  as δ → 0.

If ω = min{α, νm}, then

  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{ω/(ν(1+m))} )  as δ → 0.

Then taking m = α/ν, it follows that

  ||u_{r(δ)}^δ − ū||_{C[0,1]} = O( δ^{α/(α+ν)} )  as δ → 0. □

CHAPTER 3

EXTENSIONS OF THE THEORY OF LOCAL REGULARIZATION

The purpose of this chapter is to extend the theory of local regularization to solving the ν-smoothing problem in the case when the true solution is no longer continuous, but instead contained in the space L^p[0,1], for some 1 < p < ∞. In this chapter, we take the operator A : L^p[0,1] → L^p[0,1] to be defined by

  Au(t) := ∫_0^t k(t−s) u(s) ds,  a.e. t ∈ [0,1],  (3.1)

where the kernel k ∈ C^ν[0,1] is ν-smoothing, ν ≥ 1, and it is assumed without loss of generality that k^{(ν−1)}(0) = 1. We would now like to solve Au = f for u ∈ L^p[0,1] with f ∈ R(A) ⊆ L^p[0,1]. An argument may be made similar to the one in Chapter 2 to show that

  R(A) = W_0^{ν,p} := { g ∈ W^{ν,p}[0,1] | g(0) = g'(0) = ··· = g^{(ν−1)}(0) = 0 },

and that N(A) = {0}. Since dim R(A) = ∞ and A is compact, it follows that A^{−1} is unbounded and the problem is ill-posed.

We begin our extension of the ideas in [22] by rederiving the second-kind equation associated with local regularization for the case when the underlying space is L^p[0,1], 1 < p < ∞. Modifying the conditions in [22], assumptions [A0] and [A1] are redefined to be [A0] and [A1-p]. We again restrict the choice of regularization parameter to an interval (0,R] over which R ||k^{(ν)}||_{C[0,R]} is sufficiently small, so that the resolvent of the approximating equation remains uniformly bounded for all r ∈ (0,R]. We prove that these conditions are sufficient to guarantee convergence in L^p[0,1],
1 < p < ∞, of regularized approximations to the true solution ū ∈ L^p[0,1] in the noise-free case. Given noisy data f^δ ∈ L^p[0,1], we provide an a priori parameter selection strategy and determine a rate of L^p-convergence of approximations to ū satisfying the source condition of uniform Hölder continuity. In doing so, we complete our generalization of [22].

Next, we once again apply the normalized local regularization scheme satisfying [A0] and [A1-p] to our problem, define the additional assumption [A2], redefine our discrepancy principle in this new context, and establish its properties. We show that approximations, constructed from noisy data f^δ ∈ L^p[0,1], 1 < p < ∞, using a local regularization scheme satisfying [A0], [A1-p], and [A2], with the new discrepancy principle used to select the regularization parameter, converge in L^p[0,1], 1 < p < ∞, to the true solution ū ∈ L^p[0,1] as the noise level shrinks to zero. We also give a rate of convergence for ū uniformly Hölder continuous. We conclude that local regularization paired with the redefined discrepancy principle is a convergent regularization method.

3.1 Extensions to L^p[0,1], 1 < p < ∞

Again, let ū denote the "true" solution given "exact" data f, and let f^δ denote the "noisy" version of f. We begin with an assumption on the data.

[A0] Let 0 < R << 1 be such that ū ∈ L^p[0,1+R] and f ∈ R(A) ⊆ L^p[0,1+R], so that Aū = f for a.e. t on the interval [0,1+R]. Given δ > 0, the data f^δ(t) is available for a.e. t ∈ [0,1+R], and f^δ ∈ L^p[0,1+R] satisfies

  ||f − f^δ||_{L^p[0,1+R]} ≤ δ.

If additional data is unavailable, we suffice with approximating ū on the slightly smaller interval [0,1−R].

3.1.1 The Approximating Equation

As in [22], it follows from [A0] that ū satisfies

  ∫_0^ρ k(ρ−s) ū(t+s) ds + ∫_0^t k(t+ρ−s) ū(s) ds = f(t+ρ),  (3.2)

for a.e. t ∈ [0,1], ρ ∈ [0,r], and for any r ∈ (0,R].
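The splitting identity (3.2) can be checked numerically in the simplest 1-smoothing case. The snippet below is illustrative only; the kernel k ≡ 1 and true solution ū(t) = t are our assumptions for the example, not choices made in this dissertation.

```python
import math

def quad(g, a, b, n=200):
    """Composite midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

k = lambda u: 1.0            # 1-smoothing kernel with k(0) = 1
ubar = lambda t: t           # assumed true solution (illustrative)
f = lambda t: t * t / 2.0    # f = A ubar, so f(t) = integral of ubar over [0, t]

def lhs(t, rho):
    # integral_0^rho k(rho - s) ubar(t + s) ds + integral_0^t k(t + rho - s) ubar(s) ds
    return (quad(lambda s: k(rho - s) * ubar(t + s), 0.0, rho)
            + quad(lambda s: k(t + rho - s) * ubar(s), 0.0, t))
```

Evaluating lhs(t, ρ) against f(t+ρ) at several points confirms that ū satisfies (3.2) exactly in this setting; the midpoint rule is exact here because the integrands are linear in s.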
For each r ∈ (0,R], consider the space of all bounded linear functionals on L^p[0,r], 1 < p < ∞. Recall that the continuous dual space of L^p[0,r] can be identified with the space L^q[0,r] for 1/p + 1/q = 1. Then for any η_r ∈ [L^p[0,r]]*, there exists ψ_r ∈ L^q[0,r] representing η_r through integration against ψ_r. We define the measure η_r by

  ∫_0^r g(ρ) dη_r(ρ) := ∫_0^r g(ρ) ψ_r(ρ) dρ,  (3.3)

for g ∈ L^p[0,r], where dρ denotes Lebesgue measure. Applying the functional η_r, we integrate both sides of equation (3.2) with respect to the measure η_r and obtain

  ∫_0^r ∫_0^ρ k(ρ−s) ū(t+s) ds dη_r(ρ) + ∫_0^t ∫_0^r k(t+ρ−s) dη_r(ρ) ū(s) ds = ∫_0^r f(t+ρ) dη_r(ρ),  (3.4)

which ū satisfies for each 0 < r ≤ R and a.e. t ∈ [0,1]. With the idea of holding u constant locally, we consider the second-kind Volterra equation

  u(t) ∫_0^r ∫_0^ρ k(ρ−s) ds dη_r(ρ) + ∫_0^t ∫_0^r k(t+ρ−s) dη_r(ρ) u(s) ds = ∫_0^r f(t+ρ) dη_r(ρ),  (3.5)

for a.e. t ∈ [0,1] and r ∈ (0,R].

3.1.2 Properties of η_r

We again specify how to select a family of measures η_r, r ∈ (0,R], in the approximating equation (3.5), redefining [A1] to be [A1-p].

[A1-p] The measure η_r is chosen to satisfy the following hypotheses.

(H1) For i = 0, 1, ..., ν, there are some θ ∈ ℝ and c_i = c_i(ν) ∈ ℝ, independent of r, such that

  ∫_0^r ρ^i dη_r(ρ) = r^{i+θ} c_i,

with c_ν ≠ 0. Without loss of generality, we may assume that η_r is scaled so that c_ν = ν!.

(H2) The parameters c_i, i = 0, 1, ..., ν, satisfy the condition that the roots of the polynomial p_ν(λ) defined by

  p_ν(λ) := (c_ν/ν!) λ^ν + (c_{ν−1}/(ν−1)!) λ^{ν−1} + ··· + (c_1/1!) λ + c_0

have negative real part.

(H3) There exists C̃ > 0, independent of r, such that

  | ∫_0^r h(ρ) dη_r(ρ) | ≤ C̃ ||h||_{C[0,r]} r^θ,

for each h ∈ C[0,r] and any r ∈ (0,R], and there exists τ_p < 1, independent of r, such that, for each h ∈ L^p[0,r], 1 < p < ∞, and any r ∈ (0,R],

  | ∫_0^r h(ρ) dη_r(ρ) |
  ≤ C̃ ||h||_{L^p[0,r]} r^{θ − τ_p}.

Remark 3.1 If the function to which the measure η_r is applied is in C[0,r], then (H3) of [A1-p] coincides with (H3) of [A1] in Chapter 2. However, if the measure η_r is applied to a function that is only contained in L^p[0,r], then the bound in (H3) of [A1-p] requires the factor r^{−τ_p}.

We give an example of a class of measures (similar to the one defined in Lemma 2.2 of [22]) that can be constructed to satisfy assumption [A1-p].

Lemma 3.1 Let ν = 1, 2, ... be arbitrary and let q be such that 1/p + 1/q = 1, for each 1 < p < ∞. Let ψ ∈ L^q[0,1] be such that ∫_0^1 ρ^ν ψ(ρ) dρ = ν!. Then for r ∈ (0,R], the measure η_r defined by

  ∫_0^r g(ρ) dη_r(ρ) := ∫_0^r g(ρ) ψ_r(ρ) dρ,  g ∈ L^p[0,r],

where ψ_r ∈ L^q[0,r] is given by

  ψ_r(ρ) := ψ(ρ/r),  a.e. ρ ∈ [0,r],

satisfies condition (H1) (with c_i = ∫_0^1 ρ^i ψ(ρ) dρ and θ = 1) and condition (H3) (with C̃ = ||ψ||_{L^q[0,1]} and τ_p = 1/p). Further, for all ν = 1, 2, ..., and given arbitrary positive m_1, m_2, ..., m_ν, there is a unique monic polynomial ψ of degree ν so that the resulting family {η_r} satisfies (H1) with c_ν = ν! and θ = 1, (H2) with the roots of the polynomial p_ν in (H2) given by (−m_i), i = 1, ..., ν, and (H3).

Proof: The proper construction of the measure η_r satisfying (H1) and (H2) can be handled as in Lemma 2.2 of [22]; namely, for i = 0, 1, ..., ν,

  ∫_0^r ρ^i dη_r(ρ) = ∫_0^r ρ^i ψ(ρ/r) dρ = r ∫_0^1 (rρ)^i ψ(ρ) dρ = r^{i+1} ∫_0^1 ρ^i ψ(ρ) dρ = r^{i+1} c_i.

Then (H1) holds with ∫_0^1 ρ^i ψ(ρ) dρ = c_i ∈ ℝ, independent of r ∈ (0,R], and θ = 1. In order to satisfy (H2), one can always construct a νth degree polynomial ψ for which ∫_0^1 ρ^i ψ(ρ) dρ = c_i, i = 0, 1, ..., ν, where the c_i are determined by the choice of roots of p_ν(λ) and ∫_0^1 ρ^ν ψ(ρ) dρ = c_ν = ν!. To show that (H3) is satisfied for such ψ ∈ L^q[0,1], consider g ∈ C[0,r].
Then

  | ∫_0^r g(ρ) dη_r(ρ) | = | ∫_0^r g(ρ) ψ(ρ/r) dρ | ≤ ||g||_{C[0,r]} ∫_0^r |ψ(ρ/r)| dρ = ||g||_{C[0,r]} r ∫_0^1 |ψ(ρ)| dρ ≤ ||g||_{C[0,r]} ||ψ||_{L^q[0,1]} r.

If g ∈ L^p[0,r], then applying Hölder's inequality,

  | ∫_0^r g(ρ) dη_r(ρ) | = | ∫_0^r g(ρ) ψ(ρ/r) dρ | ≤ ||g||_{L^p[0,r]} ( ∫_0^r |ψ(ρ/r)|^q dρ )^{1/q} = ||g||_{L^p[0,r]} ( r ∫_0^1 |ψ(ρ)|^q dρ )^{1/q} = ||g||_{L^p[0,r]} ||ψ||_{L^q[0,1]} r^{1/q} = ||g||_{L^p[0,r]} ||ψ||_{L^q[0,1]} r^{1 − 1/p}.

For θ = 1, (H3) holds with C̃ = ||ψ||_{L^q[0,1]} and τ_p = 1/p. □

For r ∈ (0,R], assume η_r is any measure satisfying [A1-p] and define

  γ_r := ∫_0^r dη_r(ρ).

As described in Section 2.1.1, it follows from (H1) and (H2) that γ_r = r^θ c_0 > 0 for all r > 0. Then we may define, for any u ∈ L^p[0,1] and v ∈ L^p[0,1+R],

  a_r := (1/γ_r) ∫_0^r ∫_0^ρ k(ρ−s) ds dη_r(ρ),  (3.8)

  k_r(t) := (1/γ_r) ∫_0^r k(t+ρ) dη_r(ρ),  (3.9)

  A_r u(t) := ∫_0^t k_r(t−s) u(s) ds,  (3.10)

  f_r(t) := (1/γ_r) ∫_0^r f(t+ρ) dη_r(ρ),  (3.11)

  D_r v(t) := (1/γ_r) ∫_0^r ∫_0^ρ k(ρ−s) v(t+s) ds dη_r(ρ),  (3.12)

for a.e. t ∈ [0,1] and each r ∈ (0,R]. With this notation, and recalling that ū satisfies (3.4), equations (3.4) and (3.5) may be written equivalently as the normalized equations

  D_r ū(t) + A_r ū(t) = f_r(t),  a.e. t ∈ [0,1],  (3.13)

and

  a_r u_r(t) + A_r u_r(t) = f_r(t),  a.e. t ∈ [0,1],  (3.14)

respectively. Provided a_r ≠ 0, equation (3.14) is well-posed and there exists a unique solution u_r ∈ L^p[0,1], 1 < p < ∞, that depends continuously on f ∈ L^p[0,1] [13]. For each r ∈ (0,R] for which a_r ≠ 0, (a_r I + A_r)^{−1} is a bounded linear operator on L^p[0,1] [13], so that we may represent the solution as

  u_r = (a_r I + A_r)^{−1} f_r.  (3.15)

We may also express u_r using the variation of constants formula in [13],

  u_r(t) = f_r(t)/a_r − ∫_0^t X_r(t−s) ( f_r(s)/a_r ) ds,  (3.16)

for a.e. t ∈ [0,1] and r ∈ (0,R] for which a_r ≠ 0. Exactly as in Chapter 2, for each such r > 0, the resolvent kernel X_r ∈ L^1[0,1] uniquely satisfies

  X_r(t) + ∫_0^t ( k_r(t−s)/a_r ) X_r(s) ds = k_r(t)/a_r,  (3.17)

for a.e. t ∈ [0,1]. As in the C[0,1] case, to guarantee well-posedness of equation (3.14) for all r ∈ (0,R], for some 0 < R
g R, we must ensure that a;- does not vanish on the interval (0, R]. The following lemma gives a condition under which this is true based on assumptions [A0] and [Al-p]. Lemma 3.2 Assume [:10] and [AI-pl are satisfied. I. Let (17" be as defined in (3.8). If]? and k satisfy < (n+1)! Ig(”) . .—, 3.1/3 C(0, R] _ CH. ' ( ) R l for some s > 1 and O < R S FE. then. NCO — — HCO 3 (3.19) for all r E (0. [1’]. 2. Let h. E Ch), 1 -l- [3’] and define 1]]le + [)’)(l'l‘)7-(/)) li,-(t) 2: A Ir 93 for all t E [0, 1]. Then 1‘ It —I, . =0. 3.20 7‘ {1:10 ]] lr IIILP]O,1] ( ) .3’. Let Ar be as defined in (3.10) for r E (0, R]. Then lim “Ar — 4]] = 0, (3.21) r —-> 0 where [II] is the operator norm. on B (L-p]0. 1]) , 1 < p < 00. Proof: 1. It was established in the proof of Lemma 2.2 that 1 Z [aw / / Vat I CO I/ Since pet/Ops” Mageqoi] it follows from (H3) of [Al-p] and the hypothesis of the lemma that .1/ +1 / /0plc (V) (Cg—(19(1)), (p WNW therefore the bounds on a;- may be established exactly as in the proof of 7.z/~l—o , < CVM‘COI] _ H. Lemi‘na. 2.2. 91 2. Since )1 is continuous, it follows from (H3) that for each I. E [0, 1], It" lh 1 is sufficiently large. Then the eigenvalues of 147' := .4 + til-7- haue negative real part for all r E (0. [1’], where 0 0 1 0 A 2: , k ——c0/Ol —c1/1! —cV _1/(1/ — 1)! j K 0 0 0 0 \ 0 0 0 0 Air : 1 K —7770.7‘ -—‘17)1_r —777.V__ 1‘7. ) and ' . _ ~ J "(1’) . ' “ _J ”(l/) TI“ -‘ Til], 7. — r";"r [7 A) h “Li—(J 1.)!(107 (p) . j! / f0 1.. (C) I"! (isdin (p) , forj=0.1,---,V—1. Further, for the matrix: ‘X’ r Br E /( (exp(."1v,~t)) ex1)(.47-))(‘lt, . ) 97 there exist positive constants L, K, and 5’ so that 1 Ir] _>_ 2L (rcTBrrr) 2 ) NIH (3.23) H lem 3 K (1TB). NJIH 9 Ir] S 3(1‘TB7'1‘) for all x E R” and all r E (0, R], where I] denotes the usual norm. on. IR”. Lemma 3.4 Assume the hypotheses of Lemma 3.3 hold. Then there exist constants C7 > 0 and M > 0, independent of r (but dependent on. 
k, I/, c0, c1, - -- ,cV), such that if < C, limp) C]0.1+ R] — then we have L1[0,1] - for all r E (0, R], where X,‘ is the resolvent defined in equation. (3.17). We have the following estimate on the size of (ml + .4,~)‘1]]. Corollary 3.1 Assume that R and k are such that Lemmas 3.3 and 3.4 are satisfied. Define til : inf {)1 I HXTHLHO 1] S )1. for all 7. E (0. R]}. (3.2-1) For each r E (0, H], 1 +11 (17‘ ((1.7‘] + 4473—1” S 98 where III] is the operator norm on B (U’lO, 1]) , 1 < p < 00. Proof: Representing ur using (3.16), we have for each r E (0, R] [13], fr fr .. = _ _ x. _ ll’ul ”LPN, 1] a". 7 * ar £14031] fr fr S ‘— + lerll — “r LPHLIJ Llhxl] aT Lpfitll g (1 + M) f—’ , . 07' U’lO, 1] Therefore representing u-r using (3.15), 1 + M far] + x“l7')— 1/7 =I-.. < .1 . LPuii] |“'l’llU’lllll— a, ”f’lbphlll and thus 1+M 0r (fl-r1 + ATl—l“ S for all r E (0, 1%]. El 3.1.3 1])-Convergence with A Priorz' Parameter Selection We obtain the following (:orwergence results in the case when '17 E Lplfl, 1 + R]. Given noisy data f6 E Lpl0,1 + till < p < 00, we provide an a priort rule for which approxunations converge to the true solution in U’lO. 1]. For purposes of obtaining a. rate, we take as our Source cornlition that 27. is uniformly Holder continuous with power a E (0, 1] and Holder constant La. 99 Theorem 3.1 Assume that [A 0] and [AI-p] hold and l: satisfies Lemma 3.4. 1. Let ur denote the solution of equation (3.14) for r E (0, R], for R sufficiently small. If the true solution. 17. E lJ’lO. 1 + R] then HUT - fill-LPN) 1] —> 0 (18 T —-> 0. Moreover, if the true solution {1 satisfies (2.43), then ll‘Ur —u|le[031]=-‘ O (ra) as r ——> 0. _Co Let 7/? denote the solution to equation (3.14) with fr replaced by [,0 for r E (0, It]. If the true solution a E L-plO, 1 + Lt], then with rp given in (H3), l < (1 6 + l' l i _— 'U ‘ — Us . Ll’lt), 1] — 171/ + Tp . 7 MK), 1] - o _ urw) — u for some ('1 Z 0, so that a. 
chm‘ce of 1(8)) satisfying i) rlo) —> 0 as 6 ——> 0, and ii) (5 [Thin-(V + Tp) —+ 0 as 5 —+ 0, 8 718117119 U r05) _ fillmlo. 1] _i 0 as 6 _i 0' 1 (l0 U17. satisfies (2.43), then 5 0, the choice rzray=Kfl“”+V+ufl gives 0 _ _.— = ‘O'/(l/+(1+Tp) __> O, "(0) U'llLP[().1] 0(1) ) as 6 O. (3.1)) 'lt Proof: Let ||-H = lllle[01]. 1. We bound the error due to regularization using (3.13), (3.14), (3.15), and (3.19) to obtain 111,. — 111 = (all + Arl—l [fr — aru 34,117] ’1 + M _ (——a-———) IIDrl—l — 077—!“ 1" h‘.(.‘ 1 -+- 1W " _ _(;()(_—1)_772 l1])7"ll. —a.r17H . (32‘) Notice that .l) p 1—>/ k(,o — s) [17(t + s) — 17(t)] ds E C(f), r], (l and recall the rosixmtiV’c definitions of or and 1.7;- in (3.8) and (3.12). Then by 101 (H3), we have llD-rfi- - arfill = l/\ |/\ |/\ |/\ l/\ .16 151717 — <> 11 + s> — 71>) 111111,,11) A1 I 7‘ 1 p Up :00 //| jpko >11-117177>—1-111>>111111111p>) 71) 1 . p ~ l/P — [1 H/ k( — s) [7(t + s) — u(t)] ds Cprapdt 7T . 0 . O ClOT] _ 1 p I, 1/1) C / sup / lc(,o — s) [170‘ + s) — 17(1)] ds dt 0 p E [0 r] l/p 1 c. 111+71>P , dt /0 7513,71” 11 H10 ,1171 >—71.11L_p,0’p] _ 1/ 1 7“ 1/[) Cr ,1; lit-”((01.1 /0 /0 I17(t+s) — 11(1) |7’ (1.7111 11/1) (//|ut—l—s—u( )l‘ldsdt) _ 1 1 7‘ . .~1/p Now using the Taylor expansion of k in (2.26) and the assumption that k and 102 It satisfy (3.18), we obtain llDrfi — art—til l/p w V- A V 1/10 S 61‘ :1—1)!+TV(II1:IICIO,ITI (Aliwt—Wfi(t)|pd8dt) _ V 2 11 '7' _ _ 7) up 3 Cr W (A) 7/0 Iu(t+s)-u(t)| 11.7111.) , (3.28) for all r E (0. R], for R > O sufficiently small. Substituting into (3.27), we have “-7 MCfl/(l _2__ 1(— P d llr 1II_ (fl_1),,/ C. 1)I(/011/0TI1( t.)+s 17(t)Ids t) .1 .,~ 1,11) 1 _ _ p : D1 / — / Iu(t + s) — u(t)I dsdt . 0 7‘. 0 for 01 > O constant. By Lebesgue differentiation theorem [10], 1/1) /\ 7.limO Tl/Or( I11t+s)—u(tIpds—0 for a.e. t E [0.1], but then for r sufficiently small, it follows that for a.e. t E [0,1], 1 "" w . 1 7/0 I17,(t+s) 4.1011213 g 21111711.,>|. 
103 Then by Lebesgue Dominated Convergence, 1 1 T l/p lim IIu.7,- -— 17H 3 lim [)1 / —/ I17(t + s) —17(t)Ip dsdt = 0. r —+ 0 r —-> 0 O 7‘ 0 13.29) If the true solution 17 satisfies (2.43) , then .r 7) LIZ/7.1+ op / 1717 + s) — 71t>1 77 s —. . 0 ' 1 + 07p and so 1 L211”) 1p “11, — 11II < D1(/ 11. (It) 2 O (10‘) as 7 —-> 0 0 1+ op 13.311) 2. Let u? denote the solution to equation (3.11) with fr replaced by ff} for r E (0, H]. 'We bound the error due to regularization and noise in the data using 104 (3.15), Corollary 3.1. (3.19), and (H3) to obtain 11$. — 117~ = (1171+ 1477-)—1(f70— fr) I (1+ 1!) s II11— 11 11/1) 1 1]) = Q—(/ It 1>| 71) (L7 11+.11)1 '1 '7‘ 5 .7’ Up = ‘ — / I / f (t+/))-f(t+pldn7-(p)I 77 (17‘ “fr .0 .0 1. _ 1/p 1 1 (l+lll)(.r0‘ Tl) / II (5 IIp < t -— l - . dt __ (LT '77“ 0 f ( + l f( + ) LPI0,7‘I (1+ ll) , _T (5 < —— '- 7) _ _ — (17‘ ( f /' 01+R] < (l-I-ll)h(0( _Tpd _ Il/(H. —— 1) 15 1,,1/ “J1— T‘p‘ for C1 > 0 constant. This establishes the bound on the total error for 17. E Ll’IO, 1] to be II11(rS — 17 |/\ IIu‘; — 117]] + I]11-,. — 17]] (5 S Cl—T1’+Tp + II11T—17II (3.32) If the true solution 17. satisfies (2.713) . then 11 - 1 ““1161” 361,. 105 and so for the choice 7' = 7(6) = [Gil/(a + V + Tip) for some K > 0, we have 6 c ———1 117-161" + ’19 |/\ u’r(6) _ u + C2 Iron“ _. 015(K61/(a + 1/ + Tp))—(l/ + Tp) + C2(I\/61/(a + I/ + Tp))a _ (716(1/(0‘ + 1/ + 7p) + (3260/01 +11 + Tp) for C1172 > 0 constants, and so = o (dd/(‘1' + V + 719)) as 5 —> 0. , 6 _ ,— “7(6) U 3.2 A Discrepancy Principle for Local Regularization Given j“5 E LPIO, 1], 1 < p < 00 3.2. 1 Preliminaries Before redefining our discrepancy principle, we assume that the choice of measures 777 satisfies the following C(‘mtinuity property on (0,R). Recall that there exists 106 ’Ul’r E [1(IIO, 7'] for which / g(pldnr(p)= / 9(0)¢"‘r(p)dp, 0 0 for all g E LPIO, r] and any 7‘ E (0.17]. we first embed LQIO, r] into LqIO, R] via. the zero extension. 
For all r E (0, l7] and 12’27- E [1(1I0, 7"], define the function 11} E IJQIO, [7’] such that. , yfi'r(p) a.e. p E [0,7‘], '1"1-r(/)) I: (333) 0 a.e. p E (r, R]. We make our final assumption. [A2] The measure 727“ (and thus ti’vr E IJqIO, r]) is chosen so that ~ 11.2,]. + h. — '¢~7~ —+ O as h. ——> 0, ”[0, I?) for all r E (0, f7). Our assumption implies the following. Lemma 3.5 Assume [A 0/, [AI-p], and [427 are satisfied. F07“ any 9 E LIPIO. 1 + 17.], define 7‘ . .(1(t+ {OWN/1) 97(t) :: f0 ’17“ 1 for allt E [0. 1]. Then the mapping T +—> gr 1's 0071.1111110'115 1n LPIO, 1]. for all r E (0, [7). Proof: Let IIII : H'HLPIO,1] and 7‘ E (0,R) be fixed. Let h. be such that 7' +11 E (OJ—1’). Without loss of generality, let 11. > 0. Recall that by (3.7), we have. 107 — = . Th 7r TOGO en 7' + h d' ( 7. g _ gr 2 f0 9<- + 1)) 1771+ h 1)) _ IO 91- + QIde-(p) T + h 77* + h '7'?“ 7' + h , ‘ = f0 91- + M»- + h

dp _ 1; 91- + pimpidp (7‘ + 11,)0c0 ‘I‘OCO ‘ + h ' < .6 91- + My + 1W1) - f6 91- + p)1»1p>dp _ (7' + h)0c0 + f5. 90 + p>1w~1pidp _ 16' 91- + p>wr1p>dp (r + h)(’c0 T060 ' Since 717' > 0 for all r > 0, the mapping 7“ 1——+ — is continuous on (0, R). Then ‘1‘7‘ hm .16 g(- + 911%pr _ f6‘9(~ + pl'Ui’r(/))dp = h —+ O (7‘ + (2)0170 r000 ’ and T + h 1' .rr . i lim ./0 g(- + may. ,1 11(1))011) - .lo 9(- + p)w(p)dp h —+ 0 (7‘ + h,)0c0 1 = lim T000 [1 —-+ 0 I 7‘ + h 7‘ . /0 g(- + ION/"7‘ + h(/))dp “/0 9(' + p)'¢’7‘(P)d/) - 108 Then for a.e. I E [0,1], 7' + ll, 7' f0 90 + m, + h,(p)dp - [0 90 + p)1/1r(p)dp 7‘ + 11 ~ 7" + h - f0 11(1 +10% + 11(1))(111- /0 90 + /))'c”17~(/))11/) 7‘ + h ~ ~ g /0 1911+ p>1 1 +110?) — ¢‘T(P)I dp s 11111111110311.1111 ll 1 111' Therefore by assumption [A2], r + h. ‘7‘ li1n f0 9(- + 1111174 hlPldP - jg 9(- + 1711171111111 h ——> 0 (7‘ + h)0c0 < 1 » I l — 1' 1] 7 = 0‘ and so 1' I — ~ = 0, Ill—TO gr+h g; LpIO,1] for all g E Lp]0,1+ I7]. U Lemma 3.6 Let 17’; E C‘IO. 1] be such. that 11hr E CID, r] is given. by for all ,0 E I0, 7‘] and 7‘ E (0. R]. Then for any 9 E LPIO, 7'] and 7‘ E (0. R], the measure 109 nr defined by / g(p)dnr(p)= / 9(1))111'r(p)d/), 0 0 satisfies assumption [A2]. Proof: Let r E (0.17) be fixed, 11 E (0. I7.) such that r + h E (O, R). Without loss of generality, let. h > 0. Then for all p E [0, R], 1.1% + Mp) — 1,19,.(p), if p E [0,711 1b,. + 110)) _ 157(9) = 11),. _+_ h(,0), if p E (7‘, 7‘ + h], 0, if p E (7‘ + h,1—{]. Then ~ 111310 I] "31‘ + h ‘ "’7‘ R = lirn / h. —+ 0 0 7. = lim / h —> 0 0 7. = lim / h ——> 0 0 =0, 19,011] q 1/q L1,,_ + l).(p) ‘ 11’7‘(/))I (1p) ,-, p ' T + h 1,:( iii) _ 1,1,1 (§)qup+/T7~+h .1 1(1 p _ 1,. 8 .. q 7,, ("r + h) u (T) (lp+(r+l1)L/(T +11)“, (p)| dp) q 1/q 11/) l (lq I fl which follows from the continuity of 1;". Therefore lb ’ O 0. [100.17.] = 110 3.2.2 Definition and Properties We assume that we are given data f 0 E LP [0, 1 + R] that is a version of the true data f E LPIO, 1 + R] that contains noise. 
We restate our assumptions and modify [A0] to be:

[A0$'$] Let $0 < \bar R \ll 1$ be such that $\bar u \in L^p[0,1+\bar R]$ and $f \in \mathcal{R}(A) \subseteq L^p[0,1+\bar R]$, so that $A\bar u = f$ for a.e. $t$ on the interval $[0,1+\bar R]$. Assume that $\|f\|_{L^p[0,1+\bar R]} \neq 0$, i.e., that data $f(t)$ is available for a.e. $t \in [0,1+\bar R]$, and that $f^\delta \in L^p[0,1+\bar R]$ satisfies $\|f - f^\delta\|_{L^p[0,1+\bar R]} \le \delta$ and $\|f^\delta\|_{L^p[0,1]} > (\tau+1)\delta$, with $\tau \in (1,2)$ fixed for all $\delta$. If additional data is unavailable, we make do with approximating $\bar u$ on the slightly smaller interval $[0,1-\bar R]$. The measure $\eta_r$ satisfies hypotheses (H1)--(H3) with $\gamma_p > 0$ for all $r \in (0,\bar R]$. The measure $\eta_r$ (and thus $\psi_r \in L^q[0,r]$) is chosen so that $\|\psi_{r+h} - \psi_r\|_{L^q[0,\bar R]} \to 0$ as $h \to 0$, for all $r \in (0,\bar R)$, for $\psi_r$ defined in (3.33).

Henceforth we will assume that $\bar R$ and $k$ satisfy the conditions of Lemmas 3.3 and 3.4.

Definition 3.1 (Discrepancy Principle for Local Regularization in $L^p[0,1]$) Assume that the conditions [A0$'$], [A1-p], and [A2] are satisfied. Let $d : (0,\bar R) \to [0,\infty)$ be the discrepancy functional defined by
\[ d(r) := a_r^m\,\|A_r u_r^\delta - f_r^\delta\|_{L^p[0,1]}, \tag{3.34} \]
for $m \in (0,1]$ fixed. Choose the regularization parameter $r$ so that
\[ a_r^m\,\|A_r u_r^\delta - f_r^\delta\|_{L^p[0,1]} = \tau\delta. \tag{3.35} \]

We now show existence of an $r \in (0,\bar R)$ for which $d(r) = \tau\delta$, once we establish some properties of $d$. To do so, we use the bounds on $\|u_r^\delta\|_{L^p[0,1]}$ in the following lemma.

Lemma 3.7 Assume [A0$'$] and [A1-p] are satisfied. For any $r \in (0,\bar R]$,
\[ \frac{\|f_r^\delta\|_{L^p[0,1]}}{a_r + C\|k\|_{C[0,1+\bar R]}} \;\le\; \|u_r^\delta\|_{L^p[0,1]} \;\le\; \frac{\|f_r^\delta\|_{L^p[0,1]}}{a_r}, \tag{3.36} \]
where $u_r^\delta$ is the solution of (3.14).

Proof: Since $u_r^\delta$ satisfies $(a_r I + A_r)u_r^\delta = f_r^\delta$, for any $r \in (0,\bar R]$,
\[ \|f_r^\delta\|_{L^p[0,1]} = \|(a_r I + A_r)u_r^\delta\|_{L^p[0,1]} \le \big( a_r + \|A_r\| \big)\,\|u_r^\delta\|_{L^p[0,1]}. \]
Let $x \in L^p[0,1]$, $1 < p < \infty$, with $\frac1p + \frac1q = 1$ and $\|x\|_{L^p[0,1]} = 1$. Recalling the notation in (3.9),
\[ \|A_r x\|_{L^p[0,1]} = \Big\| \int_0^t k_r(t-s)\,x(s)\,ds \Big\|_{L^p[0,1]} \le \|k_r\|_{L^q[0,1]}\,\|x\|_{L^p[0,1]} \le C\,\|k\|_{C[0,1+\bar R]}. \tag{3.37} \]
Therefore $\|f_r^\delta\|_{L^p[0,1]} \le \big( a_r + C\|k\|_{C[0,1+\bar R]} \big)\|u_r^\delta\|_{L^p[0,1]}$, which is the lower bound in (3.36). To establish the upper bound, we represent $u_r^\delta$ using (3.15) and the bound on the operator norm in Corollary 3.1 to obtain
\[ \|u_r^\delta\|_{L^p[0,1]} = \|(a_r I + A_r)^{-1} f_r^\delta\|_{L^p[0,1]} \le \frac{\|f_r^\delta\|_{L^p[0,1]}}{a_r} \]
for all $r \in (0,\bar R]$. $\Box$

Lemma 3.8 Assume that [A0$'$], [A1-p], and [A2] hold with $0 < \gamma_p < \nu m$ in hypothesis (H3). The function $d : (0,\bar R) \to (0,\infty)$ defined as in (3.34) has the following properties:

i) The mapping $r \mapsto d(r)$ is continuous on $(0,\bar R)$.

ii) $\lim_{r\to 0} d(r) = 0$.

iii) There exist $\tilde R = \tilde R(f,\bar R)$, $\bar\delta = \bar\delta(f,k,\bar R)$, and $\tilde\gamma = \tilde\gamma(f,k,\bar R)$ such that if $\delta \in (0,\bar\delta)$ or $\|f^\delta\|_{L^p[0,1]} > \tilde\gamma\delta$, then $\lim_{r\to\tilde R} d(r) > \tau\delta$.

Therefore, for $\delta$ sufficiently small or $\|f^\delta\|_{L^p[0,1]}/\delta$ sufficiently large, there exists $r \in (0,\tilde R)$ such that $d(r) = \tau\delta$.

Proof: Let $\|\cdot\| = \|\cdot\|_{L^p[0,1]}$.

i) Fix $r \in (0,\bar R)$ and let $h$ be such that $r+h \in (0,\bar R)$. From the definition of $u_r^\delta$ we have $A_r u_r^\delta - f_r^\delta = -a_r u_r^\delta$, so that $d(r) = a_r^{1+m}\|u_r^\delta\|$, and by Lemma 3.2 we again have $a_r > 0$ for all $r \in (0,\bar R]$; thus
\[ |d(r+h) - d(r)| = \big| a_{r+h}^{1+m}\|u_{r+h}^\delta\| - a_r^{1+m}\|u_r^\delta\| \big| \le a_{r+h}^{1+m}\,\big| \|u_{r+h}^\delta\| - \|u_r^\delta\| \big| + \big| a_{r+h}^{1+m} - a_r^{1+m} \big|\,\|u_r^\delta\|. \]
Since $r$ is fixed, we may use (3.36) to bound $\|u_r^\delta\|$ and obtain
\[ |d(r+h) - d(r)| \le a_{r+h}^{1+m}\,\|u_{r+h}^\delta - u_r^\delta\| + \big| a_{r+h}^{1+m} - a_r^{1+m} \big|\,\frac{\|f_r^\delta\|}{a_r}. \]
To show that $r \mapsto a_r$ is continuous on $(0,\bar R)$, we note that
\[ a_{r+h} - a_r = \int_0^{\bar R} \Big( \int_0^\rho k(s)\,ds \Big)\,\big( d\eta_{r+h}(\rho) - d\eta_r(\rho) \big) \]
(where we assume without loss of generality that $h > 0$), and use H\"older's inequality and [A2]. Therefore $a_{r+h}^{1+m} - a_r^{1+m} \to 0$ as $h \to 0$, and thus
\[ \lim_{h\to 0} |d(r+h) - d(r)| \le a_r^{1+m}\,\limsup_{h\to 0}\,\|u_{r+h}^\delta - u_r^\delta\|. \tag{3.38} \]
From (3.16), we have
\[ u_{r+h}^\delta - u_r^\delta = \Big( \frac{f_{r+h}^\delta}{a_{r+h}} - \frac{f_r^\delta}{a_r} \Big) - \Big( \chi_{r+h} * \frac{f_{r+h}^\delta}{a_{r+h}} - \chi_r * \frac{f_r^\delta}{a_r} \Big). \]
Define for all $t \in [0,1]$,
\[ \chi_h(t) := \chi_{r+h}(t) - \chi_r(t), \qquad k_h(t) := \frac{k_{r+h}(t)}{a_{r+h}} - \frac{k_r(t)}{a_r}, \qquad f_h^\delta := \frac{f_{r+h}^\delta}{a_{r+h}} - \frac{f_r^\delta}{a_r}. \]
Using the representation of $u_r^\delta$ in (3.16) and the above notation exactly as in Lemma 2.8, we have
\[ \|u_{r+h}^\delta - u_r^\delta\| \le \|f_h^\delta\| + \Big\| \chi_h * \frac{f_{r+h}^\delta}{a_{r+h}} \Big\| + \|\chi_r * f_h^\delta\| \le \|f_h^\delta\| + \|\chi_h\|_{L^1[0,1]}\,\Big\| \frac{f_{r+h}^\delta}{a_{r+h}} \Big\| + \|\chi_r\|_{L^1[0,1]}\,\|f_h^\delta\|. \]
Using the fact that $r \mapsto a_r$ is continuous on $(0,\bar R)$ and Lemma 3.5, we have that $\|f_{r+h}^\delta/a_{r+h}\|$ remains bounded as $h \to 0$ and
\[ \|f_h^\delta\| = \Big\| \frac{f_{r+h}^\delta}{a_{r+h}} - \frac{f_r^\delta}{a_r} \Big\| \to 0 \quad \text{as } h \to 0. \]
Therefore
\[ \limsup_{h\to 0}\,\|u_{r+h}^\delta - u_r^\delta\| \le C\,\limsup_{h\to 0}\,\|\chi_h\|_{L^1[0,1]} \]
and
\[ \lim_{h\to 0} |d(r+h) - d(r)| \le a_r^{1+m}\,C\,\limsup_{h\to 0}\,\|\chi_h\|_{L^1[0,1]}. \]
It remains to prove that $\limsup_{h\to 0}\|\chi_h\|_{L^1[0,1]} = 0$; this was already shown in Lemma 2.8, based on the continuity of the maps $r \mapsto k_r$ and $r \mapsto a_r$, and $a_r > 0$, to argue that $\|k_h\|_{C[0,1]} \to 0$ as $h \to 0$. This still holds under the assumptions of this lemma. Therefore $\lim_{h\to 0}|d(r+h) - d(r)| = 0$, proving that $r \mapsto d(r)$ is continuous on $(0,\bar R)$.

ii) Using the fact that $a_r > 0$, (H3) to bound $\|f_r^\delta\|$, and the upper bounds on $a_r$ in Lemma 3.2 and on $u_r^\delta$ in Lemma 3.7,
\[ d(r) = a_r^m\,\|A_r u_r^\delta - f_r^\delta\| = a_r^{1+m}\,\|u_r^\delta\| \tag{3.39} \]
\[ \le a_r^{1+m}\,\frac{\|f_r^\delta\|}{a_r} = a_r^m\,\|f_r^\delta\| \le (K c_0)^m\,r^{\nu m}\,(1+M)\,C\,r^{-\gamma_p}\,\|f^\delta\|_{L^p[0,1+\bar R]}. \]
Since $m \in (0,1]$ and $\nu m > \gamma_p > 0$, it follows that $\lim_{r\to 0} d(r) = 0$.

iii) Define $\tilde R = \tilde R(f,\bar R) \in (0,\bar R]$ as in (3.40), chosen small enough that for all $r \in (0,\tilde R]$,
\[ \|f_r^\delta\| \ge \frac{\|f^\delta\|}{2} - \Big( C r^{-\gamma_p} + \frac32 \Big)\delta. \tag{3.41} \]
For all $r \in (0,\tilde R]$, define
\[ B(r) := \frac{a_r^{1+m}}{a_r + C\,\|k\|_{C[0,1+\bar R]}}. \tag{3.42} \]
Then using the lower bound on $u_r^\delta$ in Lemma 3.7 and (3.41), we have for all $r \in (0,\tilde R]$,
\[ d(r) = a_r^{1+m}\,\|u_r^\delta\| \ge B(r)\,\|f_r^\delta\| \ge B(r)\Big[ \frac{\|f^\delta\|}{2} - \Big( C r^{-\gamma_p} + \frac32 \Big)\delta \Big]. \tag{3.43} \]
For all $r \in (0,\tilde R]$, define
\[ F(r) := \frac{2}{B(r)}\Big[ \tau + B(r)\Big( C r^{-\gamma_p} + \frac32 \Big) \Big]. \tag{3.44} \]
Define
\[ \bar\delta := \frac{\|f\|}{1 + F(\tilde R)} \tag{3.45} \]
and
\[ \tilde\gamma := F(\tilde R). \tag{3.46} \]
By [A0$'$],
\[ \|f\| \le \|f^\delta\| + \|f - f^\delta\| \le \|f^\delta\| + \delta \qquad \text{and} \qquad \|f^\delta\| \le \|f\| + \|f - f^\delta\| \le \|f\| + \delta, \]
and $\delta \in (0,\bar\delta)$ implies
\[ \|f^\delta\| \ge \|f\| - \delta > 0. \tag{3.47} \]
Then, for $\delta \in (0,\bar\delta)$,
\[ \frac{\delta}{\|f^\delta\|} \le \frac{\delta}{\|f\| - \delta} < \frac{\bar\delta}{\|f\| - \bar\delta} = \frac{\|f\|/(1+F(\tilde R))}{\|f\|\,F(\tilde R)/(1+F(\tilde R))} = \frac{1}{F(\tilde R)} = \frac{1}{\tilde\gamma}. \]
Equivalently,
\[ \|f^\delta\| > \tilde\gamma\,\delta. \tag{3.48} \]
Alternatively, if $\delta \notin (0,\bar\delta)$, the assumption that $\|f^\delta\|_{L^p[0,1]} > \tilde\gamma\delta$ implies that (3.48) still holds. Then substituting (3.44) and (3.48) into the lower bound on $d$ in (3.43) and taking the limit as $r$ approaches $\tilde R$, we have that
\[ \lim_{r\to\tilde R} d(r) \ge B(\tilde R)\Big[ \frac{\|f^\delta\|}{2} - \Big( C\tilde R^{-\gamma_p} + \frac32 \Big)\delta \Big] > B(\tilde R)\Big[ \frac{\tilde\gamma\,\delta}{2} - \Big( C\tilde R^{-\gamma_p} + \frac32 \Big)\delta \Big] = \tau\delta. \quad \Box \]

As before, we still have a lower bound on the choice of the regularization parameter as a function of $\delta$.

Lemma 3.9 Let $\delta \in (0,\bar\delta)$ or $\|f^\delta\| > \tilde\gamma\delta$, where $\bar\delta, \tilde\gamma > 0$ are as given in Lemma 3.8. Let $r = r(\delta)$ be defined by
\[ r(\delta) := \min\{ r \in (0,\tilde R) \mid d(r) = \tau\delta \}. \tag{3.49} \]
If $\nu m > \gamma_p$, then there exists an $r^* = r^*(\delta) > 0$ such that $r(\delta) \ge r^* > 0$, where $r^* \in (0,\tilde R)$ is given by
\[ r^* := \Big( \frac{\tau\delta}{\tilde c} \Big)^{1/(\nu m - \gamma_p)}, \]
with
\[ \tilde c := (K c_0)^m\,(1+M)\,C\,\|f^\delta\|_{L^p[0,1+\bar R]} > 0. \]

Proof: We first observe that $r(\delta)$ is well-defined, since the set $\{ r \in (0,\tilde R) \mid d(r) = \tau\delta \}$ is compact and thus has a minimum value. Note that
\[ d(r) \le a_r^m\,\|f_r^\delta\|_{L^p[0,1]} \le (K c_0)^m\,r^{\nu m}\,(1+M)\,C\,r^{-\gamma_p}\,\|f^\delta\|_{L^p[0,1+\bar R]} = \tilde c\,r^{\nu m - \gamma_p} \]
by Lemma 3.2 and Corollary 3.1. Since $r \mapsto \tilde c\,r^{\nu m - \gamma_p}$ is a continuous, strictly increasing function that bounds $d$ from above for all $r \in (0,\tilde R)$, then
\[ \lim_{r\to\tilde R} \tilde c\,r^{\nu m - \gamma_p} \ge \lim_{r\to\tilde R} d(r) > \tau\delta. \]
Therefore there exists a unique $r^*(\delta) \in (0,\tilde R]$ for which $\tilde c\,(r^*)^{\nu m - \gamma_p} = \tau\delta$, and so for any $r(\delta) \in (0,\tilde R]$ for which $d(r(\delta)) = \tau\delta$, we have necessarily that $r(\delta) \ge r^* > 0$. Further,
\[ r^* = \Big( \frac{\tau}{(K c_0)^m\,(1+M)\,C\,\|f^\delta\|_{L^p[0,1+\bar R]}} \Big)^{1/(\nu m - \gamma_p)}\,\delta^{1/(\nu m - \gamma_p)}. \quad \Box \]

3.2.3 $L^p$-Convergence

We again make more definite the choice of the regularization parameter $r$ given by our discrepancy principle for the case $f^\delta \in L^p[0,1]$, $1 < p < \infty$.
Definition 3.2 (Discrepancy Principle for Local Regularization in $L^p[0,1]$) Let $d : (0,\tilde R) \to [0,\infty)$ be the discrepancy functional defined by
\[ d(r) := a_r^m\,\|A_r u_r^\delta - f_r^\delta\|_{L^p[0,1]}, \tag{3.50} \]
for $m \in (0,1]$ fixed, with $\nu m > \gamma_p > 0$. Choose the regularization parameter $r = r(\delta)$ to be the smallest $r \in (0,\tilde R)$ so that
\[ a_r^m\,\|A_r u_r^\delta - f_r^\delta\|_{L^p[0,1]} = \tau\delta. \tag{3.51} \]

Remark 3.2 Any $r \in (0,\tilde R)$ satisfying (3.51) would be acceptable.

We prove that local regularization with the discrepancy principle defined via equation (3.51) is a convergent regularization method for $f^\delta \in L^p[0,1]$. For purposes of obtaining a rate of convergence, we make the usual smoothness assumption that $\bar u$ is uniformly H\"older continuous with power $\alpha \in (0,1]$ and H\"older constant $L_{\bar u}$.

Theorem 3.2 Assume that [A0$'$], [A1-p], and [A2] hold, and let $\bar\delta, \tilde\gamma > 0$ be given as in Lemma 3.8. For $\delta \in (0,\bar\delta)$ or $\|f^\delta\| > \tilde\gamma\delta$, let $u_r^\delta$ denote the solution to equation (3.14) with $f_r$ replaced by $f_r^\delta$. Then for $r(\delta)$ selected according to the discrepancy principle in Definition 3.2, we have:

1. $r(\delta) \to 0$ as $\delta \to 0$.

2. $\|u_{r(\delta)}^\delta - \bar u\|_{L^p[0,1]} \to 0$ as $\delta \to 0$.

3. If $\bar u$ satisfies the condition (2.43), then for $\gamma_p > 0$ as defined in (H3),
\[ \|u_{r(\delta)}^\delta - \bar u\|_{L^p[0,1]} \le C_1\,\frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} + C_2\,[r(\delta)]^\alpha, \]
and hence
\[ \|u_{r(\delta)}^\delta - \bar u\|_{L^p[0,1]} = O\big( \delta^{(\nu m - \gamma_p)/(\nu(1+m))} \big) + O\big( \delta^{\alpha/(\nu(1+m))} \big) \quad \text{as } \delta \to 0. \]
Thus the rate of convergence is determined by $\min\{\alpha,\ \nu m - \gamma_p\}$. If $\omega = \min\{\alpha,\ \nu m - \gamma_p\}$, then
\[ \|u_{r(\delta)}^\delta - \bar u\|_{L^p[0,1]} = O\big( \delta^{\omega/(\nu(1+m))} \big) \quad \text{as } \delta \to 0. \]
Moreover, if the choice of $m$ is such that $m = \dfrac{\alpha+\gamma_p}{\nu}$, then
\[ \|u_{r(\delta)}^\delta - \bar u\|_{L^p[0,1]} = O\big( \delta^{\alpha/(\alpha+\nu+\gamma_p)} \big) \quad \text{as } \delta \to 0, \]
which is the same rate of convergence as obtained in Theorem 3.1 using the a priori rule to select the parameter $r = r(\delta)$.

Proof: Let $\|\cdot\| = \|\cdot\|_{L^p[0,1]}$.

1. Let $\{\delta_n\}_{n\ge 1}$ be a positive sequence for which $\delta_n \to 0$ as $n \to \infty$, with $\delta_n \in (0,\bar\delta)$ or $\|f^{\delta_n}\| > \tilde\gamma\delta_n$ for each $n$, and $\|f - f^{\delta_n}\|_{L^p[0,1+\bar R]} \le \delta_n$ for each $n$.
Let $\{r_n\}_{n\ge 1}$ be the corresponding sequence of regularization parameter values selected according to the discrepancy principle for local regularization given in Definition 3.2, namely, for each $n$,
\[ r_n = r(\delta_n) = \min\{ r \in (0,\tilde R) \mid d(r) = \tau\delta_n \}. \]
Using the lower bound on $d$ in (3.43), we have that
\[ \tau\delta_n = d(r_n) \ge B(r_n)\Big[ \frac{\|f^{\delta_n}\|}{2} - \Big( C r_n^{-\gamma_p} + \frac32 \Big)\delta_n \Big] \ge B(r_n)\Big[ \frac{\|f\| - \delta_n}{2} - \Big( C r_n^{-\gamma_p} + \frac32 \Big)\delta_n \Big] = B(r_n)\Big[ \frac{\|f\|}{2} - \big( C r_n^{-\gamma_p} + 2 \big)\delta_n \Big] \tag{3.52} \]
and so
\[ \Big[ \tau + B(r_n)\big( 2 + C r_n^{-\gamma_p} \big) \Big]\,\delta_n \ge B(r_n)\,\frac{\|f\|}{2}. \]
Then
\[ 0 = \lim_{n\to\infty} \delta_n \ge \liminf_{n\to\infty} \frac{B(r_n)\,\|f\|}{2\big[ \tau + B(r_n)( 2 + C r_n^{-\gamma_p}) \big]}, \]
and by (3.47), $\|f\| > 0$; therefore
\[ \lim_{n\to\infty} \frac{B(r_n)}{\tau + B(r_n)\big( 2 + C r_n^{-\gamma_p} \big)} = 0. \tag{3.53} \]
Equation (3.53) can only be true if $\lim_{n\to\infty} B(r_n) = 0$ or if $\lim_{n\to\infty}\big( C r_n^{-\gamma_p} + 2 \big) = \infty$.

If $\lim_{n\to\infty} B(r_n) = 0$, then recalling the definition of $B(r)$ in (3.42) and using both bounds on $a_r$ in Lemma 3.2, we have that
\[ 0 = \lim_{n\to\infty} B(r_n) \ge \lim_{n\to\infty} \frac{(K^{-1} c_0\,r_n^\nu)^{1+m}}{K c_0\,\tilde R^\nu + C\,\|k\|_{C[0,1+\bar R]}} = D\,\lim_{n\to\infty} r_n^{\nu(1+m)} \]
for a constant $D > 0$, and we conclude that $\lim_{n\to\infty} r_n = 0$.

If $\lim_{n\to\infty}\big( C r_n^{-\gamma_p} + 2 \big) = \infty$, then since $\gamma_p > 0$, we again conclude that $\lim_{n\to\infty} r_n = 0$.

2. Let $r$ be chosen according to the discrepancy principle for local regularization given in Definition 3.2, i.e., $r = r(\delta) = \min\{ r \in (0,\tilde R) \mid d(r) = \tau\delta \}$. Using (3.39), our choice of $r = r(\delta)$ is such that
\[ a_{r(\delta)}^{1+m}\,\|u_{r(\delta)}^\delta\| = \tau\delta, \]
and so, representing $u_{r(\delta)}^\delta$ as in (3.15) and substituting into (3.31), we obtain
\[ \|u_{r(\delta)} - u_{r(\delta)}^\delta\| \le C_1\,\frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} \le \frac{C_1\,(K c_0)^{1+m}}{\tau}\,[r(\delta)]^{\nu m - \gamma_p}\,\|u_{r(\delta)}^\delta\| = \tilde C_1\,[r(\delta)]^{\nu m - \gamma_p}\,\|u_{r(\delta)}^\delta\| \]
for a constant $\tilde C_1 > 0$. Using Theorem 3.1 and the first part of this theorem, in which it was proved that $\lim_{\delta\to 0} r(\delta) = 0$, we have
\[ \|u_{r(\delta)}\| \le \|u_{r(\delta)} - \bar u\| + \|\bar u\| \le \frac32\,\|\bar u\| \]
for $\delta > 0$ sufficiently small. Therefore
\[ \|u_{r(\delta)}^\delta\| \le \|u_{r(\delta)} - u_{r(\delta)}^\delta\| + \|u_{r(\delta)}\| \le \tilde C_1\,[r(\delta)]^{\nu m - \gamma_p}\,\|u_{r(\delta)}^\delta\| + \frac32\,\|\bar u\|, \]
and since $\nu m > \gamma_p > 0$,
\[ \limsup_{\delta\to 0}\,\|u_{r(\delta)}^\delta\| < \infty. \tag{3.54} \]
Substituting the principle in for $\delta$ into (3.32), we have
\[ \|u_{r(\delta)}^\delta - \bar u\| \le C_1\,\frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} + \|u_{r(\delta)} - \bar u\| \le \tilde C_1\,[r(\delta)]^{\nu m - \gamma_p}\,\|u_{r(\delta)}^\delta\| + \|u_{r(\delta)} - \bar u\|. \]
Then by (3.54), part 1 of this theorem, and $\nu m > \gamma_p > 0$, it follows that
\[ \lim_{\delta\to 0}\Big( \tilde C_1\,[r(\delta)]^{\nu m - \gamma_p}\,\|u_{r(\delta)}^\delta\| + \|u_{r(\delta)} - \bar u\| \Big) = 0, \]
proving that $u_{r(\delta)}^\delta$ converges to $\bar u$ in $L^p[0,1]$ as $\delta \to 0$.

3. Returning to the bound on the total error in (3.32), we have
\[ \|u_{r(\delta)}^\delta - \bar u\| \le C_1\,\frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} + C_2\,[r(\delta)]^\alpha. \]
To obtain a rate of convergence, it remains to bound $[r(\delta)]^\alpha$ and $\delta/[r(\delta)]^{\nu+\gamma_p}$ in terms of $\delta$ using our rule. First we bound $[r(\delta)]^\alpha$ in terms of $\delta$. Bounding $B(r)$ above using the upper bound on $a_r$, we obtain
\[ B(r) \le a_r^{1+m}\,a_r^{-1} = a_r^m \le (K c_0)^m\,r^{\nu m} \]
for all $r \in (0,\tilde R]$. Therefore, using the lower bound on $d$ in (3.52), we have
\[ \tau\delta = d(r(\delta)) \ge B(r(\delta))\,\frac{\|f\|}{2} - B(r(\delta))\big( 2 + C\,[r(\delta)]^{-\gamma_p} \big)\delta \ge B(r(\delta))\,\frac{\|f\|}{2} - (K c_0)^m\big( 2\tilde R^{\nu m} + C\,\tilde R^{\nu m - \gamma_p} \big)\delta, \]
since $\nu m > \gamma_p$. Thus
\[ (\tau + E_1)\,\delta \ge B(r(\delta))\,\frac{\|f\|}{2} \tag{3.55} \]
for a constant $E_1 > 0$. As before, using both bounds on $a_r$ in Lemma 3.2 to bound $B(r)$ in (3.42), we obtain
\[ B(r(\delta)) \ge \frac{(K^{-1} c_0)^{1+m}\,[r(\delta)]^{\nu(1+m)}}{K c_0\,\tilde R^\nu + C\,\|k\|_{C[0,1+\bar R]}} \ge E_2\,[r(\delta)]^{\nu(1+m)} \tag{3.56} \]
for a constant $E_2 > 0$. Combining (3.55) and (3.56), we have
\[ \frac{2(\tau + E_1)}{E_2\,\|f\|}\,\delta \ge [r(\delta)]^{\nu(1+m)}, \]
and so, raising both sides to the power $\alpha/(\nu(1+m))$,
\[ [r(\delta)]^\alpha \le \tilde E\,\delta^{\alpha/(\nu(1+m))} \tag{3.57} \]
for a constant $\tilde E > 0$.

Next we obtain the bound for $\delta/[r(\delta)]^{\nu+\gamma_p}$ in terms of $\delta$. Bounding $d$ above using the inequality in (3.39), Lemma 3.2, and [A0$'$], we have
\[ \tau\delta = a_{r(\delta)}^{1+m}\,\|u_{r(\delta)}^\delta\| \le a_{r(\delta)}^{1+m}\big( \|u_{r(\delta)}^\delta - u_{r(\delta)}\| + \|u_{r(\delta)}\| \big) \le a_{r(\delta)}^m\,(1+M)\,C\,[r(\delta)]^{-\gamma_p}\,\delta + a_{r(\delta)}^{1+m}\big( D_2\,\tilde R^\alpha + \|\bar u\| \big) \]
\[ \le (K c_0)^m\,(1+M)\,C\,[r(\delta)]^{\nu m - \gamma_p}\,\delta + (K c_0)^{1+m}\big( D_2\,\tilde R^\alpha + \|\bar u\| \big)\,[r(\delta)]^{\nu(1+m)} = G_1\,[r(\delta)]^{\nu m - \gamma_p}\,\delta + G_2\,[r(\delta)]^{\nu(1+m)} \]
for constants $G_1 > 0$ and $G_2 > 0$. Since
\[ \tau - G_1\,[r(\delta)]^{\nu m - \gamma_p} \ge \tau - G_1\,\tilde R^{\nu m - \gamma_p} > 0 \]
for $r \in (0,\tilde R]$, for $\tilde R$ sufficiently small, we have that
\[ \big( \tau - G_1\,\tilde R^{\nu m - \gamma_p} \big)\,\delta \le G_2\,[r(\delta)]^{\nu(1+m)} \]
for $\delta > 0$ sufficiently small. Then
\[ \frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} \le \tilde G\,\delta^{1 - (\nu+\gamma_p)/(\nu(1+m))} = \tilde G\,\delta^{(\nu m - \gamma_p)/(\nu(1+m))} \tag{3.58} \]
for a constant $\tilde G > 0$. Substituting (3.57) and (3.58) into (3.32), we have
\[ \|u_{r(\delta)}^\delta - \bar u\| \le C_1\,\frac{\delta}{[r(\delta)]^{\nu+\gamma_p}} + C_2\,[r(\delta)]^\alpha \le \tilde C_1\,\delta^{(\nu m - \gamma_p)/(\nu(1+m))} + \tilde C_2\,\delta^{\alpha/(\nu(1+m))}, \]
and so
\[ \|u_{r(\delta)}^\delta - \bar u\| = O\big( \delta^{(\nu m - \gamma_p)/(\nu(1+m))} \big) + O\big( \delta^{\alpha/(\nu(1+m))} \big) \quad \text{as } \delta \to 0. \]
If $\omega = \min\{\alpha,\ \nu m - \gamma_p\}$, then
\[ \|u_{r(\delta)}^\delta - \bar u\| = O\big( \delta^{\omega/(\nu(1+m))} \big) \quad \text{as } \delta \to 0. \]
Then taking $m = \dfrac{\alpha+\gamma_p}{\nu}$, so that $\nu m - \gamma_p = \alpha$ and $\nu(1+m) = \alpha+\nu+\gamma_p$, it follows that
\[ \|u_{r(\delta)}^\delta - \bar u\| = O\big( \delta^{\alpha/(\alpha+\nu+\gamma_p)} \big) \quad \text{as } \delta \to 0. \quad \Box \]

CHAPTER 4

Discretization and Numerical Results

We illustrate the practical use of the theory developed in Chapters 2 and 3 with a few numerical examples. In each of the following cases, we plot the true $\bar u$ (dashed line) versus the approximation $u^\delta$ (solid line). We selected the true solution $\bar u$ for each case and found the true data $f$ by computing $A\bar u$ exactly. We then discretized the data and added uniformly distributed random error to generate the vector $f^\delta$. We take a fixed $\tau \in (1,2)$ for our discrepancy principle and take the value of $\delta$ to be $\|f - f^\delta\|_{C[0,1+\bar R]}$ in Examples 4.1--4.3 and $\|f - f^\delta\|_{L^2[0,1+\bar R]}$ in Example 4.4.

4.1 One-Smoothing Problem, Continuous Measure

Example 4.1 Let $k(t) = 1$, $\bar u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$, and $\eta_r$ a continuous measure as defined in Lemma 2.1, where $\psi$ is a first-degree polynomial with $p_\psi(\lambda) = \lambda + c$. For the discretization $n = 600$, $\bar R = 0.583$, and $m = 0.001$ in our discrepancy principle, we have the following results:

$\delta/\|f\|_{C[0,1+\bar R]}$ | $\delta$ | $r(\delta)$ | $\|\bar u - u^\delta_{r(\delta)}\|_{C[0,1]}$ | ratio of successive errors
0.0500 | 0.0150 | 0.35 | 0.8040 | --
0.0250 | 0.0233 | 0.25 | 0.3654 | 0.454
0.0125 | 0.0126 | 0.183 | 0.1954 | 0.535
0.007 | 0.0071 | 0.133 | 0.1021 | 0.523

Table 4.1. Example 4.1 Error Analysis

Based on the values of $m$, $\alpha$, and $\nu$, the ratios in the last column are predicted by Theorem 2.2 to be approximately $(\frac12)^{0.000999} = 0.999$. See graphical illustrations in Figures 4.1--4.4 below.

Figure 4.1.
One-Smoothing Problem with Continuous Measure given in Example 4.1 with 5% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.35$.

Figure 4.2. One-Smoothing Problem with Continuous Measure given in Example 4.1 with 2.5% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.25$.

Figure 4.3. One-Smoothing Problem with Continuous Measure given in Example 4.1 with 1.25% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.183$.

Figure 4.4. One-Smoothing Problem given in Example 4.1 with 1.25% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and the Solution with No Regularization.

4.2 Four-Smoothing Problem, Lebesgue Measure

Example 4.2 Let $k(t) = \dfrac{t^3}{6}$, $\bar u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$, and $\eta_r$ Lebesgue measure. For the discretization $n = 100$, $\bar R = 1$, and $m = 0.001$ in our discrepancy principle, we have the following results:

$\delta$ | $\delta/\|f\|_{C[0,1+\bar R]}$ | $r(\delta)$ | $\|\bar u - u^\delta_{r(\delta)}\|_{C[0,1]}$ | ratio of successive errors
0.00300 | 4.1430e-004 | 0.50 | 3.8583 | --
0.00100 | 3.0318e-004 | 0.42 | 3.0355 | 0.959
0.00050 | 9.8153e-005 | 0.45 | 3.8783 | 0.75
0.00025 | 5.4340e-005 | 0.30 | 2.4749 | 0.978

Table 4.2. Example 4.2 Error Analysis

The ratios in the last column are predicted by Theorem 2.2 to be approximately $(\frac12)^{0.000999} = 0.9993$. The variation in the values obtained may in part be due to the fact that the ratio of the $\delta$ values is not exactly $\frac12$. We note that the value of $m = 0.001$ means that the discrepancy principle is approximately one like Morozov's Discrepancy Principle; the slow rate is associated with $m \approx 0$.
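The data-generation recipe used in these examples (discretize $f = A\bar u$, then add uniformly distributed random error at a prescribed relative level) can be sketched as follows for the one-smoothing kernel $k \equiv 1$ of Example 4.1, where $(A\bar u)(t) = \int_0^t \bar u(s)\,ds$. This is a hedged illustration, not the exact code behind the tables: the trapezoid rule stands in for the exact integration used in the text, and scaling the noise so that $\delta/\|f\|_{C[0,1+\bar R]}$ hits the target level exactly is one reasonable reading of the relative-error columns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid on [0, 1+R], with n = 600 and R = 0.583 taken from Example 4.1.
n, R = 600, 0.583
t = np.linspace(0.0, 1.0 + R, n)

# True solution of Example 4.1 and data f = A u for k(t) = 1, i.e.
# f(t) = int_0^t u(s) ds, approximated here by the cumulative trapezoid rule.
u_true = 1.0 + 3.0 * t * (np.sin(10.0 * t) - np.sin(t))
dt = t[1] - t[0]
f_true = np.concatenate(([0.0], np.cumsum(0.5 * dt * (u_true[1:] + u_true[:-1]))))

# Uniformly distributed noise, rescaled so that the relative sup-norm error
# delta / ||f||_{C[0,1+R]} equals the prescribed level (5% here).
rel = 0.05
noise = rng.uniform(-1.0, 1.0, size=n)
noise *= rel * np.max(np.abs(f_true)) / np.max(np.abs(noise))
f_delta = f_true + noise

# The noise level fed to the principle d(r) = tau * delta:
delta = np.max(np.abs(f_delta - f_true))
```

With `f_delta` and `delta` in hand, the discretized regularized equation $(a_r + A_r)u_r^\delta = f_r^\delta$ would be solved on a grid of $r$ values and the smallest root of $d(r) = \tau\delta$ selected as in Definition 3.2.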
See graphical illustrations in Figures 4.5--4.8 below.

Figure 4.5. Four-Smoothing Problem given in Examples 4.2 and 4.3 with 0.1% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and the Solution with No Regularization.

Figure 4.6. Four-Smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.1% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.69$.

Figure 4.7. Four-Smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.05% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.58$.

Figure 4.8. Four-Smoothing Problem with Lebesgue Measure given in Example 4.2 with 0.025% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.51$.

4.3 Four-Smoothing Problem, Continuous Measure

Example 4.3 Let $k(t) = \dfrac{t^3}{6}$, $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$, and $\eta_r$ a continuous measure as defined in Lemma 2.1, where $\psi$ is a fourth-degree polynomial with $p_\psi(\lambda) = (\lambda + c)^4$. For the discretization $n = 100$, $\bar R = 1$, and $m = 0.001$ in our discrepancy principle, we have the following results:

$\delta$ | $\delta/\|f\|_{C[0,1+\bar R]}$ | $r(\delta)$ | $\|\bar u - u^\delta_{r(\delta)}\|_{C[0,1]}$ | ratio of successive errors
0.00200 | 4.1784e-004 | 0.78 | 3.4179 | --
0.00100 | 1.9812e-004 | 0.69 | 3.3356 | 0.976
0.00050 | 1.0249e-004 | 0.58 | 3.8640 | 1.158
0.00025 | 6.2523e-005 | 0.51 | 4.3731 | 1.1864

Table 4.3. Example 4.3 Error Analysis I

$\delta$ | $\delta/\|f\|_{C[0,1+\bar R]}$ | $r(\delta)$ | $\|\bar u - u^\delta_{r(\delta)}\|_{L^2[0,1]}$ | ratio of successive errors
0.00200 | 4.1784e-004 | 0.78 | 0.75813 | --
0.00100 | 1.9812e-004 | 0.69 | 0.68989 | 0.910
0.00050 | 1.0249e-004 | 0.58 | 0.68321 | 0.990
0.00025 | 6.2528e-005 | 0.51 | 0.69961 | 1.019

Table 4.4. Example 4.3 Error Analysis II

The ratios in the last column are predicted by Theorem 2.2 to be approximately $(\frac12)^{0.000999} = 0.9993$, and should be the same as for Example 4.2 above. However, instability associated with finding the polynomial $\psi$ for the measure $\eta_r$ given in Lemma 2.1 leads to instability of the solution near $t = 0$. The results overall are better than in Example 4.2, but the growing error near $t = 0$ has a large negative effect on the ratios of $C[0,1]$ errors in the table above, compared to the $L^2[0,1]$ errors, which we expected to be smaller. This instability of the polynomial at $t = 0$ is a subject of future research. See graphical illustrations in Figure 4.5 above, and Figures 4.9--4.11 below.

Figure 4.9. Four-Smoothing Problem with Continuous Measure given in Example 4.3 with 0.1% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.69$.

Figure 4.10. Four-Smoothing Problem with Continuous Measure given in Example 4.3 with 0.05% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.58$.

Figure 4.11. Four-Smoothing Problem with Continuous Measure given in Example 4.3 with 0.025% Relative Error in the Data. Plots of $u(t) = 1 + 3t\,[\sin(10t) - \sin(t)]$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.51$.

4.4 Two-Smoothing Problem, Discrete Measure

Example 4.4 Let $k(t) = t$ and consider the true solution
\[ \bar u(t) = \begin{cases} 500, & t \in [.25,.35) \cup [.6,.7), \\ 1, & \text{otherwise.} \end{cases} \]
We take $\eta_r$ to be the discrete measure defined in Lemma 2.1, with weights corresponding to Beck's method as in [23]. For the discretization $n = 600$, $\bar R = 0.067$, and $m = 0.001$ in our discrepancy principle, we have the following results:

$\delta/\|f\|_{L^2[0,1+\bar R]}$ | $\delta$ | $r(\delta)$ | $\|\bar u - u^\delta_{r(\delta)}\|_{L^2[0,1]}$ | ratio of successive errors
0.0210 | 0.0737 | 0.0383 | 283.28 | --
0.0100 | 0.0376 | 0.0233 | 366.56 | 1.294
0.0050 | 0.0189 | 0.015 | 483.08 | 1.318
0.0024 | 0.0071 | 0.010 | 561.55 | 1.161

Table 4.5. Example 4.4 Error Analysis

Because the true $\bar u$ is discontinuous, our theory does not predict a convergence rate. It is obvious that the general shape of the curve is found quite well by our method; however, the anomalies at the jumps lead to large $L^2[0,1]$ error. The sharp spikes in the graphs of the approximations extend beyond the viewing window, making it difficult to visualize the impact they have on the $L^2[0,1]$ error as shown in the table. See graphical illustrations in Figures 4.12--4.14 below.

Figure 4.12. Two-Smoothing Problem with Discrete Measure given in Example 4.4 with 1% Relative Error in the Data. Plots of the step function $\bar u$ and
$u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.0233$.

Figure 4.13. Two-Smoothing Problem with Discrete Measure given in Example 4.4 with 0.5% Relative Error in the Data. Plots of the step function $\bar u$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.015$.

Figure 4.14. Two-Smoothing Problem with Discrete Measure given in Example 4.4 with 0.25% Relative Error in the Data. Plots of the step function $\bar u$ and $u^\delta_{r(\delta)}$ with predicted value of $r(\delta) = 0.010$.

BIBLIOGRAPHY

[1] A. B. Bakushinskii. Remarks on choosing a regularization parameter using the quasi-optimality and ratio criterion. Computational Mathematics and Mathematical Physics 24 (1984), 181-182.

[2] J. V. Beck, B. Blackwell, and C. R. St. Clair Jr. Inverse Heat Conduction. Interscience, New York (1985).

[3] T. A. Burton. Volterra Integral and Differential Equations. Academic Press, New York (1983).

[4] C. Corduneanu. Integral Equations and Applications. Cambridge University Press, Cambridge (1991).

[5] H. W. Engl. Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Journal of Optimization Theory and Applications 52 (1987), no. 2, 209-215.

[6] H. W. Engl and A. Neubauer. Optimal discrepancy principles for the Tikhonov regularization of integral equations of the first kind. In: G. Hämmerlin, K. H. Hoffmann, editors. Constructive Methods for the Practical Treatment of Integral Equations. Birkhäuser, Germany (1985), 120-141.

[7] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, Dordrecht (1996).

[8] H. W. Engl and C. Hoodina. Uniform convergence of regularization methods for linear ill-posed problems. Journal of Computational and Applied Mathematics 41 (1991), 87-103.

[9] H. W. Engl and O. Scherzer. Convergence rates results for iterative methods for solving nonlinear ill-posed problems. In: D. Colton, H. W. Engl, A. Louis, J. R. McLaughlin, W. Rundell, editors. Surveys on Solution Methods for Inverse Problems. Springer-Verlag, Vienna (2000), 7-34.

[10] L. C. Evans. Partial Differential Equations. American Mathematical Society, Providence (1998).

[11] G. D. Faulkner and J. E. Huneycutt, Jr. Orthogonal decomposition of isometries in a Banach space. Proceedings of the American Mathematical Society 69 (1978), no. 1, 125-128.

[12] J. R. Giles. Classes of semi-inner-product spaces. Transactions of the American Mathematical Society 129 (1967), no. 3, 436-446.

[13] G. Gripenberg, S. O. Londen, and O. Staffans. Volterra Integral and Functional Equations. Cambridge University Press, Cambridge (1990).

[14] C. W. Groetsch. On the asymptotic order of accuracy of Tikhonov regularization. Journal of Optimization Theory and Applications 41 (1983), no. 2, 293-298.

[15] C. W. Groetsch. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman, Boston (1984).

[16] C. W. Groetsch and J. Guacaneme. Arcangeli's method for Fredholm equations of the first kind. Proceedings of the American Mathematical Society 99 (1987), no. 2, 256-260.

[17] H. Gfrerer. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Mathematics of Computation 49 (1987), 507-522.

[18] U. Hämarik and U. Tautenhahn. On the monotone error rule for parameter choice in iterative and continuous regularization methods. BIT Numerical Mathematics 41 (2001), 1029-1038.

[19] M. Hanke. Limitations of the L-curve method in ill-posed problems. BIT Numerical Mathematics 36 (1996), no. 2, 287-301.

[20] M. Hanke and T. Raus. A general heuristic for choosing the regularization parameter in ill-posed problems. SIAM Journal on Scientific Computing 17 (1996), no. 4, 956-972.

[21] A. Kirsch. An Introduction to the Mathematical Theory of Inverse Problems. Springer-Verlag, New York (1996).

[22] P. K. Lamm. Full convergence of sequential local regularization methods for Volterra inverse problems. Inverse Problems 21 (2005), 785-803.

[23] P. K. Lamm. Future-sequential regularization methods for ill-posed Volterra equations: Applications to the inverse heat conduction problem. Journal of Mathematical Analysis and Applications 195 (1995), 469-494.

[24] P. K. Lamm. A survey of regularization methods for first kind Volterra equations. In: D. Colton, H. W. Engl, A. Louis, J. R. McLaughlin, W. Rundell, editors. Surveys on Solution Methods for Inverse Problems. Springer-Verlag, Vienna (2000), 53-82.

[25] P. K. Lamm. Regularized inversion of finitely smoothing Volterra operators: Predictor-corrector regularization methods. Inverse Problems 13 (1997), 375-402.

[26] P. K. Lamm and Z. Dai. On local regularization methods for linear Volterra equations and nonlinear equations of Hammerstein type. Inverse Problems 21 (2005), 1773-1790.

[27] P. K. Lamm and T. L. Scofield. Sequential predictor-corrector methods for the variable regularization of Volterra inverse problems. Inverse Problems 16 (2000), 373-399.

[28] P. K. Lamm and T. L. Scofield. Local regularization methods for the stabilization of ill-posed Volterra problems. Numerical Functional Analysis and Optimization 22 (2001), 913-940.

[29] M. M. Lavrentiev, V. G. Romanov, and S. P. Shishatskii. Ill-Posed Problems of Mathematical Physics and Analysis. American Mathematical Society, Providence (1986).

[30] V. A. Morozov. On the solution of functional equations by the method of regularization. Soviet Mathematics Doklady 7 (1966), 414-417.

[31] S. George and M. T. Nair. An a posteriori parameter choice for simplified regularization of ill-posed problems. Integral Equations and Operator Theory 16 (1993), 392-399.

[32] M. T. Nair and U. Tautenhahn. Lavrentiev regularization for linear ill-posed problems under general source conditions. Zeitschrift für Analysis und ihre Anwendungen 23 (2004), no. 1, 167-185.

[33] R. Plato. On the discrepancy principle for iterative and parametric methods to solve linear ill-posed equations. Numerische Mathematik 75 (1996), 99-120.

[34] T. Raus. Residue principle for ill-posed problems. Acta et Commentationes Universitatis Tartuensis de Mathematica 672 (1984), 16-26. In Russian.

[35] E. Resmerita. Regularization of ill-posed problems in Banach spaces: Convergence rates. Inverse Problems 21 (2005), 1303-1314.

[36] W. Ring and J. Prix. Sequential predictor-corrector regularization methods and their limitations. Inverse Problems 16 (2000), 619-634.

[37] W. Rudin. Real and Complex Analysis. 3rd ed. McGraw-Hill, New York (1987).

[38] S. Pereverzev and E. Schock. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM Journal on Numerical Analysis 43 (2005), 2060-2076.

[39] E. Schock. On the asymptotic order of accuracy of Tikhonov regularization. Journal of Optimization Theory and Applications 44 (1984), 95-104.

[40] E. Schock. Parameter choice by discrepancy principles for the approximate solution of ill-posed problems. Integral Equations and Operator Theory 7 (1984), 895-898.

[41] F. Schöpfer, A. K. Louis, and T. Schuster. Nonlinear iterative methods for linear ill-posed problems in Banach spaces. Inverse Problems 22 (2006), 311-329.

[42] H. Tanabe. Equations of Evolution. Pitman, London (1979).

[43] U. Tautenhahn. On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Problems 18 (2002), 191-207.

[44] G. M. Wing. A Primer on Integral Equations of the First Kind. SIAM, Philadelphia (1991).