This is to certify that the dissertation entitled "Local Regularization Methods for n-Dimensional First-Kind Integral Equations," presented by Changjun Cui, has been accepted towards fulfillment of the requirements for the Doctoral degree in Mathematics. Major Professor's Signature, Date: 5/6/05.

Local Regularization Methods for n-Dimensional First-Kind Integral Equations

By

Changjun Cui

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mathematics

2005

ABSTRACT

Local Regularization Methods for n-Dimensional First-Kind Integral Equations

By Changjun Cui

We examine a new method for regularizing ill-posed first-kind (non-Volterra) integral equations on R^n. We analyze the method and prove that the regularized solutions converge to the true solution. We also develop a numerical algorithm based on this theory, and present evidence that this local regularization method is superior to classical regularization methods in that it can recover sharp features in the true solution. Our numerical examples also show that this algorithm converges faster than existing techniques.

ACKNOWLEDGMENTS

My foremost thanks go to my thesis adviser, Professor Patricia Lamm. Without her insight, direction, suggestions, and encouragement, this dissertation would not have been possible. I especially thank her for her patience and strong support, which carried me through difficult times when my research progressed more slowly than that of other students.

I also want to thank all of the professors whose classes I took at Michigan State University; several of them are also my thesis committee members. I am deeply grateful for the knowledge and research skills I gained during my graduate studies.

TABLE OF CONTENTS

1 Introduction
2 Basic Definitions
  2.1 The Local Regularization Parameters r and α
  2.2 The Spaces X, X̃ and X_r
  2.3 The Map E_r : X_r → X̃
  2.4 Construction of F_r, F̃_r, F_r^δ, F̃_r^δ, U_r, Ũ_r ∈ X_r
  2.5 Operators A_r, B_r, ℓ, T, T_r and C_r
  2.6 The Local Regularization Problem P^δ_{r,α}
3 Convergence
4 Numerical Implementation
  4.1 The Discrete Local Regularization Problem
  4.2 Numerical Examples
BIBLIOGRAPHY

1 Introduction

Consider the equation

    Au = f,                                             (1.1)

where, for Ω = [0,1]^n ⊂ R^n, A is the bounded linear operator on L²(Ω) given by

    Au(x) = ∫_Ω k(x, s) u(s) ds,  a.a. x ∈ Ω.           (1.2)

Here k ∈ L^∞(Ω × Ω) satisfies

    |k(x, s) − k(y, s)| ≤ L_k(s) ‖x − y‖^{m_k},  a.a. x, y, s ∈ Ω,   (1.3)

for m_k > 0 and L_k ∈ L²(Ω), where ‖·‖ is the Euclidean norm on R^n. We assume that equation (1.1) has solutions and that ū ∈ L²(Ω) is the (unique) minimum norm solution. Clearly f ∈ L^∞(Ω), and we will assume ū ∈ L^∞(Ω).
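The forward map (1.2) is easy to realize numerically. The following short sketch, which is not part of the original thesis, discretizes a one-dimensional version of A on [0, 1] with a midpoint rule; the Gaussian-type kernel and the grid size are placeholders chosen only to illustrate the severe ill-conditioning discussed below.

```python
import numpy as np

def forward_matrix(kernel, N):
    """Midpoint-rule discretization of (Au)(x) = int_0^1 k(x, s) u(s) ds on [0, 1]."""
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h            # midpoints x_i
    K = kernel(x[:, None], x[None, :])      # K[i, j] = k(x_i, s_j)
    return h * K                            # (A u)_i ~ h * sum_j k(x_i, s_j) * u_j

# example with a smooth (hence smoothing, ill-posed) kernel
gauss = lambda x, s: np.exp(-5.0 * (x - s) ** 2)
A = forward_matrix(gauss, 100)
print(np.linalg.cond(A))   # very large: the discrete problem is severely ill-conditioned
```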
Equation (1.1) can be used as a mathematical model of an inverse problem: given data f representing a desired or observed effect in a real-world problem, determine the cause, which is represented by a physically relevant solution (often a generalized solution) ū of equation (1.1). Mathematical inverse problems arise in many areas, ranging from physics to population problems. For example:

1. The Inverse Heat Conduction Problem (IHCP): consider an insulated semi-infinite bar coincident with the non-negative x-axis. An unknown heat source u(t) is applied to the end of the bar at x = 0, where t is the time variable. The temperature of the bar at x = 1 is measured as f(t). Then the unknown heat source u(t) is the solution of the first-kind Volterra integral equation

    ∫_0^t k(t − s) u(s) ds = f(t),  t ∈ [0, 1],          (1.4)

with kernel function

    k(t) = (1 / (2 √(π t³))) e^{−1/(4t)}.                (1.5)

2. Blurred Image Reconstruction: let u(x), x = (x₁, x₂), represent the gray-level values of an image on Ω ⊂ R², where Ω = [0, 1] × [0, 1]. The image blurring is governed by a blurring operator H: for β > 0,

    Hu(x) = (β/π) ∫_Ω e^{−β ‖x − y‖²} u(y) dy.           (1.6)

When β → ∞, H becomes the identity operator and no blurring occurs. Image reconstruction amounts to solving the first-kind integral equation Hu = g, where g is the observed image gray level. Notice this is a non-Volterra equation.

3. X-ray Tomography, or Radon inversion: this is a medical application in which we recover a human body's density f from X-ray measurements in a cross-section of the body.

4. The time-resolved fluorescence problem in physical chemistry: a substance to be analyzed is illuminated by a short pulse and absorbs energy (photons) as some molecules switch into an excited state. In order to determine the fluorescence lifetime distribution a(t), which determines the probability density for a molecule excited at time t₀ = 0 to emit a photon at time t, we need to solve an inverse problem represented by a first-kind integral equation with the observed fluorescence intensity as given data.

Most inverse problems are ill-posed in the sense that the solution ū does not depend continuously on the data f. In fact, in (1.1), as long as k is a non-degenerate function on Ω, the range of A is not closed, which means A⁻¹ is unbounded, i.e., discontinuous. So an inverse problem is usually unstable. In a practical setting, this implies that whenever our given data is an imprecise or noisy measurement (say f^δ), we cannot rely upon the solution u^δ obtained by solving equation (1.1) with the perturbed data f^δ in place of the accurate data f. Unfortunately, the presence of small data errors is unavoidable due to measurement errors or computer round-off errors, and this leads to highly oscillatory solutions.

To overcome the ill-posedness of inverse problems, different regularization methods have been developed. The regularization of an ill-posed problem gives a method of constructing u^δ in such a way that we have guaranteed convergence u^δ → ū as the noise level δ → 0. That is, for sufficiently small δ with ‖f − f^δ‖ ≤ δ, the regularized approximation u^δ should be reasonably close to the true solution ū.

The most well-known and classical regularization method is Tikhonov regularization. Assume K : H₁ → H₂ is a compact linear operator, where H₁, H₂ are Hilbert spaces.
The singular system, or singular value decomposition (SVD), of K is the sequence {u_n, v_n; λ_n} that satisfies the following conditions:

• λ₁ ≥ λ₂ ≥ ··· ≥ 0,
• λ_n → 0 as n → ∞,
• {u_n} and {v_n} are complete orthonormal sets in H₁ and H₂, respectively,
• K u_n = λ_n v_n and K* v_n = λ_n u_n for n = 1, 2, ....

If g ∈ H₂ satisfies Σ_{n=1}^∞ ((1/λ_n) (g, v_n))² < ∞, then one can show that the minimum norm solution ū of K u = g is given by

    ū = K† g = Σ_{n=1}^∞ (1/λ_n) (g, v_n) u_n,            (1.7)

where K† is the generalized inverse of K. When the true data g is replaced by noisy data g^δ, K† g^δ as defined in equation (1.7) magnifies the high-frequency components of the error by the factor 1/λ_n → ∞ as n → ∞. A classical regularization technique uses the construction

    u_α^δ = Σ_{n=1}^∞ M_α(λ_n) (g^δ, v_n) u_n              (1.8)

to limit the effect of high-frequency errors. It can be shown that with suitably chosen M_α and α = α(δ), we can ensure that u_α^δ → ū as δ → 0. The scalar α is the regularization parameter. The Tikhonov regularization method is defined by

    M_α(λ) = λ / (λ² + α).                                 (1.9)

Historically, Tikhonov introduced his regularization method through the formulation

    u_α^δ = arg min_u { ‖K u − g^δ‖² + α ‖u‖² },           (1.10)

which is equivalent to the definition given by (1.9). Using (1.10), it is easier to see that the Tikhonov regularization parameter α serves to trade off accuracy of the model fit (the first term in (1.10)) against the stability achieved through the penalty term ‖u‖². For Tikhonov regularization theory, see [5] and [6].

Tikhonov regularization has a very noticeable disadvantage: it tends to produce an overly smooth solution. Sharp or discontinuous features in the true solution tend to be lost during the Tikhonov regularization process. Solutions with sharp or discontinuous features are typical in image reconstruction and signal processing applications. More recently, several regularization methods have been proposed and studied to avoid this over-smoothing. One method, called bounded variation regularization, is to seek

    u_β^δ = arg min_u { ‖K u − g^δ‖² + β G(u) },           (1.11)

where the penalty term is the bounded variation seminorm

    G(u) = ∫_Ω |∇u|(x) dx.                                 (1.12)

While bounded variation regularization can capture sharp features and discontinuities much better than classical Tikhonov regularization, it brings its own drawbacks. One drawback is that the penalty term G(u) is not differentiable, which leads to nontrivial difficulties in practical implementation because standard optimization packages cannot be used. As a result, numerical algorithms derived from bounded variation regularization methods are costly and converge slowly. For references on bounded variation regularization, see [1], [2], [3], [4], and [15].
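Before turning to local regularization, it may help to see the classical construction (1.8)-(1.9) in concrete form. The sketch below is my own illustration, not part of the thesis; it applies the Tikhonov filter factors to a discretized operator, with the matrix A and the data vector assumed to come from a discretization such as the one shown earlier.

```python
import numpy as np

def tikhonov_filter_solve(A, g_delta, alpha):
    """Filtered expansion (1.8) with the Tikhonov filter (1.9):
    u = sum_n  lam_n / (lam_n**2 + alpha) * <g_delta, v_n> * u_n."""
    U, lam, Vt = np.linalg.svd(A, full_matrices=False)
    filt = lam / (lam ** 2 + alpha)            # Tikhonov filter factors M_alpha(lam_n)
    return Vt.T @ (filt * (U.T @ g_delta))     # damped back-substitution

# usage sketch: with A and noisy data g_delta in hand, pick alpha on the order
# of the noise level and compare tikhonov_filter_solve(A, g_delta, alpha) to
# the unfiltered (and wildly oscillatory) least-squares solution.
```

This is algebraically the same as solving the normal equations (AᵀA + αI) u = Aᵀ g^δ of (1.10); the SVD form simply makes the damping of the small singular values explicit.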
Another regularization theory being studied is the method of local regularization. As the name suggests, the motivation of local regularization is to regularize locally, applying more fine-tuned local controls during the regularization process in order to recover as many features of the true solution as possible.

To illustrate the idea of local regularization, let us look at equation (1.1). We define the local regularization parameter r = (r₁, r₂, ..., r_n), where r_i ∈ (0, 1/2), and consider

    A ū(x + ρ) = f(x + ρ),  where ρ = (ρ_i)_{i=1}^n, ρ_i ∈ [−r_i, r_i], and x = (x_i)_{i=1}^n, x_i ∈ [r_i, 1 − r_i].   (1.13)

This can be rewritten as ∫_Ω k(x + ρ, s) ū(s) ds = f(x + ρ), for ρ and x satisfying (1.13). That is,

    ∫_{Ω∖B(x,r)} k(x + ρ, s) ū(s) ds + ∫_{B(x,r)} k(x + ρ, s) ū(s) ds = f(x + ρ),   (1.14)

where B(x, r) = { y = (y_i)_{i=1}^n ∈ R^n : |y_i − x_i| ≤ r_i }. It is easy to see that B(x, r) ⊂ Ω. Equation (1.14) implies

    ∫_{B(0,r)} k(x + ρ, x + s) ū(x + s) ds + ∫_{Ω∖B(x,r)} k(x + ρ, s) ū(s) ds = f(x + ρ).   (1.15)

The above equations hold for ρ and x satisfying (1.13). The first term on the left-hand side of equation (1.15) represents the action of A on the x-dependent "local part" of ū, so that equation (1.15) suggests a decomposition of the operator A into "global" and "local" parts, for each x ∈ Ω. This splitting of A is the basis of the local regularization method and allows us to apply regularization only to the local part of A.

It is worth noting that, in addition to the advantage of reconstructing sharp features in the true solution, when applied to a Volterra equation such as

    ∫_0^t k(t, s) u(s) ds = f(t),                           (1.16)

local regularization methods can retain the causal structure of the original Volterra problem, a structure that would be destroyed by classical Tikhonov regularization. For this type of local regularization theory and numerical results (called sequential local regularization), see [7], [8], [9], [10], [12], [13] and [14].

When solving a non-Volterra type inverse problem (the subject of this paper), an iterative local regularization procedure is used. Intuitively, the procedure works as follows: an initial guess is made for the solution on the entire domain; the solution is then iteratively updated on small subdomains by solving a local regularization problem while assuming that the solution off the small subdomain (i.e., the global part) is fixed at its previously set value. This method was first introduced in [11].

In this paper, we study the first-kind integral equation on R^n using a different set of regularization parameters. The resulting numerical algorithm converges faster than the one presented in [11].

2 Basic Definitions

In this section we make precise the "local regularization" ideas presented in the Introduction (equation (1.15)) by defining all required spaces, operators and regularization parameters.

2.1 The Local Regularization Parameters r and α

Our first regularization parameter is r = (r_i)_{i=1}^n, where r_i ∈ (0, 1/2). We use the following notation in this paper:

    1 − r ≡ (1 − r_i)_{i=1}^n,
    (r, 1 − r) = { x ∈ R^n : r_i < x_i < 1 − r_i for i = 1, 2, ..., n },
    (−r, r) = { x ∈ R^n : −r_i < x_i < r_i for i = 1, 2, ..., n },

with the closed boxes [r, 1 − r] and [−r, r] defined analogously. Our second regularization parameter is a function α belonging to the set

    A ≡ { α ∈ L^∞(Ω) : α_min ≡ ess inf_{x∈Ω} α(x) > 0 }.

2.2 The Spaces X, X̃ and X_r

Let Δ = 1. We use the following notation:

    X = L²((−Δ, Δ)^n), with the usual norm |·|_X and the usual inner product (·,·)_X;
    X̃ ≡ L²(Ω; X), with norm ‖φ‖²_X̃ = ∫_Ω |φ(x)|²_X dx = ∫_Ω ∫_{(−Δ,Δ)^n} |φ(x)(p)|² dp dx;
    X_r ≡ L²((r, 1 − r); L²((−r, r)^n)), with norm
        ‖φ‖²_r = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |φ(x)(ρ)|² dρ dx.

Given α ∈ A, we can also define an equivalent norm on X_r:

    ‖φ‖²_{r,α} = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} α(x) ∫_{[−r,r]} |φ(x)(ρ)|² dρ dx.

2.3 The Map E_r : X_r → X̃

Define E_r : X_r → X̃ by

    E_r φ(x)(p) = φ(x)(p₁r₁, p₂r₂, ..., p_n r_n) for x ∈ (r, 1 − r) and p ∈ (−Δ, Δ)^n,
    E_r φ(x)(p) = 0 otherwise.                             (2.2)

A change of variables ρ̃ = (p₁r₁, p₂r₂, ..., p_n r_n) then gives

    ‖E_r φ‖²_X̃ = 2^n ‖φ‖²_r.                              (2.3)

2.4 Construction of F_r, F̃_r, F_r^δ, F̃_r^δ, U_r, Ũ_r ∈ X_r

We define

    F_r(x)(ρ) = f(x + ρ),  a.a. ρ ∈ [−r, r], x ∈ [r, 1 − r],

and

    F̃_r(x)(ρ) = f(x),  a.a. ρ ∈ [−r, r], x ∈ [r, 1 − r].

Similar definitions can be made for F_r^δ, F̃_r^δ (where f^δ replaces f) and for U_r, Ũ_r (where ū replaces f). Apparently U_r, Ũ_r ∈ X_r, and we can show that

    ‖U_r‖_r ≤ ‖ū‖_∞,  ‖Ũ_r‖_r ≤ ‖ū‖_∞.                    (2.4)

In fact,

    ‖U_r‖²_r = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |ū(x + ρ)|² dρ dx
             ≤ ‖ū‖²_∞ (1 / (2^n r₁ r₂ ··· r_n)) · 2^n r₁ r₂ ··· r_n (1 − 2r₁)(1 − 2r₂) ··· (1 − 2r_n)
             ≤ ‖ū‖²_∞,

and similar statements can be made for Ũ_r.
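Concretely, in one dimension F_r is nothing more than the data f sampled on a sliding window of half-width r. The following sketch, my own illustration with hypothetical names, builds such local patches from gridded data.

```python
import numpy as np

def local_patches(f_vals, x_grid, r, n_rho=11):
    """Sketch of F_r(x)(rho) = f(x + rho) in 1-D: for each interior node x_j with
    r <= x_j <= 1 - r, sample f on the local window [x_j - r, x_j + r]."""
    rho = np.linspace(-r, r, n_rho)
    interior = (x_grid >= r) & (x_grid <= 1.0 - r)
    # F[j, l] = f(x_j + rho_l), obtained by interpolating the sampled data
    F = np.array([np.interp(xj + rho, x_grid, f_vals) for xj in x_grid[interior]])
    return x_grid[interior], rho, F
```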
2.5 Operators A_r, B_r, ℓ, T, T_r and C_r

The operator A_r : X_r → X_r is given by

    A_r φ(x)(ρ) = ∫_{[−r,r]} k(x + ρ, x + s) φ(x)(s) ds,  a.a. ρ ∈ [−r, r], x ∈ [r, 1 − r],   (2.5)

and A_r is a bounded linear operator on X_r with

    ‖A_r‖ ≤ 2^n r₁ r₂ ··· r_n ‖k‖_∞.                      (2.6)

The operator B_r : L²(Ω) → X_r is given by, for η ∈ L²(Ω) and a.a. x ∈ Ω,

    B_r η(x)(ρ) = ∫_{Ω∖B(x,r)} k(x + ρ, s) η(s) ds,  a.a. ρ ∈ [−r, r] and x ∈ [r, 1 − r].

We can similarly show that B_r is a bounded linear operator from L²(Ω) to X_r with

    ‖B_r‖ ≤ ‖k‖_∞.                                        (2.7)

As a matter of fact, we have

    ‖B_r η‖²_r = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |B_r η(x)(ρ)|² dρ dx
    ≤ (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} ( ∫_{Ω∖B(x,r)} k²(x + ρ, s) ds ) ( ∫_{Ω∖B(x,r)} η²(s) ds ) dρ dx
    ≤ (1 / (2^n r₁ r₂ ··· r_n)) ‖k‖²_∞ ∫_{[r,1−r]} ∫_{[−r,r]} ‖η‖²_{L²(Ω)} dρ dx
    ≤ ‖k‖²_∞ ‖η‖²_{L²(Ω)}.

Let ℓ ∈ X* be fixed and normalized so that ℓ(1) = 1, where 1 ∈ X is given by 1(p) = 1 for a.a. p. Let γ₁ denote the unique nonzero element of X satisfying

    ℓ(v) = (v, γ₁)_X,  v ∈ X.                             (2.8)

Notice that we must have ∫_{[−Δ,Δ]^n} γ₁(p) dp = 1. Define T ∈ L(X̃, L²(Ω)) by

    Tψ(x) ≡ ℓ(ψ(x)),  a.a. x ∈ Ω,                         (2.9)

for ψ ∈ X̃. We also define a bounded linear operator T_r : X_r → L²(Ω) via T_r ≡ T E_r. Then we have

    T_r Ũ_r(x) = ū(x) for x ∈ (r, 1 − r),  T_r Ũ_r(x) = 0 otherwise.

This is because

    E_r Ũ_r(x)(p) = ū(x) for x ∈ (r, 1 − r) and p ∈ (−Δ, Δ)^n,  and 0 otherwise,

so that T_r Ũ_r(x) = ℓ(E_r Ũ_r(x)) = ū(x) ℓ(1) = ū(x) for x ∈ (r, 1 − r), and T_r Ũ_r(x) = 0 otherwise. Notice that

    ‖T_r Ũ_r − ū‖²_{L²(Ω)} = ∫_Ω |T_r Ũ_r(x) − ū(x)|² dx = ∫_{Ω∖(r,1−r)} |ū(x)|² dx ≤ (3^n − 1) ‖r‖_∞ ‖ū‖²_∞,   (2.10)

where ‖r‖_∞ = max_i r_i, so that ‖T_r Ũ_r − ū‖²_{L²(Ω)} = O(‖r‖). We can also get

    ‖T_r φ‖²_{L²(Ω)} ≤ ‖T‖² ‖E_r φ‖²_X̃ = ‖T‖² · 2^n · ‖φ‖²_r,  φ ∈ X_r.   (2.11)

Finally, define C_r : X_r → X_r by C_r ≡ A_r + B_r T_r.

2.6 The Local Regularization Problem P^δ_{r,α}

Definition 2.1 Let r = (r_i)_{i=1}^n with r_i ∈ (0, 1/2) for each i, let α ∈ A, and let δ > 0. Also assume that f^δ ∈ L^∞(Ω) is given satisfying ‖f − f^δ‖_∞ < δ. The Problem P^δ_{r,α} is the problem of finding φ^δ_{r,α} such that

    φ^δ_{r,α} = arg min_{φ ∈ X_r} { ‖C_r φ − F_r^δ‖²_r + ‖φ‖²_{r,α} }.

The following theorem follows from classical Tikhonov regularization theory.

Theorem 2.1 Let r, α, δ and f^δ satisfy the conditions stated in Definition 2.1. Then there exists a unique solution φ^δ_{r,α} of Problem P^δ_{r,α}. Both φ^δ_{r,α} ∈ X_r and η^δ_{r,α} ≡ T_r φ^δ_{r,α} ∈ L²(Ω) depend continuously on F_r^δ ∈ X_r, and thus on the data f^δ ∈ L^∞(Ω).

We first present the following lemma.

Lemma 2.1 Let r, α, δ and f^δ satisfy the conditions of Definition 2.1 and let φ^δ_{r,α} denote the solution of Problem P^δ_{r,α}. Then

    ‖C_r φ^δ_{r,α} − F_r^δ‖²_r + ‖φ^δ_{r,α}‖²_{r,α} ≤ C [ (r₁² r₂² ··· r_n² ‖k‖²_∞ + ‖α‖_∞) ‖ū‖²_∞ + δ² ],

for some C > 0 independent of r, α and δ.

Proof: First we observe that

    ‖A_r U_r + B_r T_r Ũ_r − F_r‖²_r
    = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |A_r U_r(x)(ρ) + B_r T_r Ũ_r(x)(ρ) − F_r(x)(ρ)|² dρ dx
    = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |A_r U_r(x)(ρ) + B_r ū(x)(ρ) − f(x + ρ)|² dρ dx
    = 0,

because (1.15) shows exactly that f(x + ρ) = A_r U_r(x)(ρ) + B_r ū(x)(ρ). Hence

    ‖C_r φ^δ_{r,α} − F_r^δ‖²_r + ‖φ^δ_{r,α}‖²_{r,α}
    ≤ ‖C_r Ũ_r − F_r^δ‖²_r + ‖Ũ_r‖²_{r,α}
    ≤ 2 ( ‖A_r(Ũ_r − U_r)‖_r + ‖A_r U_r + B_r T_r Ũ_r − F_r‖_r )² + 2 ‖F_r − F_r^δ‖²_r + ‖Ũ_r‖²_{r,α}
    = 2 ‖A_r(Ũ_r − U_r)‖²_r
      + 2 ‖F_r − F_r^δ‖²_r + ‖Ũ_r‖²_{r,α}.

The conclusion of this lemma then follows from the facts that

    ‖F_r − F_r^δ‖²_r = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |F_r(x)(ρ) − F_r^δ(x)(ρ)|² dρ dx
    = (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} |f(x + ρ) − f^δ(x + ρ)|² dρ dx
    ≤ ‖f − f^δ‖²_∞ (1 / (2^n r₁ r₂ ··· r_n)) ∫_{[r,1−r]} ∫_{[−r,r]} dρ dx
    ≤ ‖f − f^δ‖²_∞,

and (using (2.4), (2.6))

    ‖Ũ_r‖²_{r,α} ≤ ‖α‖_∞ ‖Ũ_r‖²_r ≤ ‖α‖_∞ ‖ū‖²_∞,

    ‖A_r(Ũ_r − U_r)‖²_r ≤ ‖A_r‖² ‖Ũ_r − U_r‖²_r ≤ 2^{2n} r₁² r₂² ··· r_n² ‖k‖²_∞ · 4 ‖ū‖²_∞.

3 Convergence

Our main convergence result is as follows:

Theorem 3.1 Let {δ_k}_{k=1}^∞ ⊆ R⁺ with δ_k → 0 as k → ∞. Let {r_k}_{k=1}^∞ and {α_k}_{k=1}^∞ ⊆ A be given such that ‖r_k‖ ∈ (0, 1/2) and ‖r_k‖, ‖α_k‖_∞ → 0 as k → ∞. Assume further that there is M > 0 such that

    (1) δ_k² / α_{k,min} → 0,
    (2) ‖r_k‖^n / δ_k ≤ M,
    (3) ‖α_k‖_∞ / α_{k,min} → 1,

as k → ∞. For each k = 1, 2, ..., let f^{δ_k} ∈ L^∞(Ω) be given with ‖f − f^{δ_k}‖ < δ_k, let φ^{δ_k}_{r_k,α_k} ∈ X_{r_k} denote the solution of Problem P^{δ_k}_{r_k,α_k} associated with f^{δ_k}, and let

    η_k ≡ T_{r_k} φ^{δ_k}_{r_k,α_k}.                      (3.1)

Then

    η_k → ū in L²(Ω), as k → ∞,

where ū is the solution of the original problem (1.1).

To simplify the notation, we will henceforth write φ_k ≡ φ^{δ_k}_{r_k,α_k}, P_k ≡ P^{δ_k}_{r_k,α_k}, X_k ≡ X_{r_k}, F_k ≡ F_{r_k}, F_k^δ ≡ F^{δ_k}_{r_k}, U_k ≡ U_{r_k}, Ũ_k ≡ Ũ_{r_k}, E_k ≡ E_{r_k}, T_k ≡ T_{r_k}, A_k ≡ A_{r_k}, and so on.

Lemma 3.1 Let {δ_k}_{k=1}^∞ ⊆ R⁺ and {α_k}_{k=1}^∞ ⊆ A. Let {r_k}_{k=1}^∞ satisfy ‖r_k‖ ∈ (0, 1/2), and assume that there exists M > 0 such that

    (1) δ_k² / α_{k,min} ≤ M,
    (2) ‖r_k‖^n / δ_k ≤ M,
    (3) ‖α_k‖_∞ / α_{k,min} ≤ M,

as k → ∞. For each k = 1, 2, ..., let f^{δ_k} ∈ L^∞(Ω) be given with ‖f − f^{δ_k}‖_∞ < δ_k, and let φ_k ∈ X_k denote the solution of Problem P_k associated with f^{δ_k}, with η_k ≡ T_k φ_k ∈ L²(Ω). Let φ̃_k ≡ E_k φ_k ∈ X̃. Then there is φ̃ ∈ X̃ and a subsequence of {φ̃_k} which converges weakly in X̃ to φ̃. That is, relabelling the subsequential indices,

    φ̃_k ⇀ φ̃ in X̃ as k → ∞.

Further, η ∈ L²(Ω) defined by η ≡ T φ̃ is such that (using the same relabelling of indices as above)

    η_k ⇀ η in L²(Ω) as k → ∞.

Proof. We note that

    ‖E_k φ_k‖²_X̃ = 2^n ‖φ_k‖²_{r_k} ≤ (2^n / α_{k,min}) ‖φ_k‖²_{r_k,α_k} ≤ (2^n / α_{k,min}) [ ‖C_k φ_k − F_k^δ‖²_{r_k} + ‖φ_k‖²_{r_k,α_k} ],

so that Lemma 2.1 together with conditions (1), (2) and (3) gives a bound on ‖φ̃_k‖_X̃ that is uniform in k. A bounded sequence in the Hilbert space X̃ has a weakly convergent subsequence; relabelling the indices, φ̃_k ⇀ φ̃ in X̃ for some φ̃ ∈ X̃. Finally, η_k = T_k φ_k = T E_k φ_k = T φ̃_k ⇀ T φ̃ ≡ η in L²(Ω), since T : X̃ → L²(Ω) is a bounded linear operator.

Lemma 3.2 Assume {δ_k}, {r_k}, {α_k} and {f^{δ_k}} are given satisfying the conditions of Lemma 3.1, where we additionally assume that δ_k → 0, ‖r_k‖ → 0, and ‖α_k‖_∞ → 0, as k → ∞. Then for η and φ̃ given by Lemma 3.1, it follows that η is a solution of Au = f and φ̃ is a solution of Ãψ = f, for Ã ∈ L(X̃, L²(Ω)) defined by

    Ã ≡ A T.

Proof. For k = 1, 2, ..., define Ã_k ∈ L(L²(Ω), X_k) via

    Ã_k u(x)(ρ) = Au(x),  a.a. ρ ∈ [−r_k, r_k] and x ∈ [r_k, 1 − r_k],

for A the original operator defined in (1.2) and u ∈ L²(Ω). Then, for η given by Lemma 3.1, we have

    ‖Ã_k η − F̃_k‖²_{r_k} = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} |Ã_k η(x)(ρ) − F̃_k(x)(ρ)|² dρ dx
    = ∫_{[r_k,1−r_k]} |Aη(x) − f(x)|² dx
    = ∫_Ω |Aη(x) − f(x)|² χ_{[r_k,1−r_k]}(x) dx
    → ∫_Ω |Aη(x) − f(x)|² dx = ‖Aη − f‖²_{L²(Ω)}  as k → ∞,   (3.2)

where χ_{[r_k,1−r_k]}(·) is the characteristic function of the interval [r_k, 1 − r_k]. Notice that in (3.2) the convergence is guaranteed because Aη − f ∈ L²(Ω) (from f ∈ L^∞(Ω) and Aη ∈ L²(Ω)), so that dominated convergence applies. So we only need to show that ‖Ã_k η − F̃_k‖_{r_k} → 0 as k → ∞. To this purpose, we split ‖Ã_k η − F̃_k‖_{r_k} and get

    ‖Ã_k η − F̃_k‖_{r_k} ≤ T₁^k + T₂^k + T₃^k + T₄^k + T₅^k,

where, using φ_k and η_k as defined in Lemma 3.1,

    T₁^k = ‖A_k φ_k + B_k T_k φ_k − F_k^δ‖_{r_k},
    T₂^k = ‖A_k φ_k‖_{r_k},
    T₃^k = ‖Ã_k η − B_k T_k φ_k‖_{r_k},
    T₄^k = ‖F_k^δ − F_k‖_{r_k},
    T₅^k = ‖F_k − F̃_k‖_{r_k}.

In the remainder we will show that T_i^k → 0 as k → ∞ for i = 1, 2, 3, 4, 5, so that we can conclude ‖Aη − f‖_{L²(Ω)} = 0, i.e., η solves Au = f.

T₁^k → 0: Use the fact that (T₁^k)² = ‖C_k φ_k − F_k^δ‖²_{r_k} and the result of Lemma 2.1.
T₂^k → 0: Using (2.2) and (2.6), we get

    T₂^k ≤ ‖A_k‖ ‖φ_k‖_{r_k} ≤ 2^n r_{k1} r_{k2} ··· r_{kn} ‖k‖_∞ · (1/√(2^n)) ‖E_{r_k} φ_k‖_X̃ → 0  as k → ∞,

because ‖E_{r_k} φ_k‖_X̃ = ‖φ̃_k‖_X̃ is bounded (Lemma 3.1).

T₃^k → 0: Notice that

    T₃^k = ‖Ã_k η − B_k T_k φ_k‖_{r_k} = ‖Ã_k η − B_k η_k‖_{r_k} ≤ ‖Ã_k(η − η_k)‖_{r_k} + ‖(Ã_k − B_k) η_k‖_{r_k}.

For the first term, we have

    ‖Ã_k(η − η_k)‖²_{r_k} = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} |Ã_k(η − η_k)(x)(ρ)|² dρ dx
    = ∫_{[r_k,1−r_k]} |A(η − η_k)(x)|² dx
    ≤ ‖A(η − η_k)‖²_{L²(Ω)} → 0,

as k → ∞ (from the compactness of A on L²(Ω) and Lemma 3.1). For the second term, we have, for ρ ∈ [−r_k, r_k] and x ∈ [r_k, 1 − r_k],

    |(Ã_k − B_k) η_k(x)(ρ)|² = |Aη_k(x) − B_k η_k(x)(ρ)|²
    = | ∫_Ω k(x, s) η_k(s) ds − ∫_{Ω∖B(x,r_k)} k(x + ρ, s) η_k(s) ds |²
    = | ∫_Ω (k(x, s) − k(x + ρ, s)) η_k(s) ds + ∫_{B(x,r_k)} k(x + ρ, s) η_k(s) ds |²
    ≤ 2 | ∫_Ω (k(x, s) − k(x + ρ, s)) η_k(s) ds |² + 2 | ∫_{B(x,r_k)} k(x + ρ, s) η_k(s) ds |²
    ≤ 2 ∫_Ω |k(x, s) − k(x + ρ, s)|² ds ∫_Ω η_k²(s) ds + 2 ∫_{B(x,r_k)} k²(x + ρ, s) ds ∫_{B(x,r_k)} η_k²(s) ds
    ≤ 2 ∫_Ω |L_k(s)|² ‖ρ‖^{2m_k} ds ‖η_k‖² + 2 ‖k‖²_∞ · 2^n r_{k1} r_{k2} ··· r_{kn} ‖η_k‖²
    = 2 ‖η_k‖² ( ‖ρ‖^{2m_k} ‖L_k‖² + ‖k‖²_∞ · 2^n r_{k1} r_{k2} ··· r_{kn} ),

and thus

    ‖(Ã_k − B_k) η_k‖²_{r_k} = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} |(Ã_k − B_k) η_k(x)(ρ)|² dρ dx
    ≤ ‖η_k‖² [ 2 ‖L_k‖² ‖r_k‖^{2m_k} + 2^{n+1} r_{k1} r_{k2} ··· r_{kn} ‖k‖²_∞ ].

This shows ‖(Ã_k − B_k) η_k‖²_{r_k} → 0 as k → ∞, since ‖r_k‖ → 0 and

    ‖η_k‖ = ‖T_k φ_k‖ ≤ ‖T_k‖ ‖φ_k‖_{r_k} ≤ √(2^n) ‖T‖ ‖φ_k‖_{r_k} = ‖T‖ ‖φ̃_k‖_X̃  (using (2.11))

is bounded (Lemma 3.1).

T₄^k → 0: (T₄^k)² ≤ ‖f^{δ_k} − f‖²_∞ ≤ δ_k².

T₅^k → 0:

    (T₅^k)² = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} |F_k(x)(ρ) − F̃_k(x)(ρ)|² dρ dx
    = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} |f(x + ρ) − f(x)|² dρ dx
    = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} | ∫_Ω (k(x + ρ, s) − k(x, s)) ū(s) ds |² dρ dx
    ≤ (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} ( ∫_Ω |k(x + ρ, s) − k(x, s)|² ds ) ( ∫_Ω |ū(s)|² ds ) dρ dx
    ≤ (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} ‖L_k‖² ‖ρ‖^{2m_k} ‖ū‖² dρ dx
    ≤ ‖L_k‖² ‖ū‖² ‖r_k‖^{2m_k} → 0,

as k → ∞, because ‖r_k‖ → 0. This completes the proof of Lemma 3.2.

Proof of Theorem 3.1. From Lemmas 3.1 and 3.2 we have that η_k ⇀ η, for η ∈ L²(Ω) a solution of Au = f, and φ̃_k ≡ E_k φ_k ⇀ φ̃, for φ̃ ∈ X̃ a solution of Ãψ ≡ ATψ = f, ψ ∈ X̃. In both cases the convergence is subsequential (the indices have been relabelled). We will first show that φ̃ = Ũ, where Ũ ∈ X̃ is defined by

    Ũ(x)(p) = ū(x) γ₁(p) / |γ₁|²_X,  a.a. p ∈ [−Δ, Δ]^n, x ∈ Ω.   (3.4)

From TŨ(x) = ℓ( ū(x) γ₁(·) / |γ₁|²_X ) = ū(x) for a.a. x ∈ Ω, we know that Ũ solves the equation Ãψ = ATψ = f. In fact, we will show that Ũ ∈ (ker Ã)^⊥ ⊆ X̃, i.e., Ũ is the minimum norm solution of Ãψ = f. Indeed, let ψ ∈ ker Ã; then

    (Ũ, ψ)_X̃ = ∫_Ω (Ũ(x), ψ(x))_X dx
    = ∫_Ω ∫_{[−Δ,Δ]^n} ū(x) (γ₁(p) / |γ₁|²_X) ψ(x)(p) dp dx
    = (1 / |γ₁|²_X) ∫_Ω ū(x) ℓ(ψ(x)) dx
    = (1 / |γ₁|²_X) (ū, Tψ)_{L²(Ω)}
    = 0,

because ψ ∈ ker Ã implies Tψ ∈ ker A, while ū ∈ (ker A)^⊥ (ū being the minimum norm solution of (1.1)). This shows Ũ ∈ (ker Ã)^⊥.

Next we show that ‖φ̃‖_X̃ ≤ ‖Ũ‖_X̃. In fact, since φ̃_k = E_k φ_k, it follows that

    ‖φ̃_k‖²_X̃ = 2^n ‖φ_k‖²_{r_k}
    ≤ (2^n / α_{k,min}) [ ‖C_k φ_k − F_k^δ‖²_{r_k} + ‖φ_k‖²_{r_k,α_k} ]
    ≤ (2^n / α_{k,min}) [ ‖C_k V_k − F_k^δ‖²_{r_k} + ‖V_k‖²_{r_k,α_k} ],   (3.5)

where V_k ∈ X_k is defined as

    V_k(x)(ρ) = Ũ(x)(ρ₁/r_{k1}, ρ₂/r_{k2}, ..., ρ_n/r_{kn}),  x ∈ (r_k, 1 − r_k) and ρ ∈ (−r_k, r_k).

Notice that by the definition of E_k we have

    E_k V_k(x)(p) = Ũ(x)(p) for x ∈ (r_k, 1 − r_k) and p ∈ (−Δ, Δ)^n,  and 0 otherwise,

so

    T_k V_k(x) = ū(x) for x ∈ (r_k, 1 − r_k),  and 0 otherwise.

We also have

    ‖V_k‖²_{r_k} = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ∫_{[−r_k,r_k]} ū²(x) |γ₁(ρ₁/r_{k1}, ..., ρ_n/r_{kn})|² / |γ₁|⁴_X dρ dx
    = (1 / (2^n r_{k1} r_{k2} ··· r_{kn})) ∫_{[r_k,1−r_k]} ū²(x) dx ∫_{[−1,1]^n} ( |γ₁(p)|² / |γ₁|⁴_X ) r_{k1} r_{k2} ··· r_{kn} dp
    ≤ (1 / (2^n |γ₁|²_X)) ∫_{[r_k,1−r_k]} ū²(x) dx.   (3.6)

Now consider the first term in (3.5):

    ‖C_k V_k − F_k^δ‖²_{r_k} = ‖A_k V_k + B_k T_k V_k − F_k^δ‖²_{r_k}
    ≤ 4 ‖A_k(V_k − U_k)‖²_{r_k} + 4 ‖A_k U_k + B_k T_k V_k − F_k‖²_{r_k} + 2 ‖F_k − F_k^δ‖²_{r_k},

where ‖A_k U_k + B_k T_k V_k − F_k‖²_{r_k} = 0 (using the proof of Lemma 2.1 and the fact that T_k V_k = ū for x ∈ (r_k, 1 − r_k)), ‖F_k − F_k^δ‖²_{r_k} ≤ δ_k², and

    ‖A_k(V_k − U_k)‖²_{r_k} ≤ 2 ‖A_k‖² ( ‖V_k‖²_{r_k} + ‖U_k‖²_{r_k} ) ≤ C r_{k1}² r_{k2}² ··· r_{kn}²,

because of (2.4), (2.6) and the fact that ‖V_k‖²_{r_k} ≤ (1 / (2^n |γ₁|²_X)) ‖ū‖²_{L²(Ω)} from (3.6). Therefore, for the first term in (3.5) we have

    ‖C_k V_k − F_k^δ‖²_{r_k} ≤ C ( r_{k1}² r_{k2}² ··· r_{kn}² + δ_k² ).

For the second term in (3.5) we have

    ‖V_k‖²_{r_k,α_k} ≤ ‖α_k‖_∞ ‖V_k‖²_{r_k} ≤ ( ‖α_k‖_∞ / (2^n |γ₁|²_X) ) ∫_{[r_k,1−r_k]} ū²(x) dx.

Thus, continuing from (3.5), we get

    ‖φ̃_k‖²_X̃ ≤ (2^n C / α_{k,min}) ( r_{k1}² r_{k2}² ··· r_{kn}² + δ_k² ) + ( ‖α_k‖_∞ / α_{k,min} ) (1 / |γ₁|²_X) ∫_{[r_k,1−r_k]} ū²(x) dx.

It follows that

    ‖φ̃‖²_X̃ ≤ lim inf_k ‖φ̃_k‖²_X̃                          (3.7)
    ≤ lim sup_k [ (2^n C / α_{k,min}) ( r_{k1}² r_{k2}² ··· r_{kn}² + δ_k² ) + ( ‖α_k‖_∞ / α_{k,min} ) (1 / |γ₁|²_X) ∫_{[r_k,1−r_k]} ū²(x) dx ]
    = ‖ū‖²_{L²(Ω)} / |γ₁|²_X                               (3.8)
    = ‖Ũ‖²_X̃                                              (3.9)

under the assumptions of the theorem, so that ‖φ̃‖_X̃ ≤ ‖Ũ‖_X̃. Since φ̃ solves Ãψ = f and Ũ is the minimum norm solution of that equation, we conclude that φ̃ = Ũ. The chain (3.7)-(3.9) then also shows that ‖φ̃_k‖_X̃ → ‖φ̃‖_X̃ as k → ∞. This fact, combined with the weak convergence of φ̃_k to φ̃ in the Hilbert space X̃, implies strong convergence; thus

    η_k = T φ̃_k → T φ̃ = T Ũ = ū,  as k → ∞,

so that η = ū. Standard arguments can be used to extend the subsequential convergence just proved to convergence of the full sequence. This completes the proof of Theorem 3.1.

4 Numerical Implementation

In this section we develop a discrete implementation of the local regularization method presented in the previous sections, and we study two numerical examples. The algorithms and examples presented here are for the case of R¹; numerical implementation in the case of R^n will be the subject of a future study. It is also worth noting that the algorithm here differs from the one in [11], and, as we will see, the new algorithm performs better in the numerical examples.

4.1 The Discrete Local Regularization Problem

Let Δx = 1/N for fixed N = 1, 2, ..., and define

    x_j = j Δx,  j = 0, 1, 2, ..., N,
    ρ_l = l Δx,  l = −N, −N+1, ..., 0, 1, 2, ..., N,
    χ_j(s) = 1 for s ∈ [x_{j−1}, x_j],  and 0 otherwise.

The regularization parameter α(x) is replaced with a discrete approximation

    α(x) = Σ_{j=1}^N α_j χ_j(x),  x ∈ [0, 1].

The other regularization parameter r is chosen as r ∈ (0, 1/2) such that r = i_r Δx, where 1 ≤ i_r < N/2. We also need a discrete approximation of the space X_r; we take

    X_r^N = { φ : φ(x)(ρ) = Σ_{j=i_r}^{N−i_r−1} Σ_{l=−i_r}^{i_r−1} c_{jl} χ_j(x) χ_l(ρ),  c_{jl} ∈ R },

where χ_l(ρ) denotes the characteristic function of the corresponding subinterval of the ρ-grid. With ℓ chosen as in Section 2.5, the discrete operator T_r then picks out the central coefficients,

    T_r φ(x) = Σ_{j=i_r}^{N−i_r−1} c_{j0} χ_j(x),          (4.2)

for φ ∈ X_r^N. To evaluate A_r on X_r^N, fix j and l such that i_r ≤ j ≤ N − i_r − 1 and −i_r ≤ l ≤ i_r − 1. Then

    A_r φ(x_j)(ρ_l) = ∫_{−r}^{r} k(x_j + ρ_l, x_j + s) φ(x_j)(s) ds
    = Σ_{q=−i_r}^{i_r−1} ∫_{ρ_{q−1}}^{ρ_q} k(x_j + ρ_l, x_j + s) c_{jq} ds
    = Σ_{q=−i_r}^{i_r−1} c_{jq} ∫_{ρ_{q−1}+x_j}^{ρ_q+x_j} k(x_j + ρ_l, t) dt
    = Σ_{q=−i_r}^{i_r−1} c_{jq} A_{j+l, j+q},              (4.3)

where A_{i,j} is defined as

    A_{i,j} = ∫_{ρ_{j−1}}^{ρ_j} k(t_i, s) ds,  with t_i ≡ i Δx.   (4.4)
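In practice the entries (4.4) are simple one-dimensional integrals. The following sketch, not taken from the thesis, computes them with a single midpoint rule; a finer quadrature could be substituted without changing anything else.

```python
def entry_A(kernel, i, j, dx):
    """A_{i,j} ~ integral of k(t_i, s) over [rho_{j-1}, rho_j], with t_i = i*dx.
    Midpoint rule on one subinterval; replace with a finer rule if desired."""
    t_i = i * dx
    s_mid = (j - 0.5) * dx          # midpoint of [(j-1)*dx, j*dx]
    return dx * kernel(t_i, s_mid)
```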
For each j satisfying i_r ≤ j ≤ N − i_r − 1, we can define a matrix A_j and a vector c_j by

    A_j = ( A_{j+l₁, j+l₂} )_{−i_r ≤ l₁, l₂ ≤ i_r−1}
        = [ A_{j−i_r, j−i_r}      A_{j−i_r, j−i_r+1}      ···   A_{j−i_r, j+i_r−1}
            A_{j−i_r+1, j−i_r}    A_{j−i_r+1, j−i_r+1}    ···   A_{j−i_r+1, j+i_r−1}
            ···
            A_{j+i_r−1, j−i_r}    A_{j+i_r−1, j−i_r+1}    ···   A_{j+i_r−1, j+i_r−1} ]  ∈ R^{2i_r × 2i_r},

    c_j = ( c_{j,−i_r}, c_{j,−i_r+1}, ..., c_{j,i_r−1} )ᵀ ∈ R^{2i_r}.

Then we have, for i_r ≤ j ≤ N − i_r − 1 and −i_r ≤ l ≤ i_r − 1,

    A_r φ(x_j)(ρ_l) = ( A_j c_j )_l.                       (4.5)

Now we calculate B_r T_r : X_r^N → X_r^N. According to the definition of B_r and (4.2), we have, for i_r ≤ j ≤ N − i_r − 1 and −i_r ≤ l ≤ i_r − 1,

    B_r T_r φ(x_j)(ρ_l) = ( ∫_0^{x_j − r} + ∫_{x_j + r}^1 ) k(x_j + ρ_l, s) ( Σ_{u=i_r}^{N−i_r−1} c_{u0} χ_u(s) ) ds
    = Σ_{u=i_r}^{N−i_r−1} ∫_0^{x_{j−i_r}} k(x_j + ρ_l, s) c_{u0} χ_u(s) ds + Σ_{u=i_r}^{N−i_r−1} ∫_{x_{j+i_r}}^1 k(x_j + ρ_l, s) c_{u0} χ_u(s) ds.   (4.6)

Case 1: If i_r ≤ j < 2i_r, then 0 ≤ j − i_r ≤ i_r − 1, hence χ_u(s) = 0 for s ∈ [0, x_{j−i_r}] and i_r ≤ u ≤ N − i_r − 1. Also, from j < 2i_r ≤ N − 2i_r − 1 we have j + i_r < N − i_r − 1. So from (4.6) we get

    B_r T_r φ(x_j)(ρ_l) = Σ_{u=i_r}^{N−i_r−1} ∫_{x_{j+i_r}}^1 k(x_j + ρ_l, s) c_{u0} χ_u(s) ds
    = Σ_{u=j+i_r+1}^{N−i_r−1} ∫_{x_{u−1}}^{x_u} k(x_j + ρ_l, s) c_{u0} ds
    = Σ_{u=j+i_r+1}^{N−i_r−1} c_{u0} A_{j+l, u}.           (4.7)

Case 2: If N − 2i_r − 1 ≤ j ≤ N − i_r − 1, then j + i_r > N − i_r − 1. This implies that for u = i_r, i_r+1, ..., N − i_r − 1 and s ∈ [x_{j+i_r}, 1] we have χ_u(s) = 0. So from (4.6) we get

    B_r T_r φ(x_j)(ρ_l) = Σ_{u=i_r}^{N−i_r−1} ∫_0^{x_{j−i_r}} k(x_j + ρ_l, s) c_{u0} χ_u(s) ds
    = Σ_{u=i_r}^{j−i_r} c_{u0} A_{j+l, u}.                 (4.8)

Case 3: For the remaining indices, 2i_r ≤ j < N − 2i_r − 1, both boundary pieces contribute and

    B_r T_r φ(x_j)(ρ_l) = Σ_{u=i_r}^{j−i_r} c_{u0} A_{j+l, u} + Σ_{u=j+i_r+1}^{N−i_r−1} c_{u0} A_{j+l, u}.

The discrete problem is then to minimize, over the coefficients {c_{jl}}, the discrete analogue of the functional in Problem P^δ_{r,α},

    ‖A_r φ + B_r T_r φ − F_r^{N,δ}‖²_r + ‖φ‖²_{r,α},

where F_r^{N,δ}(x_j)(ρ_l) = f^δ(x_j + ρ_l). In order to describe the relaxation type of minimization method we are going to use to solve the discrete regularization problem, it is necessary to introduce the following notation. Let ĉ denote the full set of current coefficients, and for i_r ≤ j ≤ N − i_r − 1 let

    H_j(c_j; ĉ) = Σ_{l=−i_r}^{i_r−1} | (A_j c_j)_l + B_r T_r φ(x_j)(ρ_l) − f^δ(x_j + ρ_l) |² + α_j (c_j · c_j),

where the term B_r T_r φ(x_j)(ρ_l) is computed from the central coefficients c_{u0} in ĉ as in (4.6)-(4.8). Fix m, where i_r ≤ m ≤ N − i_r − 1, and define J_m(c_m) = H_m(c_m; ĉ). For j ≠ m, where i_r ≤ j ≤ N − i_r − 1, notice that H_j(c_j; ĉ) depends on c_m only through the component c_{m0} of ĉ. So it is valid to define

    J̃_m(c_{m0}) = Σ_{j ≠ m, i_r ≤ j ≤ N−i_r−1} H_j(c_j; ĉ);

then

    (2r / (Δx)²) ( ‖A_r φ + B_r T_r φ − F_r^{N,δ}‖²_r + ‖φ‖²_{r,α} ) = J_m(c_m) + J̃_m(c_{m0}).

Using the notation J_m(c_m) and J̃_m(c_{m0}), we introduce the following iterative relaxation-type minimization algorithms for the discrete regularization problem.

Local Regularization Algorithm 1

1. Initialize vectors c₁, c₂, ..., c_N.
2. For m = i_r, i_r + 1, ..., N − i_r − 1:
   (a) Holding the previously determined values of c_j, j ≠ m, fixed, find β ∈ R^{2i_r} solving
       min { J_m(β) + J̃_m(β₀) : β = (β_{−i_r}, β_{−i_r+1}, ..., β_{i_r−1}) ∈ R^{2i_r} }.
   (b) Set c_m = β.
3. Go to step (2).

Local Regularization Algorithm 2

1. Initialize vectors c₁, c₂, ..., c_N.
2. For m = i_r, i_r + 1, ..., N − i_r − 1:
   (a) Holding the previously determined values of c_j, j ≠ m, fixed, find β ∈ R^{2i_r} solving
       min { J_m(β) : β = (β_{−i_r}, β_{−i_r+1}, ..., β_{i_r−1}) ∈ R^{2i_r} }.
   (b) Set c_m = β.
3. Go to step (2).
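For readers who want to experiment, here is a compact sketch of one sweep of Local Regularization Algorithm 2 in the one-dimensional setting. It is my own illustration, not code from the thesis: the data layout, the matrices B_rows, and the quadratic penalty α_m‖β‖² are assumptions made for concreteness. Each local minimization of J_m is a small linear least-squares problem and is solved here via its normal equations.

```python
import numpy as np

def relaxation_sweep(A, F, B_rows, alpha, c, ir):
    """One sweep of (a sketch of) Local Regularization Algorithm 2, 1-D case.

    A[m]      : (2*ir, 2*ir) local matrix A_m from (4.5)
    F[m]      : length 2*ir vector of data values f_delta(x_m + rho_l)
    B_rows[m] : (2*ir, M) matrix mapping the central coefficients c_{u,0}
                to the 'global' term B_r T_r phi(x_m)(rho_l), cf. (4.6)-(4.8)
    alpha[m]  : local regularization parameter alpha_m
    c         : (M, 2*ir) array of current coefficients c_{m,l}; updated in place
    ir        : the integer i_r (column index ir corresponds to l = 0)
    """
    M = c.shape[0]
    for m in range(M):
        b = B_rows[m] @ c[:, ir]              # global part, frozen at current values
        # minimize J_m(beta) = ||A_m beta + b - F_m||^2 + alpha_m * ||beta||^2
        G = A[m].T @ A[m] + alpha[m] * np.eye(2 * ir)
        c[m] = np.linalg.solve(G, A[m].T @ (F[m] - b))
    return c
```

Repeating such sweeps until the coefficients stop changing corresponds to step (3) of the algorithm.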
4.2 Numerical Examples

We study two numerical examples of a one-dimensional image processing problem using Local Regularization Algorithm 2 presented above. The kernel function k is

    k(x, s) = γ exp(−γ (x − s)²),                          (4.16)

and the corresponding operator A represents the blurring operator. In our examples we choose γ = 5. A true solution ū was pre-selected, and the noisy data f^δ used in the regularization process is a random perturbation of f = Aū, where f^δ differs from f by 1% relative error. All calculations were conducted using Matlab version 6.1.

Example 4.1 In the first example, our true solution is ū(x) = x⁵(1 − x). The classical Tikhonov regularization generates the following result:

[Figure: full Tikhonov solution (for comparison only)]

Results from various iterations of our local regularization algorithm are shown below, with different selections of the regularization parameters α and r.

Local regularization results with N = 50, α = 0.01 and r = 0.1:

[Figures: locally regularized solutions; N = 50, r = 0.1, max α = 0.01, iterations 1 through 5]

Local regularization results with N = 500, α = 0.02 and r = 0.1:

[Figures: locally regularized solutions; N = 500, r = 0.1, max α = 0.02, iterations 1 through 6]

Local regularization results with N = 500, α = 0.001 and r = 0.05:

[Figures: locally regularized solutions; N = 500, r = 0.05, max α = 0.001, iterations 1 through 7]

Example 4.2 In this example, our true solution is

    ū(x) = 0             for x ∈ [0, 0.3],
           10(x − 0.3)   for x ∈ [0.3, 0.4],
           1              for x ∈ [0.4, 0.5],
           10(0.6 − x)   for x ∈ [0.5, 0.6],
           0              for x ∈ [0.6, 1].

In this example we use r = 0.02. This time we try different choices for α, both constant values and a variable α(x).
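As a small aside (my own illustration, not from the thesis), the Example 4.2 true solution and one possible spatially varying α can be set up on a grid as follows; the particular α profile shown here is purely illustrative.

```python
import numpy as np

def u_true(x):
    """Hat-shaped true solution of Example 4.2."""
    return np.clip(np.minimum(10.0 * (x - 0.3), 10.0 * (0.6 - x)), 0.0, 1.0)

N = 50
x = np.linspace(0.0, 1.0, N + 1)
# illustrative variable alpha: small where sharp features are expected, larger elsewhere
alpha = np.where((x > 0.25) & (x < 0.65), 1e-3, 1e-1)
```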
The classical Tikhonov regularization generates the following result:

[Figure: full Tikhonov solution (for comparison only)]

As the next results show, the performance of our local regularization algorithm is not as good when a constant value of α is used (in this case, α = 0.1):

[Figures: locally regularized solutions; N = 50, r = 0.02, α = 0.1, iterations 1 through 3]

Our first choice of a variable regularization parameter α(x) is:

[Figure: variable α(x), first choice]

Local regularization generates the following results:

[Figures: locally regularized solutions; N = 50, r = 0.02, max α = 0.96, iterations 1 through 3]

In our second try, the regularization parameter α(x) is depicted in the following graph:

[Figure: variable α(x), second choice]

Results generated from our local regularization algorithm:

[Figures: locally regularized solutions; N = 50, r = 0.02, max α = 1.01, iterations 1 through 5]

Using the same α, with N = 500, the results are shown below:

[Figures: locally regularized solutions; N = 500, r = 0.02, max α = 1.01, iterations 1 through 4]

BIBLIOGRAPHY

[1] R. Acar and C. R. Vogel. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems, 10:1217–1229, 1994.
[2] A. Chambolle and P.-L. Lions. Image recovery via total variation minimization and related problems. Numer. Math., 72:167–188, 1997.

[3] D. C. Dobson and C. R. Vogel. Convergence of an iterative method for total variation denoising. SIAM J. Numerical Analysis, 34:1779–1791, 1997.

[4] David C. Dobson and Fadil Santosa. Recovery of blocky images from noisy and blurred data. SIAM J. Applied Math., 56:1181–1198, 1996.

[5] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.

[6] C. W. Groetsch. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind. Pitman, Boston, 1984.

[7] P. K. Lamm. Full convergence of sequential local regularization methods for Volterra inverse problems.

[8] P. K. Lamm. Future-sequential regularization methods for ill-posed Volterra equations: Applications to the inverse heat conduction problem. J. Math. Anal. Appl., 195:469–494, 1995.

[9] P. K. Lamm. On the local regularization of inverse problems of Volterra type. In Proc. 1995 ASME Design Engineering Technical Conferences, Parameter Identification Symposium, Boston, MA, September 1995.

[10] P. K. Lamm. Solution of ill-posed Volterra equations via variable-smoothing Tikhonov regularization. In H. W. Engl, A. K. Louis, and W. Rundell, editors, Inverse Problems in Geophysical Applications, pages 92–108. SIAM, 1997.

[11] P. K. Lamm. Variable-smoothing local regularization methods for first-kind integral equations. Inverse Problems, 19:195–216, 2003.

[12] P. K. Lamm and Lars Eldén. Numerical solution of first-kind Volterra equations by sequential Tikhonov regularization. SIAM J. Numerical Analysis, 34:1432–1450, 1997.

[13] P. K. Lamm and T. L. Scofield. Sequential predictor-corrector methods for the variable regularization of Volterra inverse problems. Inverse Problems, 16:373–399, 2000.

[14] P. K. Lamm and T. L. Scofield. Local regularization methods for the stabilization of ill-posed Volterra problems. Numer. Funct. Anal. Optim., 22:913–940, 2001.

[15] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268, 1992.